The following series implements fence and converts dma-buf and android sync to use it. The last two patches add support for polling to dma-buf, blocking until all fences are signaled.
I've dropped the extra patch to copy an export from the core, and instead use the public version of it. I've had to fix some fallout from the rebase; hopefully everything's clean now and ready for -next.

---

Maarten Lankhorst (6):
      fence: dma-buf cross-device synchronization (v17)
      seqno-fence: Hardware dma-buf implementation of fencing (v4)
      dma-buf: use reservation objects
      android: convert sync to fence api, v3
      reservation: add support for fences to enable cross-device synchronisation
      dma-buf: add poll support, v2
 Documentation/DocBook/device-drivers.tmpl      |    3
 drivers/base/Kconfig                           |    9
 drivers/base/Makefile                          |    2
 drivers/base/dma-buf.c                         |  123 +++
 drivers/base/fence.c                           |  465 +++++++++++++
 drivers/gpu/drm/drm_prime.c                    |    8
 drivers/gpu/drm/exynos/exynos_drm_dmabuf.c     |    2
 drivers/gpu/drm/i915/i915_gem_dmabuf.c         |    2
 drivers/gpu/drm/nouveau/nouveau_drm.c          |    1
 drivers/gpu/drm/nouveau/nouveau_gem.h          |    1
 drivers/gpu/drm/nouveau/nouveau_prime.c        |    7
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c      |    2
 drivers/gpu/drm/radeon/radeon_drv.c            |    2
 drivers/gpu/drm/radeon/radeon_prime.c          |    8
 drivers/gpu/drm/ttm/ttm_object.c               |    2
 drivers/media/v4l2-core/videobuf2-dma-contig.c |    2
 drivers/staging/android/Kconfig                |    1
 drivers/staging/android/Makefile               |    2
 drivers/staging/android/sw_sync.c              |    4
 drivers/staging/android/sync.c                 |  892 ++++++++----------------
 drivers/staging/android/sync.h                 |   80 +-
 drivers/staging/android/sync_debug.c           |  245 +++++++
 drivers/staging/android/trace/sync.h           |   12
 include/drm/drmP.h                             |    2
 include/linux/dma-buf.h                        |   21 -
 include/linux/fence.h                          |  329 +++++++++
 include/linux/reservation.h                    |   18
 include/linux/seqno-fence.h                    |  109 +++
 include/trace/events/fence.h                   |  125 +++
 29 files changed, 1822 insertions(+), 657 deletions(-)
 create mode 100644 drivers/base/fence.c
 create mode 100644 drivers/staging/android/sync_debug.c
 create mode 100644 include/linux/fence.h
 create mode 100644 include/linux/seqno-fence.h
 create mode 100644 include/trace/events/fence.h
A fence can be attached to a buffer which is being filled or consumed by hw, to allow userspace to pass the buffer to another device without waiting. For example, userspace can call the page_flip ioctl to display the next frame of graphics after kicking the GPU, but while the GPU is still rendering. The display device sharing the buffer with the GPU would attach a callback to get notified when the GPU's rendering-complete IRQ fires, to update the scan-out address of the display without having to wake up userspace.
A driver must allocate a fence context for each execution ring that can run in parallel. The function for this takes an argument specifying how many contexts to allocate:
  + fence_context_alloc()
A fence is a transient, one-shot deal. It is allocated and attached to one or more dma-bufs. When the one that attached it is done with the pending operation, it can signal the fence:
  + fence_signal()
For a rough approximation of whether a fence has fired, call:
  + fence_is_signaled()
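Roughly, a producer driver might tie these together as follows. This is only a sketch, not part of this patch: the my_* names, my_fence_ops, and the single-ring assumption are made up for illustration.

        /* sketch: one fence context for a single execution ring */
        static unsigned my_context;
        static unsigned my_seqno;
        static DEFINE_SPINLOCK(my_fence_lock);

        my_context = fence_context_alloc(1);

        /* allocate and initialize a fence for a queued operation */
        struct fence *f = kmalloc(sizeof(*f), GFP_KERNEL);
        __fence_init(f, &my_fence_ops, &my_fence_lock,
                     my_context, ++my_seqno);

        /* later, from the ring's completion interrupt handler */
        fence_signal(f);
        fence_put(f);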
The dma-buf-mgr handles tracking, and waiting on, the fences associated with a dma-buf.
The one pending on the fence can add an async callback:
  + fence_add_callback()
The callback can optionally be cancelled with:
  + fence_remove_callback()
To wait synchronously, optionally with a timeout:
  + fence_wait()
  + fence_wait_timeout()
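For instance, a consumer can either register a callback or block with a bounded timeout. A sketch only; my_cb_func and the 10ms timeout are illustrative:

        static void my_cb_func(struct fence *fence, struct fence_cb *cb)
        {
                /* called in irq context once the fence signals */
        }

        struct fence_cb my_cb;
        long ret;

        if (fence_add_callback(fence, &my_cb, my_cb_func) == -ENOENT)
                ; /* fence already signaled, callback was not installed */

        /* or: interruptible wait, at most 10ms */
        ret = fence_wait_timeout(fence, true, msecs_to_jiffies(10));
        if (ret == 0)
                ; /* timed out */
        else if (ret < 0)
                ; /* interrupted, or a driver-specific error */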
When emitting a fence, call:
  + trace_fence_emit()
To annotate that a fence is blocking on another fence, call:
  + trace_fence_annotate_wait_on(fence, on_fence)
A default software-only implementation is provided, which can be used by drivers attaching a fence to a buffer when they have no other means for hw sync. But a memory-backed fence is also envisioned, because it is common that GPUs can write to, or poll on, some memory location for synchronization. For example:
  fence = custom_get_fence(...);
  if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
        dma_buf *fence_buf = seqno_fence->sync_buf;
        get_dma_buf(fence_buf);

        ... tell the hw the memory location to wait ...
        custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
  } else {
        /* fall-back to sw sync */
        fence_add_callback(fence, my_cb);
  }
On SoC platforms, if some other hw mechanism is provided for synchronizing between IP blocks, it could be supported as an alternate implementation with its own fence ops in a similar way.
The enable_signaling callback is used to provide sw signaling in case a cpu waiter is requested, or no compatible hardware signaling could be used.
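A minimal fence_ops built around enable_signaling could look roughly like this. This is a sketch under assumptions: the my_hw_* helpers are hypothetical driver functions, not part of this patch.

        static bool my_fence_enable_signaling(struct fence *fence)
        {
                /*
                 * hw has not passed the fence yet: turn on the completion
                 * irq, whose handler calls fence_signal(). Returning false
                 * tells the core the fence already passed (or signaling
                 * could not be enabled), and the core signals it for us.
                 */
                if (my_hw_fence_passed(fence))
                        return false;
                return my_hw_enable_completion_irq(fence);
        }

        static const struct fence_ops my_fence_ops = {
                .get_driver_name   = my_get_driver_name,
                .get_timeline_name = my_get_timeline_name,
                .enable_signaling  = my_fence_enable_signaling,
                .wait              = fence_default_wait,
        };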
The intention is to provide a userspace interface (presumably via eventfd) later, to be used in conjunction with dma-buf's mmap support for sw access to buffers (or for userspace apps that would prefer to do their own synchronization).
v1: Original
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
    that dma-fence didn't need to care about the sw->hw signaling path
    (it can be handled same as sw->sw case), and therefore the fence->ops
    can be simplified and more handled in the core. So remove the signal,
    add_callback, cancel_callback, and wait ops, and replace with a simple
    enable_signaling() op which can be used to inform a fence supporting
    hw->hw signaling that one or more devices which do not support hw
    signaling are waiting (and therefore it should enable an irq or do
    whatever is necessary in order that the CPU is notified when the
    fence is passed).
v3: Fix locking fail in attach_fence() and get_fence().
v4: Remove tie-in w/ dma-buf. After discussion w/ danvet and mlankhorst
    we decided that we need to be able to attach one fence to N dma-buf's,
    so using the list_head in dma-fence struct would be problematic.
v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and
    dma-buf-manager.
v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some
    comments about checking if fence fired or not. This is broken by
    design. waitqueue_active during destruction is now fatal, since the
    signaller should be holding a reference in enable_signalling until
    it signalled the fence. Pass the original dma_fence_cb along, and
    call __remove_wait in the dma_fence_callback handler, so that no
    cleanup needs to be performed.
v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
    fence wasn't signaled yet, for example for hardware fences that may
    choose to signal blindly.
v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
    header and fixed include mess. dma-fence.h now includes dma-buf.h.
    All members are now initialized, so kmalloc can be used for
    allocating a dma-fence. More documentation added.
v9: Change compiler bitfields to flags, change return type of
    enable_signaling to bool. Rework dma_fence_wait. Added
    dma_fence_is_signaled and dma_fence_wait_timeout. s/dma// and change
    exports to non GPL. Added fence_is_signaled and
    fence_enable_sw_signaling calls, add ability to override default
    wait operation.
v10: Remove event_queue, use a custom list, export try_to_wake_up from
    scheduler. Remove fence lock and use a global spinlock instead;
    this should hopefully remove all the locking headaches I was having
    on trying to implement this. enable_signaling is called with this
    lock held.
v11: Use atomic ops for flags, lifting the need for some
    spin_lock_irqsaves. However I kept the guarantee that after
    fence_signal returns, it is guaranteed that enable_signaling has
    either been called to completion, or will not be called any more.
Add contexts and seqno to the base fence implementation. This allows you to wait for fewer fences, by testing for seqno + signaled, and then only waiting on the later fence.
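For two fences from the same context, that boils down to something like this (a sketch; f1 and f2 are two fences from one context):

        /* wait only on whichever same-context fence would signal last */
        struct fence *later = fence_later(f1, f2);

        if (later)
                fence_wait(later, false);
        /* NULL means both fences have already signaled */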
Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier. A CONFIG_DEBUG_FENCE option will be added to turn off the FENCE_TRACE spam, and another runtime option can turn it off at runtime.

v12: Add CONFIG_FENCE_TRACE. Add missing documentation for the
    fence->context and fence->seqno members.
v13: Fixup CONFIG_FENCE_TRACE kconfig description. Move
    fence_context_alloc to fence. Simplify fence_later. Kill priv member
    of fence_cb.
v14: Remove priv argument from fence_add_callback, oops!
v15: Remove priv from documentation. Explicitly include linux/atomic.h.
v16: Add trace events. Import changes required by android syncpoints.
v17: Use wake_up_state instead of try_to_wake_up. (Colin Cross)
    Fix up commit description for seqno_fence. (Rob Clark)
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 Documentation/DocBook/device-drivers.tmpl |   2
 drivers/base/Kconfig                      |   9 +
 drivers/base/Makefile                     |   2
 drivers/base/fence.c                      | 415 +++++++++++++++++++++++++++++
 include/linux/fence.h                     | 329 +++++++++++++++++++++++
 include/trace/events/fence.h              | 125 +++++++++
 6 files changed, 881 insertions(+), 1 deletion(-)
 create mode 100644 drivers/base/fence.c
 create mode 100644 include/linux/fence.h
 create mode 100644 include/trace/events/fence.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index f5170082bdb3..7a0c9ddb4818 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -129,6 +129,8 @@ X!Edrivers/base/interface.c
      </sect1>
      <sect1><title>Device Drivers DMA Management</title>
 !Edrivers/base/dma-buf.c
+!Edrivers/base/fence.c
+!Iinclude/linux/fence.h
 !Edrivers/base/reservation.c
 !Iinclude/linux/reservation.h
 !Edrivers/base/dma-coherent.c
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index ec36e7772e57..b50ad30151ae 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -200,6 +200,15 @@ config DMA_SHARED_BUFFER
 	  APIs extension; the file's descriptor can then be passed on to other
 	  driver.
 
+config FENCE_TRACE
+	bool "Enable verbose FENCE_TRACE messages"
+	depends on DMA_SHARED_BUFFER
+	help
+	  Enable the FENCE_TRACE printks. This will add extra
+	  spam to the console log, but will make it easier to diagnose
+	  lockup related problems for dma-buffers shared across multiple
+	  devices.
+
 config DMA_CMA
 	bool "DMA Contiguous Memory Allocator"
 	depends on HAVE_DMA_CONTIGUOUS && CMA
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 04b314e0fa51..eb4864aee073 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_DMA_CMA) += dma-contiguous.o
 obj-y			+= power/
 obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o fence.o reservation.o
 obj-$(CONFIG_ISA)	+= isa.o
 obj-$(CONFIG_FW_LOADER)	+= firmware_class.o
 obj-$(CONFIG_NUMA)	+= node.o
diff --git a/drivers/base/fence.c b/drivers/base/fence.c
new file mode 100644
index 000000000000..12df2bf62034
--- /dev/null
+++ b/drivers/base/fence.c
@@ -0,0 +1,415 @@
+/*
+ * Fence mechanism for dma-buf and to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/slab.h>
+#include <linux/export.h>
+#include <linux/atomic.h>
+#include <linux/fence.h>
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/fence.h>
+
+EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
+EXPORT_TRACEPOINT_SYMBOL(fence_emit);
+
+/**
+ * fence context counter: each execution context should have its own
+ * fence context, this allows checking if fences belong to the same
+ * context or not. One device can have multiple separate contexts,
+ * and they're used if some engine can run independently of another.
+ */
+static atomic_t fence_context_counter = ATOMIC_INIT(0);
+
+/**
+ * fence_context_alloc - allocate an array of fence contexts
+ * @num:	[in]	amount of contexts to allocate
+ *
+ * This function will return the first index of the number of fences allocated.
+ * The fence context is used for setting fence->context to a unique number.
+ */
+unsigned fence_context_alloc(unsigned num)
+{
+	BUG_ON(!num);
+	return atomic_add_return(num, &fence_context_counter) - num;
+}
+EXPORT_SYMBOL(fence_context_alloc);
+
+int __fence_signal(struct fence *fence)
+{
+	struct fence_cb *cur, *tmp;
+	int ret = 0;
+
+	if (WARN_ON(!fence))
+		return -EINVAL;
+
+	if (!ktime_to_ns(fence->timestamp)) {
+		fence->timestamp = ktime_get();
+		smp_mb__before_clear_bit();
+	}
+
+	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		ret = -EINVAL;
+
+		/*
+		 * we might have raced with the unlocked fence_signal,
+		 * still run through all callbacks
+		 */
+	} else
+		trace_fence_signaled(fence);
+
+	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+		list_del_init(&cur->node);
+		cur->func(fence, cur);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(__fence_signal);
+
+/**
+ * fence_signal - signal completion of a fence
+ * @fence: the fence to signal
+ *
+ * Signal completion for software callbacks on a fence, this will unblock
+ * fence_wait() calls and run all the callbacks added with
+ * fence_add_callback(). Can be called multiple times, but since a fence
+ * can only go from unsignaled to signaled state, it will only be effective
+ * the first time.
+ */
+int fence_signal(struct fence *fence)
+{
+	unsigned long flags;
+
+	if (!fence)
+		return -EINVAL;
+
+	if (!ktime_to_ns(fence->timestamp)) {
+		fence->timestamp = ktime_get();
+		smp_mb__before_clear_bit();
+	}
+
+	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return -EINVAL;
+
+	trace_fence_signaled(fence);
+
+	if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
+		struct fence_cb *cur, *tmp;
+
+		spin_lock_irqsave(fence->lock, flags);
+		list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+			list_del_init(&cur->node);
+			cur->func(fence, cur);
+		}
+		spin_unlock_irqrestore(fence->lock, flags);
+	}
+	return 0;
+}
+EXPORT_SYMBOL(fence_signal);
+
+/**
+ * fence_wait_timeout - sleep until the fence gets signaled
+ * or until timeout elapses
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ *
+ * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
+ * remaining timeout in jiffies on success. Other error values may be
+ * returned on custom implementations.
+ *
+ * Performs a synchronous wait on this fence. It is assumed the caller
+ * directly or indirectly (buf-mgr between reservation and committing)
+ * holds a reference to the fence, otherwise the fence might be
+ * freed before return, resulting in undefined behavior.
+ */
+long
+fence_wait_timeout(struct fence *fence, bool intr, signed long timeout)
+{
+	unsigned long ret;
+
+	if (WARN_ON(timeout < 0))
+		return -EINVAL;
+
+	trace_fence_wait_start(fence);
+	ret = fence->ops->wait(fence, intr, timeout);
+	trace_fence_wait_end(fence);
+	return ret;
+}
+EXPORT_SYMBOL(fence_wait_timeout);
+
+void release_fence(struct kref *kref)
+{
+	struct fence *fence =
+			container_of(kref, struct fence, refcount);
+
+	trace_fence_destroy(fence);
+
+	BUG_ON(!list_empty(&fence->cb_list));
+
+	if (fence->ops->release)
+		fence->ops->release(fence);
+	else
+		kfree(fence);
+}
+EXPORT_SYMBOL(release_fence);
+
+/**
+ * fence_enable_sw_signaling - enable signaling on fence
+ * @fence:	[in]	the fence to enable
+ *
+ * this will request for sw signaling to be enabled, to make the fence
+ * complete as soon as possible
+ */
+void fence_enable_sw_signaling(struct fence *fence)
+{
+	unsigned long flags;
+
+	if (!test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags) &&
+	    !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		trace_fence_enable_signal(fence);
+
+		spin_lock_irqsave(fence->lock, flags);
+
+		if (!fence->ops->enable_signaling(fence))
+			__fence_signal(fence);
+
+		spin_unlock_irqrestore(fence->lock, flags);
+	}
+}
+EXPORT_SYMBOL(fence_enable_sw_signaling);
+
+/**
+ * fence_add_callback - add a callback to be called when the fence
+ * is signaled
+ * @fence:	[in]	the fence to wait on
+ * @cb:	[in]	the callback to register
+ * @func:	[in]	the function to call
+ *
+ * cb will be initialized by fence_add_callback, no initialization
+ * by the caller is required. Any number of callbacks can be registered
+ * to a fence, but a callback can only be registered to one fence at a time.
+ *
+ * Note that the callback can be called from an atomic context. If
+ * fence is already signaled, this function will return -ENOENT (and
+ * *not* call the callback)
+ *
+ * Add a software callback to the fence. Same restrictions apply to
+ * refcount as it does to fence_wait, however the caller doesn't need to
+ * keep a refcount to fence afterwards: when software access is enabled,
+ * the creator of the fence is required to keep the fence alive until
+ * after it signals with fence_signal. The callback itself can be called
+ * from irq context.
+ *
+ */
+int fence_add_callback(struct fence *fence, struct fence_cb *cb,
+		       fence_func_t func)
+{
+	unsigned long flags;
+	int ret = 0;
+	bool was_set;
+
+	if (WARN_ON(!fence || !func))
+		return -EINVAL;
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		INIT_LIST_HEAD(&cb->node);
+		return -ENOENT;
+	}
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		ret = -ENOENT;
+	else if (!was_set) {
+		trace_fence_enable_signal(fence);
+
+		if (!fence->ops->enable_signaling(fence)) {
+			__fence_signal(fence);
+			ret = -ENOENT;
+		}
+	}
+
+	if (!ret) {
+		cb->func = func;
+		list_add_tail(&cb->node, &fence->cb_list);
+	} else
+		INIT_LIST_HEAD(&cb->node);
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(fence_add_callback);
+
+/**
+ * fence_remove_callback - remove a callback from the signaling list
+ * @fence:	[in]	the fence to wait on
+ * @cb:	[in]	the callback to remove
+ *
+ * Remove a previously queued callback from the fence. This function returns
+ * true if the callback is successfully removed, or false if the fence has
+ * already been signaled.
+ *
+ * *WARNING*:
+ * Cancelling a callback should only be done if you really know what you're
+ * doing, since deadlocks and race conditions could occur all too easily. For
+ * this reason, it should only ever be done on hardware lockup recovery,
+ * with a reference held to the fence.
+ */
+bool
+fence_remove_callback(struct fence *fence, struct fence_cb *cb)
+{
+	unsigned long flags;
+	bool ret;
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	ret = !list_empty(&cb->node);
+	if (ret)
+		list_del_init(&cb->node);
+
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(fence_remove_callback);
+
+struct default_wait_cb {
+	struct fence_cb base;
+	struct task_struct *task;
+};
+
+static void
+fence_default_wait_cb(struct fence *fence, struct fence_cb *cb)
+{
+	struct default_wait_cb *wait =
+		container_of(cb, struct default_wait_cb, base);
+
+	wake_up_state(wait->task, TASK_NORMAL);
+}
+
+/**
+ * fence_default_wait - default sleep until the fence gets signaled
+ * or until timeout elapses
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ *
+ * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
+ * remaining timeout in jiffies on success.
+ */
+long
+fence_default_wait(struct fence *fence, bool intr, signed long timeout)
+{
+	struct default_wait_cb cb;
+	unsigned long flags;
+	long ret = timeout;
+	bool was_set;
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return timeout;
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	if (intr && signal_pending(current)) {
+		ret = -ERESTARTSYS;
+		goto out;
+	}
+
+	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		goto out;
+
+	if (!was_set) {
+		trace_fence_enable_signal(fence);
+
+		if (!fence->ops->enable_signaling(fence)) {
+			__fence_signal(fence);
+			goto out;
+		}
+	}
+
+	cb.base.func = fence_default_wait_cb;
+	cb.task = current;
+	list_add(&cb.base.node, &fence->cb_list);
+
+	while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
+		if (intr)
+			__set_current_state(TASK_INTERRUPTIBLE);
+		else
+			__set_current_state(TASK_UNINTERRUPTIBLE);
+		spin_unlock_irqrestore(fence->lock, flags);
+
+		ret = schedule_timeout(ret);
+
+		spin_lock_irqsave(fence->lock, flags);
+		if (ret > 0 && intr && signal_pending(current))
+			ret = -ERESTARTSYS;
+	}
+
+	if (!list_empty(&cb.base.node))
+		list_del(&cb.base.node);
+	__set_current_state(TASK_RUNNING);
+
+out:
+	spin_unlock_irqrestore(fence->lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL(fence_default_wait);
+
+/**
+ * __fence_init - Initialize a custom fence.
+ * @fence:	[in]	the fence to initialize
+ * @ops:	[in]	the fence_ops for operations on this fence
+ * @lock:	[in]	the irqsafe spinlock to use for locking this fence
+ * @context:	[in]	the execution context this fence is run on
+ * @seqno:	[in]	a linear increasing sequence number for this context
+ *
+ * Initializes an allocated fence, the caller doesn't have to keep its
+ * refcount after committing with this fence, but it will need to hold a
+ * refcount again if fence_ops.enable_signaling gets called. This can
+ * be used for implementing other types of fence.
+ *
+ * context and seqno are used for easy comparison between fences, allowing
+ * to check which fence is later by simply using fence_later.
+ */
+void
+__fence_init(struct fence *fence, const struct fence_ops *ops,
+	     spinlock_t *lock, unsigned context, unsigned seqno)
+{
+	BUG_ON(!lock);
+	BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
+	       !ops->get_driver_name || !ops->get_timeline_name);
+
+	kref_init(&fence->refcount);
+	fence->ops = ops;
+	INIT_LIST_HEAD(&fence->cb_list);
+	fence->lock = lock;
+	fence->context = context;
+	fence->seqno = seqno;
+	fence->flags = 0UL;
+
+	trace_fence_init(fence);
+}
+EXPORT_SYMBOL(__fence_init);
diff --git a/include/linux/fence.h b/include/linux/fence.h
new file mode 100644
index 000000000000..a868eceb7cf5
--- /dev/null
+++ b/include/linux/fence.h
@@ -0,0 +1,329 @@
+/*
+ * Fence mechanism for dma-buf to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __LINUX_FENCE_H
+#define __LINUX_FENCE_H
+
+#include <linux/err.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <linux/bitops.h>
+#include <linux/kref.h>
+#include <linux/sched.h>
+#include <linux/printk.h>
+
+struct fence;
+struct fence_ops;
+struct fence_cb;
+
+/**
+ * struct fence - software synchronization primitive
+ * @refcount: refcount for this fence
+ * @ops: fence_ops associated with this fence
+ * @cb_list: list of all callbacks to call
+ * @lock: spin_lock_irqsave used for locking
+ * @context: execution context this fence belongs to, returned by
+ *           fence_context_alloc()
+ * @seqno: the sequence number of this fence inside the execution context,
+ * can be compared to decide which fence would be signaled later.
+ * @flags: A mask of FENCE_FLAG_* defined below
+ * @timestamp: Timestamp when the fence was signaled.
+ * @status: Optional, only valid if < 0, must be set before calling
+ * fence_signal, indicates that the fence has completed with an error.
+ *
+ * the flags member must be manipulated and read using the appropriate
+ * atomic ops (bit_*), so taking the spinlock will not be needed most
+ * of the time.
+ *
+ * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
+ * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
+ * implementer of the fence for its own purposes. Can be used in different
+ * ways by different fence implementers, so do not rely on this.
+ *
+ * *) Since atomic bitops are used, this is not guaranteed to be the case.
+ * Particularly, if the bit was set, but fence_signal was called right
+ * before this bit was set, it would have been able to set the
+ * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
+ * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
+ * after fence_signal was called, any enable_signaling call will have either
+ * been completed, or never called at all.
+ */
+struct fence {
+	struct kref refcount;
+	const struct fence_ops *ops;
+	struct list_head cb_list;
+	spinlock_t *lock;
+	unsigned context, seqno;
+	unsigned long flags;
+	ktime_t timestamp;
+	int status;
+};
+
+enum fence_flag_bits {
+	FENCE_FLAG_SIGNALED_BIT,
+	FENCE_FLAG_ENABLE_SIGNAL_BIT,
+	FENCE_FLAG_USER_BITS, /* must always be last member */
+};
+
+typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
+
+/**
+ * struct fence_cb - callback for fence_add_callback
+ * @node: used by fence_add_callback to append this struct to fence::cb_list
+ * @func: fence_func_t to call
+ *
+ * This struct will be initialized by fence_add_callback, additional
+ * data can be passed along by embedding fence_cb in another struct.
+ */
+struct fence_cb {
+	struct list_head node;
+	fence_func_t func;
+};
+
+/**
+ * struct fence_ops - operations implemented for fence
+ * @get_driver_name: returns the driver name.
+ * @get_timeline_name: return the name of the context this fence belongs to.
+ * @enable_signaling: enable software signaling of fence.
+ * @signaled: [optional] peek whether the fence is signaled, can be null.
+ * @wait: custom wait implementation, or fence_default_wait.
+ * @release: [optional] called on destruction of fence, can be null
+ * @fill_driver_data: [optional] callback to fill in free-form debug info
+ * Returns amount of bytes filled, or -errno.
+ * @fence_value_str: [optional] fills in the value of the fence as a string
+ * @timeline_value_str: [optional] fills in the current value of the timeline
+ * as a string
+ *
+ * Notes on enable_signaling:
+ * For fence implementations that have the capability for hw->hw
+ * signaling, they can implement this op to enable the necessary
+ * irqs, or insert commands into cmdstream, etc. This is called
+ * in the first wait() or add_callback() path to let the fence
+ * implementation know that there is another driver waiting on
+ * the signal (ie. hw->sw case).
+ *
+ * This function can be called from atomic context, but not
+ * from irq context, so normal spinlocks can be used.
+ *
+ * A return value of false indicates the fence already passed,
+ * or some failure occurred that made it impossible to enable
+ * signaling. True indicates successful enabling.
+ *
+ * fence->status may be set in enable_signaling, but only when false is
+ * returned.
+ *
+ * Calling fence_signal before enable_signaling is called allows
+ * for a tiny race window in which enable_signaling is called during,
+ * before, or after fence_signal. To fight this, it is recommended
+ * that before enable_signaling returns true an extra reference is
+ * taken on the fence, to be released when the fence is signaled.
+ * This will mean fence_signal will still be called twice, but
+ * the second time will be a noop since it was already signaled.
+ *
+ * Notes on signaled:
+ * May set fence->status if returning true.
+ *
+ * Notes on wait:
+ * Must not be NULL, set to fence_default_wait for default implementation.
+ * the fence_default_wait implementation should work for any fence, as long
+ * as enable_signaling works correctly.
+ *
+ * Must return -ERESTARTSYS if the wait is intr = true and the wait was
+ * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
+ * timed out. Can also return other error values on custom implementations,
+ * which should be treated as if the fence is signaled. For example a hardware
+ * lockup could be reported like that.
+ *
+ * Notes on release:
+ * Can be NULL, this function allows additional commands to run on
+ * destruction of the fence. Can be called from irq context.
+ * If pointer is set to NULL, kfree will get called instead.
+ */
+
+struct fence_ops {
+	const char *(*get_driver_name)(struct fence *fence);
+	const char *(*get_timeline_name)(struct fence *fence);
+	bool (*enable_signaling)(struct fence *fence);
+	bool (*signaled)(struct fence *fence);
+	long (*wait)(struct fence *fence, bool intr, signed long timeout);
+	void (*release)(struct fence *fence);
+
+	int (*fill_driver_data)(struct fence *fence, void *data, int size);
+	void (*fence_value_str)(struct fence *fence, char *str, int size);
+	void (*timeline_value_str)(struct fence *fence, char *str, int size);
+};
+
+void __fence_init(struct fence *fence, const struct fence_ops *ops,
+		  spinlock_t *lock, unsigned context, unsigned seqno);
+
+/**
+ * fence_get - increases refcount of the fence
+ * @fence:	[in]	fence to increase refcount of
+ */
+static inline void fence_get(struct fence *fence)
+{
+	if (WARN_ON(!fence))
+		return;
+	kref_get(&fence->refcount);
+}
+
+extern void release_fence(struct kref *kref);
+
+/**
+ * fence_put - decreases refcount of the fence
+ * @fence:	[in]	fence to reduce refcount of
+ */
+static inline void fence_put(struct fence *fence)
+{
+	if (WARN_ON(!fence))
+		return;
+	kref_put(&fence->refcount, release_fence);
+}
+
+int fence_signal(struct fence *fence);
+int __fence_signal(struct fence *fence);
+long fence_default_wait(struct fence *fence, bool intr, signed long timeout);
+int fence_add_callback(struct fence *fence, struct fence_cb *cb,
+		       fence_func_t func);
+bool fence_remove_callback(struct fence *fence, struct fence_cb *cb);
+void fence_enable_sw_signaling(struct fence *fence);
+
+static inline bool
+__fence_is_signaled(struct fence *fence)
+{
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return true;
+
+	if (fence->ops->signaled && fence->ops->signaled(fence)) {
+		__fence_signal(fence);
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * fence_is_signaled - Return an indication if the fence is signaled yet.
+ * @fence:	[in]	the fence to check
+ *
+ * Returns true if the fence was already signaled, false if not. Since this
+ * function doesn't enable signaling, it is not guaranteed to ever return true
+ * If fence_add_callback, fence_wait or fence_enable_sw_signaling
+ * haven't been called before.
+ *
+ * It's recommended for seqno fences to call fence_signal when the
+ * operation is complete, it makes it possible to prevent issues from
+ * wraparound between time of issue and time of use by checking the return
+ * value of this function before calling hardware-specific wait instructions.
+ */
+static inline bool
+fence_is_signaled(struct fence *fence)
+{
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return true;
+
+	if (fence->ops->signaled && fence->ops->signaled(fence)) {
+		fence_signal(fence);
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * fence_later - return the chronologically later fence
+ * @f1:	[in]	the first fence from the same context
+ * @f2:	[in]	the second fence from the same context
+ *
+ * Returns NULL if both fences are signaled, otherwise the fence that would be
+ * signaled last. Both fences must be from the same context, since a seqno is
+ * not re-used across contexts.
+ */
+static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
+{
+	BUG_ON(f1->context != f2->context);
+
+	/*
+	 * can't check just FENCE_FLAG_SIGNALED_BIT here, it may never have
+	 * been set if enable_signaling wasn't called, and enabling that
+	 * here is overkill.
+	 */
+	if (f2->seqno - f1->seqno <= INT_MAX)
+		return fence_is_signaled(f2) ? NULL : f2;
+	else
+		return fence_is_signaled(f1) ? NULL : f1;
+}
+
+long fence_wait_timeout(struct fence *fence, bool intr, signed long timeout);
+
+
+/**
+ * fence_wait - sleep until the fence gets signaled
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ *
+ * This function will return -ERESTARTSYS if interrupted by a signal,
+ * or 0 if the fence was signaled. Other error values may be
+ * returned on custom implementations.
+ *
+ * Performs a synchronous wait on this fence. It is assumed the caller
+ * directly or indirectly (buf-mgr between reservation and committing)
+ * holds a reference to the fence, otherwise the fence might be
+ * freed before return, resulting in undefined behavior.
+ */
+static inline long fence_wait(struct fence *fence, bool intr)
+{
+	long ret;
+
+	/* Since fence_wait_timeout cannot timeout with
+	 * MAX_SCHEDULE_TIMEOUT, only valid return values are
+	 * -ERESTARTSYS and MAX_SCHEDULE_TIMEOUT.
+	 */
+	ret = fence_wait_timeout(fence, intr, MAX_SCHEDULE_TIMEOUT);
+
+	return ret < 0 ? ret : 0;
+}
+
+unsigned fence_context_alloc(unsigned num);
+
+#define FENCE_TRACE(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		if (config_enabled(CONFIG_FENCE_TRACE))			\
+			pr_info("f %u#%u: " fmt,			\
+				__ff->context, __ff->seqno, ##args);	\
+	} while (0)
+
+#define FENCE_WARN(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		pr_warn("f %u#%u: " fmt, __ff->context, __ff->seqno,	\
+			##args);					\
+	} while (0)
+
+#define FENCE_ERR(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		pr_err("f %u#%u: " fmt, __ff->context, __ff->seqno,	\
+		       ##args);						\
+	} while (0)
+
+#endif /* __LINUX_FENCE_H */
diff --git a/include/trace/events/fence.h b/include/trace/events/fence.h
new file mode 100644
index 000000000000..ce3d83230cff
--- /dev/null
+++ b/include/trace/events/fence.h
@@ -0,0 +1,125 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM fence
+
+#if !defined(_TRACE_FENCE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_FENCE_H
+
+#include <linux/tracepoint.h>
+
+struct fence;
+
+TRACE_EVENT(fence_annotate_wait_on,
+
+	/* fence: the fence waiting on f1, f1: the fence to be waited on. */
+	TP_PROTO(struct fence *fence, struct fence *f1),
+
+	TP_ARGS(fence, f1),
+
+	TP_STRUCT__entry(
+		__string(driver, fence->ops->get_driver_name(fence))
+		__string(timeline, fence->ops->get_timeline_name(fence))
+		__field(unsigned int, context)
+		__field(unsigned int, seqno)
+
+		__string(waiting_driver, f1->ops->get_driver_name(f1))
+		__string(waiting_timeline, f1->ops->get_timeline_name(f1))
+		__field(unsigned int, waiting_context)
+		__field(unsigned int, waiting_seqno)
+	),
+
+	TP_fast_assign(
+		__assign_str(driver, fence->ops->get_driver_name(fence))
+		__assign_str(timeline, fence->ops->get_timeline_name(fence))
+		__entry->context = fence->context;
+		__entry->seqno = fence->seqno;
+
+		__assign_str(waiting_driver, f1->ops->get_driver_name(f1))
+		__assign_str(waiting_timeline, f1->ops->get_timeline_name(f1))
+		__entry->waiting_context = f1->context;
+		__entry->waiting_seqno = f1->seqno;
+	),
+
+	TP_printk("driver=%s timeline=%s context=%u seqno=%u "
+		  "waits on driver=%s timeline=%s context=%u seqno=%u",
+		  __get_str(driver), __get_str(timeline), __entry->context,
+		  __entry->seqno, __get_str(waiting_driver),
+		  __get_str(waiting_timeline), __entry->waiting_context,
+		  __entry->waiting_seqno)
+);
+
+DECLARE_EVENT_CLASS(fence,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence),
+
+	TP_STRUCT__entry(
+		__string(driver, fence->ops->get_driver_name(fence))
+		__string(timeline, fence->ops->get_timeline_name(fence))
+		__field(unsigned int, context)
+		__field(unsigned int, seqno)
+	),
+
+	TP_fast_assign(
+		__assign_str(driver, fence->ops->get_driver_name(fence))
+		__assign_str(timeline, fence->ops->get_timeline_name(fence))
+		__entry->context = fence->context;
+		__entry->seqno = fence->seqno;
+	),
+
+	TP_printk("driver=%s timeline=%s context=%u seqno=%u",
+		  __get_str(driver), __get_str(timeline), __entry->context,
+		  __entry->seqno)
+);
+
+DEFINE_EVENT(fence, fence_emit,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_init,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_destroy,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_enable_signal,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_signaled,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_wait_start,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_wait_end,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+#endif /* _TRACE_FENCE_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
On Mon, Feb 17, 2014 at 10:55 AM, Maarten Lankhorst <maarten.lankhorst@canonical.com> wrote:
Reviewed-by: Rob Clark <robdclark@gmail.com>
Documentation/DocBook/device-drivers.tmpl | 2 drivers/base/Kconfig | 9 + drivers/base/Makefile | 2 drivers/base/fence.c | 415 +++++++++++++++++++++++++++++ include/linux/fence.h | 329 +++++++++++++++++++++++ include/trace/events/fence.h | 125 +++++++++ 6 files changed, 881 insertions(+), 1 deletion(-) create mode 100644 drivers/base/fence.c create mode 100644 include/linux/fence.h create mode 100644 include/trace/events/fence.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl index f5170082bdb3..7a0c9ddb4818 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -129,6 +129,8 @@ X!Edrivers/base/interface.c </sect1> <sect1><title>Device Drivers DMA Management</title> !Edrivers/base/dma-buf.c +!Edrivers/base/fence.c +!Iinclude/linux/fence.h !Edrivers/base/reservation.c !Iinclude/linux/reservation.h !Edrivers/base/dma-coherent.c diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig index ec36e7772e57..b50ad30151ae 100644 --- a/drivers/base/Kconfig +++ b/drivers/base/Kconfig @@ -200,6 +200,15 @@ config DMA_SHARED_BUFFER APIs extension; the file's descriptor can then be passed on to other driver.
+config FENCE_TRACE
bool "Enable verbose FENCE_TRACE messages"
depends on DMA_SHARED_BUFFER
help
Enable the FENCE_TRACE printks. This will add extra
spam to the console log, but will make it easier to diagnose
lockup related problems for dma-buffers shared across multiple
devices.
config DMA_CMA bool "DMA Contiguous Memory Allocator" depends on HAVE_DMA_CONTIGUOUS && CMA diff --git a/drivers/base/Makefile b/drivers/base/Makefile index 04b314e0fa51..eb4864aee073 100644 --- a/drivers/base/Makefile +++ b/drivers/base/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_DMA_CMA) += dma-contiguous.o obj-y += power/ obj-$(CONFIG_HAS_DMA) += dma-mapping.o obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o -obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o +obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o fence.o reservation.o obj-$(CONFIG_ISA) += isa.o obj-$(CONFIG_FW_LOADER) += firmware_class.o obj-$(CONFIG_NUMA) += node.o diff --git a/drivers/base/fence.c b/drivers/base/fence.c new file mode 100644 index 000000000000..12df2bf62034 --- /dev/null +++ b/drivers/base/fence.c @@ -0,0 +1,415 @@ +/*
- Fence mechanism for dma-buf and to allow for asynchronous dma access
- Copyright (C) 2012 Canonical Ltd
- Copyright (C) 2012 Texas Instruments
- Authors:
- Rob Clark robdclark@gmail.com
- Maarten Lankhorst maarten.lankhorst@canonical.com
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#include <linux/slab.h> +#include <linux/export.h> +#include <linux/atomic.h> +#include <linux/fence.h>
+#define CREATE_TRACE_POINTS +#include <trace/events/fence.h>
+EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on); +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
+/**
- fence context counter: each execution context should have its own
- fence context, this allows checking if fences belong to the same
- context or not. One device can have multiple separate contexts,
- and they're used if some engine can run independently of another.
- */
+static atomic_t fence_context_counter = ATOMIC_INIT(0);
+/**
- fence_context_alloc - allocate an array of fence contexts
- @num: [in] amount of contexts to allocate
- This function will return the first index of the number of fences allocated.
- The fence context is used for setting fence->context to a unique number.
- */
+unsigned fence_context_alloc(unsigned num) +{
BUG_ON(!num);
return atomic_add_return(num, &fence_context_counter) - num;
+} +EXPORT_SYMBOL(fence_context_alloc);
+int __fence_signal(struct fence *fence) +{
struct fence_cb *cur, *tmp;
int ret = 0;
if (WARN_ON(!fence))
return -EINVAL;
if (!ktime_to_ns(fence->timestamp)) {
fence->timestamp = ktime_get();
smp_mb__before_clear_bit();
}
if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
ret = -EINVAL;
/*
* we might have raced with the unlocked fence_signal,
* still run through all callbacks
*/
} else
trace_fence_signaled(fence);
list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
list_del_init(&cur->node);
cur->func(fence, cur);
}
return ret;
+} +EXPORT_SYMBOL(__fence_signal);
+/**
- fence_signal - signal completion of a fence
- @fence: the fence to signal
- Signal completion for software callbacks on a fence, this will unblock
- fence_wait() calls and run all the callbacks added with
- fence_add_callback(). Can be called multiple times, but since a fence
- can only go from unsignaled to signaled state, it will only be effective
- the first time.
- */
+int fence_signal(struct fence *fence) +{
unsigned long flags;
if (!fence)
return -EINVAL;
if (!ktime_to_ns(fence->timestamp)) {
fence->timestamp = ktime_get();
smp_mb__before_clear_bit();
}
if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
return -EINVAL;
trace_fence_signaled(fence);
if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
struct fence_cb *cur, *tmp;
spin_lock_irqsave(fence->lock, flags);
list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
list_del_init(&cur->node);
cur->func(fence, cur);
}
spin_unlock_irqrestore(fence->lock, flags);
}
return 0;
+} +EXPORT_SYMBOL(fence_signal);
+/**
- fence_wait_timeout - sleep until the fence gets signaled
- or until timeout elapses
- @fence: [in] the fence to wait on
- @intr: [in] if true, do an interruptible wait
- @timeout: [in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
- Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
- remaining timeout in jiffies on success. Other error values may be
- returned on custom implementations.
- Performs a synchronous wait on this fence. It is assumed the caller
- directly or indirectly (buf-mgr between reservation and committing)
- holds a reference to the fence, otherwise the fence might be
- freed before return, resulting in undefined behavior.
- */
+long +fence_wait_timeout(struct fence *fence, bool intr, signed long timeout) +{
unsigned long ret;
if (WARN_ON(timeout < 0))
return -EINVAL;
trace_fence_wait_start(fence);
ret = fence->ops->wait(fence, intr, timeout);
trace_fence_wait_end(fence);
return ret;
+} +EXPORT_SYMBOL(fence_wait_timeout);
+void release_fence(struct kref *kref) +{
struct fence *fence =
container_of(kref, struct fence, refcount);
trace_fence_destroy(fence);
BUG_ON(!list_empty(&fence->cb_list));
if (fence->ops->release)
fence->ops->release(fence);
else
kfree(fence);
+} +EXPORT_SYMBOL(release_fence);
+/**
- fence_enable_sw_signaling - enable signaling on fence
- @fence: [in] the fence to enable
- this will request for sw signaling to be enabled, to make the fence
- complete as soon as possible
- */
+void fence_enable_sw_signaling(struct fence *fence) +{
unsigned long flags;
if (!test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags) &&
!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
trace_fence_enable_signal(fence);
spin_lock_irqsave(fence->lock, flags);
if (!fence->ops->enable_signaling(fence))
__fence_signal(fence);
spin_unlock_irqrestore(fence->lock, flags);
}
+} +EXPORT_SYMBOL(fence_enable_sw_signaling);
+/**
- fence_add_callback - add a callback to be called when the fence
- is signaled
- @fence: [in] the fence to wait on
- @cb: [in] the callback to register
- @func: [in] the function to call
- cb will be initialized by fence_add_callback, no initialization
- by the caller is required. Any number of callbacks can be registered
- to a fence, but a callback can only be registered to one fence at a time.
- Note that the callback can be called from an atomic context. If
- fence is already signaled, this function will return -ENOENT (and
- *not* call the callback)
- Add a software callback to the fence. Same restrictions apply to
- refcount as it does to fence_wait, however the caller doesn't need to
- keep a refcount to fence afterwards: when software access is enabled,
- the creator of the fence is required to keep the fence alive until
- after it signals with fence_signal. The callback itself can be called
- from irq context.
- */
+int fence_add_callback(struct fence *fence, struct fence_cb *cb,
fence_func_t func)
+{
unsigned long flags;
int ret = 0;
bool was_set;
if (WARN_ON(!fence || !func))
return -EINVAL;
if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
INIT_LIST_HEAD(&cb->node);
return -ENOENT;
}
spin_lock_irqsave(fence->lock, flags);
was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
ret = -ENOENT;
else if (!was_set) {
trace_fence_enable_signal(fence);
if (!fence->ops->enable_signaling(fence)) {
__fence_signal(fence);
ret = -ENOENT;
}
}
if (!ret) {
cb->func = func;
list_add_tail(&cb->node, &fence->cb_list);
} else
INIT_LIST_HEAD(&cb->node);
spin_unlock_irqrestore(fence->lock, flags);
return ret;
+} +EXPORT_SYMBOL(fence_add_callback);
+/**
- fence_remove_callback - remove a callback from the signaling list
- @fence: [in] the fence to wait on
- @cb: [in] the callback to remove
- Remove a previously queued callback from the fence. This function returns
- true if the callback is succesfully removed, or false if the fence has
- already been signaled.
- *WARNING*:
- Cancelling a callback should only be done if you really know what you're
- doing, since deadlocks and race conditions could occur all too easily. For
- this reason, it should only ever be done on hardware lockup recovery,
- with a reference held to the fence.
- */
+bool +fence_remove_callback(struct fence *fence, struct fence_cb *cb) +{
unsigned long flags;
bool ret;
spin_lock_irqsave(fence->lock, flags);
ret = !list_empty(&cb->node);
if (ret)
list_del_init(&cb->node);
spin_unlock_irqrestore(fence->lock, flags);
return ret;
+} +EXPORT_SYMBOL(fence_remove_callback);
+struct default_wait_cb {
struct fence_cb base;
struct task_struct *task;
+};
+static void +fence_default_wait_cb(struct fence *fence, struct fence_cb *cb) +{
struct default_wait_cb *wait =
container_of(cb, struct default_wait_cb, base);
wake_up_state(wait->task, TASK_NORMAL);
+}
+/**
- fence_default_wait - default sleep until the fence gets signaled
- or until timeout elapses
- @fence: [in] the fence to wait on
- @intr: [in] if true, do an interruptible wait
- @timeout: [in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
- Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
- remaining timeout in jiffies on success.
- */
+long +fence_default_wait(struct fence *fence, bool intr, signed long timeout) +{
struct default_wait_cb cb;
unsigned long flags;
long ret = timeout;
bool was_set;
if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
return timeout;
spin_lock_irqsave(fence->lock, flags);
if (intr && signal_pending(current)) {
ret = -ERESTARTSYS;
goto out;
}
was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
goto out;
if (!was_set) {
trace_fence_enable_signal(fence);
if (!fence->ops->enable_signaling(fence)) {
__fence_signal(fence);
goto out;
}
}
cb.base.func = fence_default_wait_cb;
cb.task = current;
list_add(&cb.base.node, &fence->cb_list);
while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
if (intr)
__set_current_state(TASK_INTERRUPTIBLE);
else
__set_current_state(TASK_UNINTERRUPTIBLE);
spin_unlock_irqrestore(fence->lock, flags);
ret = schedule_timeout(ret);
spin_lock_irqsave(fence->lock, flags);
if (ret > 0 && intr && signal_pending(current))
ret = -ERESTARTSYS;
}
if (!list_empty(&cb.base.node))
list_del(&cb.base.node);
__set_current_state(TASK_RUNNING);
+out:
spin_unlock_irqrestore(fence->lock, flags);
return ret;
+} +EXPORT_SYMBOL(fence_default_wait);
+/**
+ * __fence_init - Initialize a custom fence.
+ * @fence: [in] the fence to initialize
+ * @ops: [in] the fence_ops for operations on this fence
+ * @lock: [in] the irqsafe spinlock to use for locking this fence
+ * @context: [in] the execution context this fence is run on
+ * @seqno: [in] a linear increasing sequence number for this context
+ *
+ * Initializes an allocated fence, the caller doesn't have to keep its
+ * refcount after committing with this fence, but it will need to hold a
+ * refcount again if fence_ops.enable_signaling gets called. This can
+ * be used when implementing other types of fences.
+ *
+ * context and seqno are used for easy comparison between fences, allowing
+ * one to check which fence is later by simply using fence_later().
+ */
+void
+__fence_init(struct fence *fence, const struct fence_ops *ops,
+	     spinlock_t *lock, unsigned context, unsigned seqno)
+{
BUG_ON(!lock);
BUG_ON(!ops || !ops->wait || !ops->enable_signaling || !ops->get_driver_name || !ops->get_timeline_name);
kref_init(&fence->refcount);
fence->ops = ops;
INIT_LIST_HEAD(&fence->cb_list);
fence->lock = lock;
fence->context = context;
fence->seqno = seqno;
fence->flags = 0UL;
trace_fence_init(fence);
+}
+EXPORT_SYMBOL(__fence_init);

diff --git a/include/linux/fence.h b/include/linux/fence.h
new file mode 100644
index 000000000000..a868eceb7cf5
--- /dev/null
+++ b/include/linux/fence.h
@@ -0,0 +1,329 @@
+/*
+ * Fence mechanism for dma-buf to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __LINUX_FENCE_H
+#define __LINUX_FENCE_H
+#include <linux/err.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <linux/bitops.h>
+#include <linux/kref.h>
+#include <linux/sched.h>
+#include <linux/printk.h>
+struct fence;
+struct fence_ops;
+struct fence_cb;
+/**
+ * struct fence - software synchronization primitive
+ * @refcount: refcount for this fence
+ * @ops: fence_ops associated with this fence
+ * @cb_list: list of all callbacks to call
+ * @lock: spin_lock_irqsave used for locking
+ * @context: execution context this fence belongs to, returned by
+ *           fence_context_alloc()
+ * @seqno: the sequence number of this fence inside the execution context,
+ * can be compared to decide which fence would be signaled later.
+ * @flags: A mask of FENCE_FLAG_* defined below
+ * @timestamp: Timestamp when the fence was signaled.
+ * @status: Optional, only valid if < 0, must be set before calling
+ * fence_signal, indicates that the fence has completed with an error.
+ *
+ * the flags member must be manipulated and read using the appropriate
+ * atomic ops (bit_*), so taking the spinlock will not be needed most
+ * of the time.
+ *
+ * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
+ * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
+ * implementer of the fence for its own purposes. Can be used in different
+ * ways by different fence implementers, so do not rely on this.
+ *
+ * *) Since atomic bitops are used, this is not guaranteed to be the case.
+ * Particularly, if the bit was set, but fence_signal was called right
+ * before this bit was set, it would have been able to set the
+ * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
+ * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
+ * after fence_signal was called, any enable_signaling call will have either
+ * been completed, or never called at all.
+ */
+struct fence {
struct kref refcount;
const struct fence_ops *ops;
struct list_head cb_list;
spinlock_t *lock;
unsigned context, seqno;
unsigned long flags;
ktime_t timestamp;
int status;
+};
+enum fence_flag_bits {
FENCE_FLAG_SIGNALED_BIT,
FENCE_FLAG_ENABLE_SIGNAL_BIT,
FENCE_FLAG_USER_BITS, /* must always be last member */
+};
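For illustration, a driver could claim one of the user bits like this; the boost helper is an assumption, not a real API:

	/* hypothetical driver flag stored in the fence's flags word */
	#define FOO_FENCE_FLAG_BOOSTED_BIT	FENCE_FLAG_USER_BITS

	static void foo_fence_maybe_boost(struct fence *f)
	{
		/* atomic test-and-set, consistent with how 'flags' must be
		 * accessed */
		if (!test_and_set_bit(FOO_FENCE_FLAG_BOOSTED_BIT, &f->flags))
			foo_boost_clocks();	/* assumed driver helper */
	}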
+typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
+/**
+ * struct fence_cb - callback for fence_add_callback
+ * @node: used by fence_add_callback to append this struct to fence::cb_list
+ * @func: fence_func_t to call
+ *
+ * This struct will be initialized by fence_add_callback, additional
+ * data can be passed along by embedding fence_cb in another struct.
+ */
+struct fence_cb {
struct list_head node;
fence_func_t func;
+};
+/**
+ * struct fence_ops - operations implemented for fence
+ * @get_driver_name: returns the driver name.
+ * @get_timeline_name: return the name of the context this fence belongs to.
+ * @enable_signaling: enable software signaling of fence.
+ * @signaled: [optional] peek whether the fence is signaled, can be null.
+ * @wait: custom wait implementation, or fence_default_wait.
+ * @release: [optional] called on destruction of fence, can be null
+ * @fill_driver_data: [optional] callback to fill in free-form debug info
+ * Returns amount of bytes filled, or -errno.
+ * @fence_value_str: [optional] fills in the value of the fence as a string
+ * @timeline_value_str: [optional] fills in the current value of the timeline
+ * as a string
+ *
+ * Notes on enable_signaling:
+ * For fence implementations that have the capability for hw->hw
+ * signaling, they can implement this op to enable the necessary
+ * irqs, or insert commands into cmdstream, etc. This is called
+ * in the first wait() or add_callback() path to let the fence
+ * implementation know that there is another driver waiting on
+ * the signal (ie. hw->sw case).
+ *
+ * This function can be called from atomic context, but not
+ * from irq context, so normal spinlocks can be used.
+ *
+ * A return value of false indicates the fence already passed,
+ * or some failure occurred that made it impossible to enable
+ * signaling. True indicates successful enabling.
+ *
+ * fence->status may be set in enable_signaling, but only when false is
+ * returned.
+ *
+ * Calling fence_signal before enable_signaling is called allows
+ * for a tiny race window in which enable_signaling is called during,
+ * before, or after fence_signal. To fight this, it is recommended
+ * that before enable_signaling returns true an extra reference is
+ * taken on the fence, to be released when the fence is signaled.
+ * This will mean fence_signal will still be called twice, but
+ * the second time will be a noop since it was already signaled.
+ *
+ * Notes on signaled:
+ * May set fence->status if returning true.
+ *
+ * Notes on wait:
+ * Must not be NULL, set to fence_default_wait for default implementation.
+ * the fence_default_wait implementation should work for any fence, as long
+ * as enable_signaling works correctly.
+ *
+ * Must return -ERESTARTSYS if the wait is intr = true and the wait was
+ * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
+ * timed out. Can also return other error values on custom implementations,
+ * which should be treated as if the fence is signaled. For example a hardware
+ * lockup could be reported like that.
+ *
+ * Notes on release:
+ * Can be NULL, this function allows additional commands to run on
+ * destruction of the fence. Can be called from irq context.
+ * If pointer is set to NULL, kfree will get called instead.
+ */
+struct fence_ops {
const char *(*get_driver_name)(struct fence *fence);
const char *(*get_timeline_name)(struct fence *fence);
bool (*enable_signaling)(struct fence *fence);
bool (*signaled)(struct fence *fence);
long (*wait)(struct fence *fence, bool intr, signed long timeout);
void (*release)(struct fence *fence);
int (*fill_driver_data)(struct fence *fence, void *data, int size);
void (*fence_value_str)(struct fence *fence, char *str, int size);
void (*timeline_value_str)(struct fence *fence, char *str, int size);
+};
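As a rough sketch of the contract above, a single-timeline driver might wire things up as follows; all foo_* names and helpers are invented, and it leans on fence_default_wait plus the recommended extra reference in enable_signaling:

	struct foo_fence {
		struct fence base;
		struct foo_ring *ring;		/* assumed driver object */
	};

	static const char *foo_get_driver_name(struct fence *f)
	{
		return "foo";
	}

	static const char *foo_get_timeline_name(struct fence *f)
	{
		return "foo-ring0";
	}

	static bool foo_enable_signaling(struct fence *f)
	{
		struct foo_fence *ff = container_of(f, struct foo_fence, base);

		if (foo_seqno_passed(ff->ring, f->seqno))  /* assumed helper */
			return false;			   /* already passed */

		fence_get(f);		/* dropped by the irq handler after
					 * it calls fence_signal() */
		foo_ring_arm_irq(ff->ring);		   /* assumed helper */
		return true;
	}

	static const struct fence_ops foo_fence_ops = {
		.get_driver_name = foo_get_driver_name,
		.get_timeline_name = foo_get_timeline_name,
		.enable_signaling = foo_enable_signaling,
		.wait = fence_default_wait,
	};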
+void __fence_init(struct fence *fence, const struct fence_ops *ops,
spinlock_t *lock, unsigned context, unsigned seqno);
+/**
+ * fence_get - increases refcount of the fence
+ * @fence: [in] fence to increase refcount of
+ */
+static inline void fence_get(struct fence *fence)
+{
if (WARN_ON(!fence))
return;
kref_get(&fence->refcount);
+}
+extern void release_fence(struct kref *kref);
+/**
+ * fence_put - decreases refcount of the fence
+ * @fence: [in] fence to reduce refcount of
+ */
+static inline void fence_put(struct fence *fence)
+{
if (WARN_ON(!fence))
return;
kref_put(&fence->refcount, release_fence);
+}
+int fence_signal(struct fence *fence);
+int __fence_signal(struct fence *fence);
+long fence_default_wait(struct fence *fence, bool intr, signed long timeout);
+int fence_add_callback(struct fence *fence, struct fence_cb *cb,
+		       fence_func_t func);
+bool fence_remove_callback(struct fence *fence, struct fence_cb *cb);
+void fence_enable_sw_signaling(struct fence *fence);
+static inline bool
+__fence_is_signaled(struct fence *fence)
+{
if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
return true;
if (fence->ops->signaled && fence->ops->signaled(fence)) {
__fence_signal(fence);
return true;
}
return false;
+}
+/**
+ * fence_is_signaled - Return an indication if the fence is signaled yet.
+ * @fence: [in] the fence to check
+ *
+ * Returns true if the fence was already signaled, false if not. Since this
+ * function doesn't enable signaling, it is not guaranteed to ever return true
+ * if fence_add_callback, fence_wait or fence_enable_sw_signaling
+ * haven't been called before.
+ *
+ * It's recommended for seqno fences to call fence_signal when the
+ * operation is complete, as this makes it possible to prevent issues from
+ * wraparound between time of issue and time of use by checking the return
+ * value of this function before calling hardware-specific wait instructions.
+ */
+static inline bool
+fence_is_signaled(struct fence *fence)
+{
if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
return true;
if (fence->ops->signaled && fence->ops->signaled(fence)) {
fence_signal(fence);
return true;
}
return false;
+}
+/**
+ * fence_later - return the chronologically later fence
+ * @f1: [in] the first fence from the same context
+ * @f2: [in] the second fence from the same context
+ *
+ * Returns NULL if both fences are signaled, otherwise the fence that would be
+ * signaled last. Both fences must be from the same context, since a seqno is
+ * not re-used across contexts.
+ */
+static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
+{
BUG_ON(f1->context != f2->context);
/*
* Can't check just FENCE_FLAG_SIGNALED_BIT here, it may never have been
* set if enable_signaling wasn't called, and enabling that here is
* overkill.
*/
if (f2->seqno - f1->seqno <= INT_MAX)
return fence_is_signaled(f2) ? NULL : f2;
else
return fence_is_signaled(f1) ? NULL : f1;
+}
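A hypothetical bookkeeping helper that only remembers the newest fence of a context shows the intended use:

	/* keep *slot pointing at the chronologically last fence of one
	 * context; both fences must belong to the same context */
	static void foo_track_latest(struct fence **slot, struct fence *f)
	{
		if (*slot) {
			if (fence_later(*slot, f) != f)
				return;	/* *slot is newer, or both signaled */
			fence_put(*slot);
		}
		fence_get(f);
		*slot = f;
	}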
+long fence_wait_timeout(struct fence *fence, bool intr, signed long timeout);
+/**
+ * fence_wait - sleep until the fence gets signaled
+ * @fence: [in] the fence to wait on
+ * @intr: [in] if true, do an interruptible wait
+ *
+ * This function will return -ERESTARTSYS if interrupted by a signal,
+ * or 0 if the fence was signaled. Other error values may be
+ * returned on custom implementations.
+ *
+ * Performs a synchronous wait on this fence. It is assumed the caller
+ * directly or indirectly (buf-mgr between reservation and committing)
+ * holds a reference to the fence, otherwise the fence might be
+ * freed before return, resulting in undefined behavior.
+ */
+static inline long fence_wait(struct fence *fence, bool intr)
+{
long ret;
/* Since fence_wait_timeout cannot timeout with
* MAX_SCHEDULE_TIMEOUT, only valid return values are
* -ERESTARTSYS and MAX_SCHEDULE_TIMEOUT.
*/
ret = fence_wait_timeout(fence, intr, MAX_SCHEDULE_TIMEOUT);
return ret < 0 ? ret : 0;
+}
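A bounded variant for callers that prefer to give up rather than block forever might look like this (the 100 ms budget is arbitrary):

	static int foo_wait_fence(struct fence *f)
	{
		long ret;

		fence_get(f);	/* hold a ref across the wait, as required */
		ret = fence_wait_timeout(f, true, msecs_to_jiffies(100));
		fence_put(f);

		if (ret == 0)
			return -ETIMEDOUT;
		return ret < 0 ? ret : 0;	/* -ERESTARTSYS, or signaled */
	}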
+unsigned fence_context_alloc(unsigned num);
+#define FENCE_TRACE(f, fmt, args...) \
do { \
struct fence *__ff = (f); \
if (config_enabled(CONFIG_FENCE_TRACE)) \
pr_info("f %u#%u: " fmt, \
__ff->context, __ff->seqno, ##args); \
} while (0)
+#define FENCE_WARN(f, fmt, args...) \
do { \
struct fence *__ff = (f); \
pr_warn("f %u#%u: " fmt, __ff->context, __ff->seqno, ##args); \
} while (0)
+#define FENCE_ERR(f, fmt, args...) \
do { \
struct fence *__ff = (f); \
pr_err("f %u#%u: " fmt, __ff->context, __ff->seqno, ##args); \
} while (0)
+#endif /* __LINUX_FENCE_H */

diff --git a/include/trace/events/fence.h b/include/trace/events/fence.h
new file mode 100644
index 000000000000..ce3d83230cff
--- /dev/null
+++ b/include/trace/events/fence.h
@@ -0,0 +1,125 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM fence
+#if !defined(_TRACE_FENCE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_FENCE_H
+#include <linux/tracepoint.h>
+struct fence;
+TRACE_EVENT(fence_annotate_wait_on,
/* fence: the fence waiting on f1, f1: the fence to be waited on. */
TP_PROTO(struct fence *fence, struct fence *f1),
TP_ARGS(fence, f1),
TP_STRUCT__entry(
__string(driver, fence->ops->get_driver_name(fence))
__string(timeline, fence->ops->get_timeline_name(fence))
__field(unsigned int, context)
__field(unsigned int, seqno)
__string(waiting_driver, f1->ops->get_driver_name(f1))
__string(waiting_timeline, f1->ops->get_timeline_name(f1))
__field(unsigned int, waiting_context)
__field(unsigned int, waiting_seqno)
),
TP_fast_assign(
__assign_str(driver, fence->ops->get_driver_name(fence))
__assign_str(timeline, fence->ops->get_timeline_name(fence))
__entry->context = fence->context;
__entry->seqno = fence->seqno;
__assign_str(waiting_driver, f1->ops->get_driver_name(f1))
__assign_str(waiting_timeline, f1->ops->get_timeline_name(f1))
__entry->waiting_context = f1->context;
__entry->waiting_seqno = f1->seqno;
),
TP_printk("driver=%s timeline=%s context=%u seqno=%u "
"waits on driver=%s timeline=%s context=%u seqno=%u",
__get_str(driver), __get_str(timeline), __entry->context, __entry->seqno,
__get_str(waiting_driver), __get_str(waiting_timeline), __entry->waiting_context, __entry->waiting_seqno)
+);
+DECLARE_EVENT_CLASS(fence,
TP_PROTO(struct fence *fence),
TP_ARGS(fence),
TP_STRUCT__entry(
__string(driver, fence->ops->get_driver_name(fence))
__string(timeline, fence->ops->get_timeline_name(fence))
__field(unsigned int, context)
__field(unsigned int, seqno)
),
TP_fast_assign(
__assign_str(driver, fence->ops->get_driver_name(fence))
__assign_str(timeline, fence->ops->get_timeline_name(fence))
__entry->context = fence->context;
__entry->seqno = fence->seqno;
),
TP_printk("driver=%s timeline=%s context=%u seqno=%u",
__get_str(driver), __get_str(timeline), __entry->context, __entry->seqno)
+);
+DEFINE_EVENT(fence, fence_emit,
TP_PROTO(struct fence *fence),
TP_ARGS(fence)
+);
+DEFINE_EVENT(fence, fence_init,
TP_PROTO(struct fence *fence),
TP_ARGS(fence)
+);
+DEFINE_EVENT(fence, fence_destroy,
TP_PROTO(struct fence *fence),
TP_ARGS(fence)
+);
+DEFINE_EVENT(fence, fence_enable_signal,
TP_PROTO(struct fence *fence),
TP_ARGS(fence)
+);
+DEFINE_EVENT(fence, fence_signaled,
TP_PROTO(struct fence *fence),
TP_ARGS(fence)
+);
+DEFINE_EVENT(fence, fence_wait_start,
TP_PROTO(struct fence *fence),
TP_ARGS(fence)
+);
+DEFINE_EVENT(fence, fence_wait_end,
TP_PROTO(struct fence *fence),
TP_ARGS(fence)
+);
+#endif /* _TRACE_FENCE_H */
+/* This part must be outside protection */
+#include <trace/define_trace.h>
This type of fence can be used with hardware synchronization for simple hardware that can block execution until the condition (dma_buf[offset] - value) >= 0 has been met.
A software fallback still has to be provided in case the fence is used with a device that doesn't support this mechanism. It is useful to expose this for graphics cards that have an op to support this.
Some cards like i915 can export those, but don't have an option to wait, so they need the software fallback.
I extended the original patch by Rob Clark.
v1: Original
v2: Renamed from bikeshed to seqno, moved into dma-fence.c since not much was left of the file. Lots of documentation added.
v3: Use fence_ops instead of custom callbacks. Moved to own file to avoid circular dependency between dma-buf.h and fence.h
v4: Add spinlock pointer to seqno_fence_init
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
---
 Documentation/DocBook/device-drivers.tmpl |   1
 drivers/base/fence.c                      |  50 +++++++++++++
 include/linux/seqno-fence.h               | 109 +++++++++++++++++++++++++++++
 3 files changed, 160 insertions(+)
 create mode 100644 include/linux/seqno-fence.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index 7a0c9ddb4818..8c85c20942c2 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -131,6 +131,7 @@ X!Edrivers/base/interface.c
 !Edrivers/base/dma-buf.c
 !Edrivers/base/fence.c
 !Iinclude/linux/fence.h
+!Iinclude/linux/seqno-fence.h
 !Edrivers/base/reservation.c
 !Iinclude/linux/reservation.h
 !Edrivers/base/dma-coherent.c
diff --git a/drivers/base/fence.c b/drivers/base/fence.c
index 12df2bf62034..cd0937127a89 100644
--- a/drivers/base/fence.c
+++ b/drivers/base/fence.c
@@ -25,6 +25,7 @@
 #include <linux/export.h>
 #include <linux/atomic.h>
 #include <linux/fence.h>
+#include <linux/seqno-fence.h>

 #define CREATE_TRACE_POINTS
 #include <trace/events/fence.h>
@@ -413,3 +414,52 @@ __fence_init(struct fence *fence, const struct fence_ops *ops,
 	trace_fence_init(fence);
 }
 EXPORT_SYMBOL(__fence_init);
+
+static const char *seqno_fence_get_driver_name(struct fence *fence)
+{
+	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
+	return seqno_fence->ops->get_driver_name(fence);
+}
+
+static const char *seqno_fence_get_timeline_name(struct fence *fence)
+{
+	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
+	return seqno_fence->ops->get_timeline_name(fence);
+}
+
+static bool seqno_enable_signaling(struct fence *fence)
+{
+	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
+	return seqno_fence->ops->enable_signaling(fence);
+}
+
+static bool seqno_signaled(struct fence *fence)
+{
+	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
+	return seqno_fence->ops->signaled && seqno_fence->ops->signaled(fence);
+}
+
+static void seqno_release(struct fence *fence)
+{
+	struct seqno_fence *f = to_seqno_fence(fence);
+
+	dma_buf_put(f->sync_buf);
+	if (f->ops->release)
+		f->ops->release(fence);
+	else
+		kfree(f);
+}
+
+static long seqno_wait(struct fence *fence, bool intr, signed long timeout)
+{
+	struct seqno_fence *f = to_seqno_fence(fence);
+	return f->ops->wait(fence, intr, timeout);
+}
+
+const struct fence_ops seqno_fence_ops = {
+	.get_driver_name = seqno_fence_get_driver_name,
+	.get_timeline_name = seqno_fence_get_timeline_name,
+	.enable_signaling = seqno_enable_signaling,
+	.signaled = seqno_signaled,
+	.wait = seqno_wait,
+	.release = seqno_release,
+};
+EXPORT_SYMBOL(seqno_fence_ops);
diff --git a/include/linux/seqno-fence.h b/include/linux/seqno-fence.h
new file mode 100644
index 000000000000..952f7909128c
--- /dev/null
+++ b/include/linux/seqno-fence.h
@@ -0,0 +1,109 @@
+/*
+ * seqno-fence, using a dma-buf to synchronize fencing
+ *
+ * Copyright (C) 2012 Texas Instruments
+ * Copyright (C) 2012 Canonical Ltd
+ * Authors:
+ *   Rob Clark <robdclark@gmail.com>
+ *   Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __LINUX_SEQNO_FENCE_H
+#define __LINUX_SEQNO_FENCE_H
+
+#include <linux/fence.h>
+#include <linux/dma-buf.h>
+
+struct seqno_fence {
+	struct fence base;
+
+	const struct fence_ops *ops;
+	struct dma_buf *sync_buf;
+	uint32_t seqno_ofs;
+};
+
+extern const struct fence_ops seqno_fence_ops;
+
+/**
+ * to_seqno_fence - cast a fence to a seqno_fence
+ * @fence: fence to cast to a seqno_fence
+ *
+ * Returns NULL if the fence is not a seqno_fence,
+ * or the seqno_fence otherwise.
+ */
+static inline struct seqno_fence *
+to_seqno_fence(struct fence *fence)
+{
+	if (fence->ops != &seqno_fence_ops)
+		return NULL;
+	return container_of(fence, struct seqno_fence, base);
+}
+
+/**
+ * seqno_fence_init - initialize a seqno fence
+ * @fence: seqno_fence to initialize
+ * @lock: pointer to spinlock to use for fence
+ * @sync_buf: buffer containing the memory location to signal on
+ * @context: the execution context this fence is a part of
+ * @seqno_ofs: the offset within @sync_buf
+ * @seqno: the sequence # to signal on
+ * @ops: the fence_ops for operations on this seqno fence
+ *
+ * This function initializes a struct seqno_fence with passed parameters,
+ * and takes a reference on sync_buf which is released on fence destruction.
+ *
+ * A seqno_fence is a dma_fence which can complete in software when
+ * enable_signaling is called, but it also completes when
+ * (s32)((sync_buf)[seqno_ofs] - seqno) >= 0 is true
+ *
+ * The seqno_fence will take a refcount on the sync_buf until it's
+ * destroyed, but actual lifetime of sync_buf may be longer if one of the
+ * callers take a reference to it.
+ *
+ * Certain hardware have instructions to insert this type of wait condition
+ * in the command stream, so no intervention from software would be needed.
+ * This type of fence can be destroyed before completed, however a reference
+ * on the sync_buf dma-buf can be taken. It is encouraged to re-use the same
+ * dma-buf for sync_buf, since mapping or unmapping the sync_buf to the
+ * device's vm can be expensive.
+ *
+ * It is recommended for creators of seqno_fence to call fence_signal
+ * before destruction. This will prevent possible issues from wraparound at
+ * time of issue vs time of check, since users can check fence_is_signaled
+ * before submitting instructions for the hardware to wait on the fence.
+ * However, when ops.enable_signaling is not called, it doesn't have to be
+ * done as soon as possible, just before there's any real danger of seqno
+ * wraparound.
+ */
+static inline void
+seqno_fence_init(struct seqno_fence *fence, spinlock_t *lock,
+		 struct dma_buf *sync_buf, uint32_t context,
+		 uint32_t seqno_ofs, uint32_t seqno,
+		 const struct fence_ops *ops)
+{
+	BUG_ON(!fence || !sync_buf || !ops);
+	BUG_ON(!ops->wait || !ops->enable_signaling ||
+	       !ops->get_driver_name || !ops->get_timeline_name);
+
+	/*
+	 * ops is used in __fence_init for get_driver_name, so needs to be
+	 * initialized first
+	 */
+	fence->ops = ops;
+	__fence_init(&fence->base, &seqno_fence_ops, lock, context, seqno);
+	get_dma_buf(sync_buf);
+	fence->sync_buf = sync_buf;
+	fence->seqno_ofs = seqno_ofs;
+}
+
+#endif /* __LINUX_SEQNO_FENCE_H */
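A sketch of how an exporter might create one of these; the foo_dev fields and foo_fence_ops (the software fallback) are assumptions, not part of the patch:

	static struct fence *foo_create_seqno_fence(struct foo_dev *dev,
						    struct dma_buf *sync_buf,
						    u32 seqno_ofs, u32 seqno)
	{
		struct seqno_fence *sf = kmalloc(sizeof(*sf), GFP_KERNEL);

		if (!sf)
			return NULL;

		/* takes its own reference on sync_buf */
		seqno_fence_init(sf, &dev->fence_lock, sync_buf,
				 dev->fence_context, seqno_ofs, seqno,
				 &foo_fence_ops);
		return &sf->base;
	}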
On Mon, Feb 17, 2014 at 10:56 AM, Maarten Lankhorst maarten.lankhorst@canonical.com wrote:
This type of fence can be used with hardware synchronization for simple hardware that can block execution until the condition (dma_buf[offset] - value) >= 0 has been met.
Reviewed-by: Rob Clark robdclark@gmail.com
On 17.02.2014 16:56, Maarten Lankhorst wrote:
This type of fence can be used with hardware synchronization for simple hardware that can block execution until the condition (dma_buf[offset] - value) >= 0 has been met.
Can't we make that just "dma_buf[offset] != 0" instead? As far as I know this way it would match the definition M$ uses in their WDDM specification and so make it much more likely that hardware supports it.
Apart from that I still don't like the idea of leaking a driver's IRQ context outside of the driver, but without a proper GPU scheduler there probably isn't much alternative.
Christian.
On Mon, Feb 17, 2014 at 11:56 AM, Christian König deathsimple@vodafone.de wrote:
On 17.02.2014 16:56, Maarten Lankhorst wrote:
This type of fence can be used with hardware synchronization for simple hardware that can block execution until the condition (dma_buf[offset] - value) >= 0 has been met.
Can't we make that just "dma_buf[offset] != 0" instead? As far as I know this way it would match the definition M$ uses in their WDDM specification and so make it much more likely that hardware supports it.
well 'buf[offset] >= value' at least means the same slot can be used for multiple operations (with increasing values of 'value').. not sure if that is something people care about.
>=value seems to be possible with adreno and radeon. I'm not really sure about others (although I presume it is at least supported for nv desktop stuff). For hw that cannot do >=value, we can either have a different fence implementation which uses the !=0 approach, or change the seqno-fence implementation later if needed. But if someone has hw that can do !=0 but not >=value, speak up now ;-)
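The difference is easy to show in code: a wraparound-safe ">= value" check lets one memory slot carry many fences, while "!= 0" burns a slot per fence. An illustrative sketch, not part of the patch:

	static bool foo_seqno_passed(u32 current_val, u32 seqno)
	{
		/* the signed difference survives u32 wraparound as long as
		 * the two values are less than INT_MAX apart */
		return (s32)(current_val - seqno) >= 0;
	}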
Apart from that I still don't like the idea of leaking a driver's IRQ context outside of the driver, but without a proper GPU scheduler there probably isn't much alternative.
I guess it will not be an uncommon scenario for a gpu device to just need to kick the display device to write a few registers for a page flip.. probably best not to schedule a worker just for this (unless the signalled device otherwise needs to). I think it is better in this case to give the signalee some rope to hang themselves, and make it the responsibility of the callback to kick things off to a worker if needed.
BR, -R
This allows reservation objects to be used in dma-buf. It's required for implementing polling support on the fences that belong to a dma-buf.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
Acked-by: Mauro Carvalho Chehab m.chehab@samsung.com #drivers/media/v4l2-core/
---
 drivers/base/dma-buf.c                         | 22 ++++++++++++++++++++--
 drivers/gpu/drm/drm_prime.c                    |  8 +++++++-
 drivers/gpu/drm/exynos/exynos_drm_dmabuf.c     |  2 +-
 drivers/gpu/drm/i915/i915_gem_dmabuf.c         |  2 +-
 drivers/gpu/drm/nouveau/nouveau_drm.c          |  1 +
 drivers/gpu/drm/nouveau/nouveau_gem.h          |  1 +
 drivers/gpu/drm/nouveau/nouveau_prime.c        |  7 +++++++
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c      |  2 +-
 drivers/gpu/drm/radeon/radeon_drv.c            |  2 ++
 drivers/gpu/drm/radeon/radeon_prime.c          |  8 ++++++++
 drivers/gpu/drm/ttm/ttm_object.c               |  2 +-
 drivers/media/v4l2-core/videobuf2-dma-contig.c |  2 +-
 include/drm/drmP.h                             |  2 ++
 include/linux/dma-buf.h                        |  9 ++++++---
 14 files changed, 59 insertions(+), 11 deletions(-)
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c index 1e16cbd61da2..85e792c2c909 100644 --- a/drivers/base/dma-buf.c +++ b/drivers/base/dma-buf.c @@ -25,10 +25,12 @@ #include <linux/fs.h> #include <linux/slab.h> #include <linux/dma-buf.h> +#include <linux/fence.h> #include <linux/anon_inodes.h> #include <linux/export.h> #include <linux/debugfs.h> #include <linux/seq_file.h> +#include <linux/reservation.h>
static inline int is_dma_buf_file(struct file *);
@@ -56,6 +58,9 @@ static int dma_buf_release(struct inode *inode, struct file *file) list_del(&dmabuf->list_node); mutex_unlock(&db_list.lock);
+ if (dmabuf->resv == (struct reservation_object*)&dmabuf[1]) + reservation_object_fini(dmabuf->resv); + kfree(dmabuf); return 0; } @@ -128,6 +133,7 @@ static inline int is_dma_buf_file(struct file *file) * @size: [in] Size of the buffer * @flags: [in] mode flags for the file. * @exp_name: [in] name of the exporting module - useful for debugging. + * @resv: [in] reservation-object, NULL to allocate default one. * * Returns, on success, a newly created dma_buf object, which wraps the * supplied private data and operations for dma_buf_ops. On either missing @@ -135,10 +141,17 @@ static inline int is_dma_buf_file(struct file *file) * */ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, - size_t size, int flags, const char *exp_name) + size_t size, int flags, const char *exp_name, + struct reservation_object *resv) { struct dma_buf *dmabuf; struct file *file; + size_t alloc_size = sizeof(struct dma_buf); + if (!resv) + alloc_size += sizeof(struct reservation_object); + else + /* prevent &dma_buf[1] == dma_buf->resv */ + alloc_size += 1;
if (WARN_ON(!priv || !ops || !ops->map_dma_buf @@ -150,7 +163,7 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, return ERR_PTR(-EINVAL); }
- dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL); + dmabuf = kzalloc(alloc_size, GFP_KERNEL); if (dmabuf == NULL) return ERR_PTR(-ENOMEM);
@@ -158,6 +171,11 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, dmabuf->ops = ops; dmabuf->size = size; dmabuf->exp_name = exp_name; + if (!resv) { + resv = (struct reservation_object*)&dmabuf[1]; + reservation_object_init(resv); + } + dmabuf->resv = resv;
file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags); if (IS_ERR(file)) { diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 56805c39c906..a13e90245adf 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -318,7 +318,13 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = { struct dma_buf *drm_gem_prime_export(struct drm_device *dev, struct drm_gem_object *obj, int flags) { - return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, flags); + struct reservation_object *robj = NULL; + + if (dev->driver->gem_prime_res_obj) + robj = dev->driver->gem_prime_res_obj(obj); + + return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, + flags, robj); } EXPORT_SYMBOL(drm_gem_prime_export);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c b/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c index 59827cc5e770..b5e89f46326e 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c +++ b/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c @@ -187,7 +187,7 @@ struct dma_buf *exynos_dmabuf_prime_export(struct drm_device *drm_dev, struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
return dma_buf_export(obj, &exynos_dmabuf_ops, - exynos_gem_obj->base.size, flags); + exynos_gem_obj->base.size, flags, NULL); }
struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev, diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c index 9bb533e0d762..ea66f40e95b3 100644 --- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c @@ -233,7 +233,7 @@ static const struct dma_buf_ops i915_dmabuf_ops = { struct dma_buf *i915_gem_prime_export(struct drm_device *dev, struct drm_gem_object *gem_obj, int flags) { - return dma_buf_export(gem_obj, &i915_dmabuf_ops, gem_obj->size, flags); + return dma_buf_export(gem_obj, &i915_dmabuf_ops, gem_obj->size, flags, NULL); }
static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c index 78c8e7146d56..2a15c8e8d199 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drm.c +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c @@ -816,6 +816,7 @@ driver = { .gem_prime_export = drm_gem_prime_export, .gem_prime_import = drm_gem_prime_import, .gem_prime_pin = nouveau_gem_prime_pin, + .gem_prime_res_obj = nouveau_gem_prime_res_obj, .gem_prime_unpin = nouveau_gem_prime_unpin, .gem_prime_get_sg_table = nouveau_gem_prime_get_sg_table, .gem_prime_import_sg_table = nouveau_gem_prime_import_sg_table, diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h index 7caca057bc38..ddab762d81fe 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.h +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h @@ -35,6 +35,7 @@ extern int nouveau_gem_ioctl_info(struct drm_device *, void *, struct drm_file *);
extern int nouveau_gem_prime_pin(struct drm_gem_object *); +struct reservation_object *nouveau_gem_prime_res_obj(struct drm_gem_object *); extern void nouveau_gem_prime_unpin(struct drm_gem_object *); extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *); extern struct drm_gem_object *nouveau_gem_prime_import_sg_table( diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c index 51a2cb102b44..1f51008e4d26 100644 --- a/drivers/gpu/drm/nouveau/nouveau_prime.c +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c @@ -102,3 +102,10 @@ void nouveau_gem_prime_unpin(struct drm_gem_object *obj)
nouveau_bo_unpin(nvbo); } + +struct reservation_object *nouveau_gem_prime_res_obj(struct drm_gem_object *obj) +{ + struct nouveau_bo *nvbo = nouveau_gem_object(obj); + + return nvbo->bo.resv; +} diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c index 4fcca8d42796..a2dbfb1737b4 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c @@ -171,7 +171,7 @@ static struct dma_buf_ops omap_dmabuf_ops = { struct dma_buf *omap_gem_prime_export(struct drm_device *dev, struct drm_gem_object *obj, int flags) { - return dma_buf_export(obj, &omap_dmabuf_ops, obj->size, flags); + return dma_buf_export(obj, &omap_dmabuf_ops, obj->size, flags, NULL); }
struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev, diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c index 84a1bbb75f91..c15c1a1996fc 100644 --- a/drivers/gpu/drm/radeon/radeon_drv.c +++ b/drivers/gpu/drm/radeon/radeon_drv.c @@ -128,6 +128,7 @@ struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev, struct sg_table *sg); int radeon_gem_prime_pin(struct drm_gem_object *obj); void radeon_gem_prime_unpin(struct drm_gem_object *obj); +struct reservation_object *radeon_gem_prime_res_obj(struct drm_gem_object *); void *radeon_gem_prime_vmap(struct drm_gem_object *obj); void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); extern long radeon_kms_compat_ioctl(struct file *filp, unsigned int cmd, @@ -561,6 +562,7 @@ static struct drm_driver kms_driver = { .gem_prime_import = drm_gem_prime_import, .gem_prime_pin = radeon_gem_prime_pin, .gem_prime_unpin = radeon_gem_prime_unpin, + .gem_prime_res_obj = radeon_gem_prime_res_obj, .gem_prime_get_sg_table = radeon_gem_prime_get_sg_table, .gem_prime_import_sg_table = radeon_gem_prime_import_sg_table, .gem_prime_vmap = radeon_gem_prime_vmap, diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c index 20074560fc25..28d71070c389 100644 --- a/drivers/gpu/drm/radeon/radeon_prime.c +++ b/drivers/gpu/drm/radeon/radeon_prime.c @@ -103,3 +103,11 @@ void radeon_gem_prime_unpin(struct drm_gem_object *obj) radeon_bo_unpin(bo); radeon_bo_unreserve(bo); } + + +struct reservation_object *radeon_gem_prime_res_obj(struct drm_gem_object *obj) +{ + struct radeon_bo *bo = gem_to_radeon_bo(obj); + + return bo->tbo.resv; +} diff --git a/drivers/gpu/drm/ttm/ttm_object.c b/drivers/gpu/drm/ttm/ttm_object.c index 53b51c4e671a..d06f29bd2124 100644 --- a/drivers/gpu/drm/ttm/ttm_object.c +++ b/drivers/gpu/drm/ttm/ttm_object.c @@ -649,7 +649,7 @@ int ttm_prime_handle_to_fd(struct ttm_object_file *tfile, }
dma_buf = dma_buf_export(prime, &tdev->ops, - prime->size, flags); + prime->size, flags, NULL); if (IS_ERR(dma_buf)) { ret = PTR_ERR(dma_buf); ttm_mem_global_free(tdev->mem_glob, diff --git a/drivers/media/v4l2-core/videobuf2-dma-contig.c b/drivers/media/v4l2-core/videobuf2-dma-contig.c index 33d3871d1e13..93bd2230ab0b 100644 --- a/drivers/media/v4l2-core/videobuf2-dma-contig.c +++ b/drivers/media/v4l2-core/videobuf2-dma-contig.c @@ -404,7 +404,7 @@ static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv, unsigned long flags) if (WARN_ON(!buf->sgt_base)) return NULL;
- dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags); + dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags, NULL); if (IS_ERR(dbuf)) return NULL;
diff --git a/include/drm/drmP.h b/include/drm/drmP.h index 04a7f31301f8..902d3abdd9f9 100644 --- a/include/drm/drmP.h +++ b/include/drm/drmP.h @@ -82,6 +82,7 @@ struct drm_device;
struct device_node; struct videomode; +struct reservation_object;
#include <drm/drm_os_linux.h> #include <drm/drm_hashtab.h> @@ -959,6 +960,7 @@ struct drm_driver { /* low-level interface used by drm_gem_prime_{import,export} */ int (*gem_prime_pin)(struct drm_gem_object *obj); void (*gem_prime_unpin)(struct drm_gem_object *obj); + struct reservation_object *(*gem_prime_res_obj)(struct drm_gem_object *); struct sg_table *(*gem_prime_get_sg_table)(struct drm_gem_object *obj); struct drm_gem_object *(*gem_prime_import_sg_table)( struct drm_device *dev, size_t size, diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index dfac5ed31120..34cfbac52c03 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -115,6 +115,7 @@ struct dma_buf_ops { * @exp_name: name of the exporter; useful for debugging. * @list_node: node for dma_buf accounting and debugging. * @priv: exporter specific private data for this buffer object. + * @resv: reservation object linked to this dma-buf */ struct dma_buf { size_t size; @@ -128,6 +129,7 @@ struct dma_buf { const char *exp_name; struct list_head list_node; void *priv; + struct reservation_object *resv; };
/** @@ -168,10 +170,11 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *dmabuf_attach);
struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, - size_t size, int flags, const char *); + size_t size, int flags, const char *, + struct reservation_object *);
-#define dma_buf_export(priv, ops, size, flags) \ - dma_buf_export_named(priv, ops, size, flags, __FILE__) +#define dma_buf_export(priv, ops, size, flags, resv) \ + dma_buf_export_named(priv, ops, size, flags, __FILE__, resv)
int dma_buf_fd(struct dma_buf *dmabuf, int flags); struct dma_buf *dma_buf_get(int fd);
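To make the new contract concrete, here is a minimal exporter-side sketch against the five-argument signature above. The foo_* names are hypothetical, not part of the series: an exporter that already owns a reservation object passes it in so every importer sees the same one, while passing NULL gets a default object allocated behind the dma_buf.

#include <linux/dma-buf.h>
#include <linux/reservation.h>

struct foo_buffer {
	struct reservation_object resv;	/* set up with reservation_object_init() */
	size_t size;
};

static const struct dma_buf_ops foo_dmabuf_ops;	/* map_dma_buf etc. elided */

static struct dma_buf *foo_export_shared(struct foo_buffer *buf, int flags)
{
	/* share the driver's own reservation object with all importers */
	return dma_buf_export(buf, &foo_dmabuf_ops, buf->size, flags,
			      &buf->resv);
}

static struct dma_buf *foo_export_default(struct foo_buffer *buf, int flags)
{
	/* or pass NULL and let dma_buf_export_named() embed one */
	return dma_buf_export(buf, &foo_dmabuf_ops, buf->size, flags, NULL);
}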
On Mon, Feb 17, 2014 at 10:56 AM, Maarten Lankhorst maarten.lankhorst@canonical.com wrote:
This allows reservation objects to be used in dma-buf. It's required for implementing polling support on the fences that belong to a dma-buf.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com Acked-by: Mauro Carvalho Chehab m.chehab@samsung.com #drivers/media/v4l2-core/
drivers/base/dma-buf.c | 22 ++++++++++++++++++++-- drivers/gpu/drm/drm_prime.c | 8 +++++++- drivers/gpu/drm/exynos/exynos_drm_dmabuf.c | 2 +- drivers/gpu/drm/i915/i915_gem_dmabuf.c | 2 +- drivers/gpu/drm/nouveau/nouveau_drm.c | 1 + drivers/gpu/drm/nouveau/nouveau_gem.h | 1 + drivers/gpu/drm/nouveau/nouveau_prime.c | 7 +++++++ drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 2 +- drivers/gpu/drm/radeon/radeon_drv.c | 2 ++ drivers/gpu/drm/radeon/radeon_prime.c | 8 ++++++++ drivers/gpu/drm/ttm/ttm_object.c | 2 +- drivers/media/v4l2-core/videobuf2-dma-contig.c | 2 +- include/drm/drmP.h | 2 ++ include/linux/dma-buf.h | 9 ++++++--- 14 files changed, 59 insertions(+), 11 deletions(-)
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c index 1e16cbd61da2..85e792c2c909 100644 --- a/drivers/base/dma-buf.c +++ b/drivers/base/dma-buf.c @@ -25,10 +25,12 @@ #include <linux/fs.h> #include <linux/slab.h> #include <linux/dma-buf.h> +#include <linux/fence.h> #include <linux/anon_inodes.h> #include <linux/export.h> #include <linux/debugfs.h> #include <linux/seq_file.h> +#include <linux/reservation.h>
static inline int is_dma_buf_file(struct file *);
@@ -56,6 +58,9 @@ static int dma_buf_release(struct inode *inode, struct file *file) list_del(&dmabuf->list_node); mutex_unlock(&db_list.lock);
if (dmabuf->resv == (struct reservation_object*)&dmabuf[1])
reservation_object_fini(dmabuf->resv);
kfree(dmabuf); return 0;
} @@ -128,6 +133,7 @@ static inline int is_dma_buf_file(struct file *file)
* @size: [in] Size of the buffer
* @flags: [in] mode flags for the file.
* @exp_name: [in] name of the exporting module - useful for debugging.
+ * @resv: [in] reservation-object, NULL to allocate default one.
*
* Returns, on success, a newly created dma_buf object, which wraps the
* supplied private data and operations for dma_buf_ops. On either missing
@@ -135,10 +141,17 @@ static inline int is_dma_buf_file(struct file *file)
*/ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops,
size_t size, int flags, const char *exp_name)
size_t size, int flags, const char *exp_name,
struct reservation_object *resv)
{ struct dma_buf *dmabuf; struct file *file;
size_t alloc_size = sizeof(struct dma_buf);
if (!resv)
alloc_size += sizeof(struct reservation_object);
else
/* prevent &dma_buf[1] == dma_buf->resv */
alloc_size += 1; if (WARN_ON(!priv || !ops || !ops->map_dma_buf
@@ -150,7 +163,7 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, return ERR_PTR(-EINVAL); }
dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL);
dmabuf = kzalloc(alloc_size, GFP_KERNEL); if (dmabuf == NULL) return ERR_PTR(-ENOMEM);
@@ -158,6 +171,11 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, dmabuf->ops = ops; dmabuf->size = size; dmabuf->exp_name = exp_name;
if (!resv) {
resv = (struct reservation_object*)&dmabuf[1];
reservation_object_init(resv);
}
dmabuf->resv = resv; file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags); if (IS_ERR(file)) {
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 56805c39c906..a13e90245adf 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -318,7 +318,13 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = { struct dma_buf *drm_gem_prime_export(struct drm_device *dev, struct drm_gem_object *obj, int flags) {
return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, flags);
struct reservation_object *robj = NULL;
if (dev->driver->gem_prime_res_obj)
robj = dev->driver->gem_prime_res_obj(obj);
well, you could hook up msm_gem_prime_res_obj too (since I already have a resv obj in 'struct msm_gem_object' ;-)
That said, I wonder if maybe we just want to promote the 'struct reservation_object' ptr into 'struct drm_gem_object' so we can have a common get_prime_res_obj fxn for everyone using GEM?
Anyways, that only matters within drivers/gpu/drm so easy enough to change it later.. so for the drm/fence/reservation/dmabuf bits:
Reviewed-by: Rob Clark robdclark@gmail.com
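Purely as an illustration of that idea (no such field exists yet in this series), promoting the pointer into struct drm_gem_object would reduce every driver's hook to one shared helper:

/* hypothetical: if struct drm_gem_object itself carried a resv pointer */
static struct reservation_object *
drm_gem_prime_res_obj_common(struct drm_gem_object *obj)
{
	return obj->resv;
}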
return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size,
flags, robj);
} EXPORT_SYMBOL(drm_gem_prime_export);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c b/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c index 59827cc5e770..b5e89f46326e 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c +++ b/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c @@ -187,7 +187,7 @@ struct dma_buf *exynos_dmabuf_prime_export(struct drm_device *drm_dev, struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
return dma_buf_export(obj, &exynos_dmabuf_ops,
exynos_gem_obj->base.size, flags);
exynos_gem_obj->base.size, flags, NULL);
}
struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev, diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c index 9bb533e0d762..ea66f40e95b3 100644 --- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c @@ -233,7 +233,7 @@ static const struct dma_buf_ops i915_dmabuf_ops = { struct dma_buf *i915_gem_prime_export(struct drm_device *dev, struct drm_gem_object *gem_obj, int flags) {
return dma_buf_export(gem_obj, &i915_dmabuf_ops, gem_obj->size, flags);
return dma_buf_export(gem_obj, &i915_dmabuf_ops, gem_obj->size, flags, NULL);
}
static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c index 78c8e7146d56..2a15c8e8d199 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drm.c +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c @@ -816,6 +816,7 @@ driver = { .gem_prime_export = drm_gem_prime_export, .gem_prime_import = drm_gem_prime_import, .gem_prime_pin = nouveau_gem_prime_pin,
.gem_prime_res_obj = nouveau_gem_prime_res_obj, .gem_prime_unpin = nouveau_gem_prime_unpin, .gem_prime_get_sg_table = nouveau_gem_prime_get_sg_table, .gem_prime_import_sg_table = nouveau_gem_prime_import_sg_table,
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h index 7caca057bc38..ddab762d81fe 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.h +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h @@ -35,6 +35,7 @@ extern int nouveau_gem_ioctl_info(struct drm_device *, void *, struct drm_file *);
extern int nouveau_gem_prime_pin(struct drm_gem_object *); +struct reservation_object *nouveau_gem_prime_res_obj(struct drm_gem_object *); extern void nouveau_gem_prime_unpin(struct drm_gem_object *); extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *); extern struct drm_gem_object *nouveau_gem_prime_import_sg_table( diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c index 51a2cb102b44..1f51008e4d26 100644 --- a/drivers/gpu/drm/nouveau/nouveau_prime.c +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c @@ -102,3 +102,10 @@ void nouveau_gem_prime_unpin(struct drm_gem_object *obj)
nouveau_bo_unpin(nvbo);
}
+struct reservation_object *nouveau_gem_prime_res_obj(struct drm_gem_object *obj) +{
struct nouveau_bo *nvbo = nouveau_gem_object(obj);
return nvbo->bo.resv;
+} diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c index 4fcca8d42796..a2dbfb1737b4 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c @@ -171,7 +171,7 @@ static struct dma_buf_ops omap_dmabuf_ops = { struct dma_buf *omap_gem_prime_export(struct drm_device *dev, struct drm_gem_object *obj, int flags) {
return dma_buf_export(obj, &omap_dmabuf_ops, obj->size, flags);
return dma_buf_export(obj, &omap_dmabuf_ops, obj->size, flags, NULL);
}
struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev, diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c index 84a1bbb75f91..c15c1a1996fc 100644 --- a/drivers/gpu/drm/radeon/radeon_drv.c +++ b/drivers/gpu/drm/radeon/radeon_drv.c @@ -128,6 +128,7 @@ struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev, struct sg_table *sg); int radeon_gem_prime_pin(struct drm_gem_object *obj); void radeon_gem_prime_unpin(struct drm_gem_object *obj); +struct reservation_object *radeon_gem_prime_res_obj(struct drm_gem_object *); void *radeon_gem_prime_vmap(struct drm_gem_object *obj); void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); extern long radeon_kms_compat_ioctl(struct file *filp, unsigned int cmd, @@ -561,6 +562,7 @@ static struct drm_driver kms_driver = { .gem_prime_import = drm_gem_prime_import, .gem_prime_pin = radeon_gem_prime_pin, .gem_prime_unpin = radeon_gem_prime_unpin,
.gem_prime_res_obj = radeon_gem_prime_res_obj, .gem_prime_get_sg_table = radeon_gem_prime_get_sg_table, .gem_prime_import_sg_table = radeon_gem_prime_import_sg_table, .gem_prime_vmap = radeon_gem_prime_vmap,
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c index 20074560fc25..28d71070c389 100644 --- a/drivers/gpu/drm/radeon/radeon_prime.c +++ b/drivers/gpu/drm/radeon/radeon_prime.c @@ -103,3 +103,11 @@ void radeon_gem_prime_unpin(struct drm_gem_object *obj) radeon_bo_unpin(bo); radeon_bo_unreserve(bo); }
+struct reservation_object *radeon_gem_prime_res_obj(struct drm_gem_object *obj) +{
struct radeon_bo *bo = gem_to_radeon_bo(obj);
return bo->tbo.resv;
+} diff --git a/drivers/gpu/drm/ttm/ttm_object.c b/drivers/gpu/drm/ttm/ttm_object.c index 53b51c4e671a..d06f29bd2124 100644 --- a/drivers/gpu/drm/ttm/ttm_object.c +++ b/drivers/gpu/drm/ttm/ttm_object.c @@ -649,7 +649,7 @@ int ttm_prime_handle_to_fd(struct ttm_object_file *tfile, }
dma_buf = dma_buf_export(prime, &tdev->ops,
prime->size, flags);
prime->size, flags, NULL); if (IS_ERR(dma_buf)) { ret = PTR_ERR(dma_buf); ttm_mem_global_free(tdev->mem_glob,
diff --git a/drivers/media/v4l2-core/videobuf2-dma-contig.c b/drivers/media/v4l2-core/videobuf2-dma-contig.c index 33d3871d1e13..93bd2230ab0b 100644 --- a/drivers/media/v4l2-core/videobuf2-dma-contig.c +++ b/drivers/media/v4l2-core/videobuf2-dma-contig.c @@ -404,7 +404,7 @@ static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv, unsigned long flags) if (WARN_ON(!buf->sgt_base)) return NULL;
dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags);
dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags, NULL); if (IS_ERR(dbuf)) return NULL;
diff --git a/include/drm/drmP.h b/include/drm/drmP.h index 04a7f31301f8..902d3abdd9f9 100644 --- a/include/drm/drmP.h +++ b/include/drm/drmP.h @@ -82,6 +82,7 @@ struct drm_device;
struct device_node; struct videomode; +struct reservation_object;
#include <drm/drm_os_linux.h> #include <drm/drm_hashtab.h> @@ -959,6 +960,7 @@ struct drm_driver { /* low-level interface used by drm_gem_prime_{import,export} */ int (*gem_prime_pin)(struct drm_gem_object *obj); void (*gem_prime_unpin)(struct drm_gem_object *obj);
struct reservation_object *(*gem_prime_res_obj)(struct drm_gem_object *); struct sg_table *(*gem_prime_get_sg_table)(struct drm_gem_object *obj); struct drm_gem_object *(*gem_prime_import_sg_table)( struct drm_device *dev, size_t size,
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index dfac5ed31120..34cfbac52c03 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -115,6 +115,7 @@ struct dma_buf_ops {
* @exp_name: name of the exporter; useful for debugging.
* @list_node: node for dma_buf accounting and debugging.
* @priv: exporter specific private data for this buffer object.
+ * @resv: reservation object linked to this dma-buf
*/
struct dma_buf { size_t size; @@ -128,6 +129,7 @@ struct dma_buf { const char *exp_name; struct list_head list_node; void *priv;
struct reservation_object *resv;
};
/** @@ -168,10 +170,11 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *dmabuf_attach);
struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops,
- size_t size, int flags, const char *);
+ size_t size, int flags, const char *,
+ struct reservation_object *);
-#define dma_buf_export(priv, ops, size, flags) \
- dma_buf_export_named(priv, ops, size, flags, __FILE__)
+#define dma_buf_export(priv, ops, size, flags, resv) \
+ dma_buf_export_named(priv, ops, size, flags, __FILE__, resv)
int dma_buf_fd(struct dma_buf *dmabuf, int flags); struct dma_buf *dma_buf_get(int fd);
On 02/17/2014 04:56 PM, Maarten Lankhorst wrote:
This allows reservation objects to be used in dma-buf. It's required for implementing polling support on the fences that belong to a dma-buf.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com Acked-by: Mauro Carvalho Chehab m.chehab@samsung.com #drivers/media/v4l2-core/
For the TTM part:
Acked-by: Thomas Hellstrom thellstrom@vmware.com
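As a rough sketch of what the shared object buys on the importer side (hypothetical bar_* names, assuming the exporter wired up resv as above): every attachment reaches the same reservation_object through dmabuf->resv, so taking its ww_mutex serializes buffer access across devices.

#include <linux/dma-buf.h>
#include <linux/reservation.h>

static int bar_begin_access(struct dma_buf *dmabuf)
{
	int ret;

	/* same lock the exporting driver uses internally */
	ret = ww_mutex_lock(&dmabuf->resv->lock, NULL);
	if (ret)
		return ret;

	/* ... inspect or attach fences here once the reservation
	 * fence support later in the series lands ... */

	ww_mutex_unlock(&dmabuf->resv->lock);
	return 0;
}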
Android syncpoints can be mapped to a timeline. This removes the need to maintain a separate API for synchronization. I've left the Android trace events in place, but the core fence events should already be sufficient for debugging.
v2:
- Call fence_remove_callback in sync_fence_free if not all fences have fired.
v3:
- Merge Colin Cross' bugfixes, and the android fence merge optimization.
v4:
- Merge with the upstream fixes.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com --- drivers/staging/android/Kconfig | 1 drivers/staging/android/Makefile | 2 drivers/staging/android/sw_sync.c | 4 drivers/staging/android/sync.c | 892 +++++++++++----------------------- drivers/staging/android/sync.h | 80 ++- drivers/staging/android/sync_debug.c | 245 +++++++++ drivers/staging/android/trace/sync.h | 12 7 files changed, 592 insertions(+), 644 deletions(-) create mode 100644 drivers/staging/android/sync_debug.c
diff --git a/drivers/staging/android/Kconfig b/drivers/staging/android/Kconfig index b91c758883bf..ecc8194242b5 100644 --- a/drivers/staging/android/Kconfig +++ b/drivers/staging/android/Kconfig @@ -77,6 +77,7 @@ config SYNC bool "Synchronization framework" default n select ANON_INODES + select DMA_SHARED_BUFFER ---help--- This option enables the framework for synchronization between multiple drivers. Sync implementations can take advantage of hardware diff --git a/drivers/staging/android/Makefile b/drivers/staging/android/Makefile index 0a01e1914905..517ad5ffa429 100644 --- a/drivers/staging/android/Makefile +++ b/drivers/staging/android/Makefile @@ -9,5 +9,5 @@ obj-$(CONFIG_ANDROID_TIMED_OUTPUT) += timed_output.o obj-$(CONFIG_ANDROID_TIMED_GPIO) += timed_gpio.o obj-$(CONFIG_ANDROID_LOW_MEMORY_KILLER) += lowmemorykiller.o obj-$(CONFIG_ANDROID_INTF_ALARM_DEV) += alarm-dev.o -obj-$(CONFIG_SYNC) += sync.o +obj-$(CONFIG_SYNC) += sync.o sync_debug.o obj-$(CONFIG_SW_SYNC) += sw_sync.o diff --git a/drivers/staging/android/sw_sync.c b/drivers/staging/android/sw_sync.c index f24493ac65e3..a76db3ff87cb 100644 --- a/drivers/staging/android/sw_sync.c +++ b/drivers/staging/android/sw_sync.c @@ -50,7 +50,7 @@ static struct sync_pt *sw_sync_pt_dup(struct sync_pt *sync_pt) { struct sw_sync_pt *pt = (struct sw_sync_pt *) sync_pt; struct sw_sync_timeline *obj = - (struct sw_sync_timeline *)sync_pt->parent; + (struct sw_sync_timeline *)sync_pt_parent(sync_pt);
return (struct sync_pt *) sw_sync_pt_create(obj, pt->value); } @@ -59,7 +59,7 @@ static int sw_sync_pt_has_signaled(struct sync_pt *sync_pt) { struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt; struct sw_sync_timeline *obj = - (struct sw_sync_timeline *)sync_pt->parent; + (struct sw_sync_timeline *)sync_pt_parent(sync_pt);
return sw_sync_cmp(obj->value, pt->value) >= 0; } diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c index 3d05f662110b..8e77cd73b739 100644 --- a/drivers/staging/android/sync.c +++ b/drivers/staging/android/sync.c @@ -31,22 +31,13 @@ #define CREATE_TRACE_POINTS #include "trace/sync.h"
-static void sync_fence_signal_pt(struct sync_pt *pt); -static int _sync_pt_has_signaled(struct sync_pt *pt); -static void sync_fence_free(struct kref *kref); -static void sync_dump(void); - -static LIST_HEAD(sync_timeline_list_head); -static DEFINE_SPINLOCK(sync_timeline_list_lock); - -static LIST_HEAD(sync_fence_list_head); -static DEFINE_SPINLOCK(sync_fence_list_lock); +static const struct fence_ops android_fence_ops; +static const struct file_operations sync_fence_fops;
struct sync_timeline *sync_timeline_create(const struct sync_timeline_ops *ops, int size, const char *name) { struct sync_timeline *obj; - unsigned long flags;
if (size < sizeof(struct sync_timeline)) return NULL; @@ -57,17 +48,14 @@ struct sync_timeline *sync_timeline_create(const struct sync_timeline_ops *ops,
kref_init(&obj->kref); obj->ops = ops; + obj->context = fence_context_alloc(1); strlcpy(obj->name, name, sizeof(obj->name));
INIT_LIST_HEAD(&obj->child_list_head); - spin_lock_init(&obj->child_list_lock); - INIT_LIST_HEAD(&obj->active_list_head); - spin_lock_init(&obj->active_list_lock); + spin_lock_init(&obj->child_list_lock);
- spin_lock_irqsave(&sync_timeline_list_lock, flags); - list_add_tail(&obj->sync_timeline_list, &sync_timeline_list_head); - spin_unlock_irqrestore(&sync_timeline_list_lock, flags); + sync_timeline_debug_add(obj);
return obj; } @@ -77,11 +65,8 @@ static void sync_timeline_free(struct kref *kref) { struct sync_timeline *obj = container_of(kref, struct sync_timeline, kref); - unsigned long flags;
- spin_lock_irqsave(&sync_timeline_list_lock, flags); - list_del(&obj->sync_timeline_list); - spin_unlock_irqrestore(&sync_timeline_list_lock, flags); + sync_timeline_debug_remove(obj);
if (obj->ops->release_obj) obj->ops->release_obj(obj); @@ -89,6 +74,16 @@ static void sync_timeline_free(struct kref *kref) kfree(obj); }
+static void sync_timeline_get(struct sync_timeline *obj) +{ + kref_get(&obj->kref); +} + +static void sync_timeline_put(struct sync_timeline *obj) +{ + kref_put(&obj->kref, sync_timeline_free); +} + void sync_timeline_destroy(struct sync_timeline *obj) { obj->destroyed = true; @@ -98,75 +93,30 @@ void sync_timeline_destroy(struct sync_timeline *obj) * signal any children that their parent is going away. */ sync_timeline_signal(obj); - - kref_put(&obj->kref, sync_timeline_free); + sync_timeline_put(obj); } EXPORT_SYMBOL(sync_timeline_destroy);
-static void sync_timeline_add_pt(struct sync_timeline *obj, struct sync_pt *pt) -{ - unsigned long flags; - - pt->parent = obj; - - spin_lock_irqsave(&obj->child_list_lock, flags); - list_add_tail(&pt->child_list, &obj->child_list_head); - spin_unlock_irqrestore(&obj->child_list_lock, flags); -} - -static void sync_timeline_remove_pt(struct sync_pt *pt) -{ - struct sync_timeline *obj = pt->parent; - unsigned long flags; - - spin_lock_irqsave(&obj->active_list_lock, flags); - if (!list_empty(&pt->active_list)) - list_del_init(&pt->active_list); - spin_unlock_irqrestore(&obj->active_list_lock, flags); - - spin_lock_irqsave(&obj->child_list_lock, flags); - if (!list_empty(&pt->child_list)) - list_del_init(&pt->child_list); - - spin_unlock_irqrestore(&obj->child_list_lock, flags); -} - void sync_timeline_signal(struct sync_timeline *obj) { unsigned long flags; LIST_HEAD(signaled_pts); - struct list_head *pos, *n; + struct sync_pt *pt, *next;
trace_sync_timeline(obj);
- spin_lock_irqsave(&obj->active_list_lock, flags); - - list_for_each_safe(pos, n, &obj->active_list_head) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, active_list); - - if (_sync_pt_has_signaled(pt)) { - list_del_init(pos); - list_add(&pt->signaled_list, &signaled_pts); - kref_get(&pt->fence->kref); - } - } - - spin_unlock_irqrestore(&obj->active_list_lock, flags); - - list_for_each_safe(pos, n, &signaled_pts) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, signaled_list); - - list_del_init(pos); - sync_fence_signal_pt(pt); - kref_put(&pt->fence->kref, sync_fence_free); + spin_lock_irqsave(&obj->child_list_lock, flags); + list_for_each_entry_safe(pt, next, &obj->active_list_head, active_list) { + if (__fence_is_signaled(&pt->base)) + list_del(&pt->active_list); } + spin_unlock_irqrestore(&obj->child_list_lock, flags); } EXPORT_SYMBOL(sync_timeline_signal);
-struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size) +struct sync_pt *sync_pt_create(struct sync_timeline *obj, int size) { + unsigned long flags; struct sync_pt *pt;
if (size < sizeof(struct sync_pt)) @@ -176,87 +126,27 @@ struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size) if (pt == NULL) return NULL;
+ spin_lock_irqsave(&obj->child_list_lock, flags); + sync_timeline_get(obj); + __fence_init(&pt->base, &android_fence_ops, &obj->child_list_lock, obj->context, ++obj->value); + list_add_tail(&pt->child_list, &obj->child_list_head); INIT_LIST_HEAD(&pt->active_list); - kref_get(&parent->kref); - sync_timeline_add_pt(parent, pt); - + spin_unlock_irqrestore(&obj->child_list_lock, flags); return pt; } EXPORT_SYMBOL(sync_pt_create);
void sync_pt_free(struct sync_pt *pt) { - if (pt->parent->ops->free_pt) - pt->parent->ops->free_pt(pt); - - sync_timeline_remove_pt(pt); - - kref_put(&pt->parent->kref, sync_timeline_free); - - kfree(pt); + fence_put(&pt->base); } EXPORT_SYMBOL(sync_pt_free);
-/* call with pt->parent->active_list_lock held */ -static int _sync_pt_has_signaled(struct sync_pt *pt) -{ - int old_status = pt->status; - - if (!pt->status) - pt->status = pt->parent->ops->has_signaled(pt); - - if (!pt->status && pt->parent->destroyed) - pt->status = -ENOENT; - - if (pt->status != old_status) - pt->timestamp = ktime_get(); - - return pt->status; -} - -static struct sync_pt *sync_pt_dup(struct sync_pt *pt) -{ - return pt->parent->ops->dup(pt); -} - -/* Adds a sync pt to the active queue. Called when added to a fence */ -static void sync_pt_activate(struct sync_pt *pt) -{ - struct sync_timeline *obj = pt->parent; - unsigned long flags; - int err; - - spin_lock_irqsave(&obj->active_list_lock, flags); - - err = _sync_pt_has_signaled(pt); - if (err != 0) - goto out; - - list_add_tail(&pt->active_list, &obj->active_list_head); - -out: - spin_unlock_irqrestore(&obj->active_list_lock, flags); -} - -static int sync_fence_release(struct inode *inode, struct file *file); -static unsigned int sync_fence_poll(struct file *file, poll_table *wait); -static long sync_fence_ioctl(struct file *file, unsigned int cmd, - unsigned long arg); - - -static const struct file_operations sync_fence_fops = { - .release = sync_fence_release, - .poll = sync_fence_poll, - .unlocked_ioctl = sync_fence_ioctl, - .compat_ioctl = sync_fence_ioctl, -}; - -static struct sync_fence *sync_fence_alloc(const char *name) +static struct sync_fence *sync_fence_alloc(int size, const char *name) { struct sync_fence *fence; - unsigned long flags;
- fence = kzalloc(sizeof(struct sync_fence), GFP_KERNEL); + fence = kzalloc(size, GFP_KERNEL); if (fence == NULL) return NULL;
@@ -268,16 +158,8 @@ static struct sync_fence *sync_fence_alloc(const char *name) kref_init(&fence->kref); strlcpy(fence->name, name, sizeof(fence->name));
- INIT_LIST_HEAD(&fence->pt_list_head); - INIT_LIST_HEAD(&fence->waiter_list_head); - spin_lock_init(&fence->waiter_list_lock); - init_waitqueue_head(&fence->wq);
- spin_lock_irqsave(&sync_fence_list_lock, flags); - list_add_tail(&fence->sync_fence_list, &sync_fence_list_head); - spin_unlock_irqrestore(&sync_fence_list_lock, flags); - return fence;
err: @@ -285,119 +167,40 @@ err: return NULL; }
+static void fence_check_cb_func(struct fence *f, struct fence_cb *cb) +{ + struct sync_fence_cb *check = container_of(cb, struct sync_fence_cb, cb); + struct sync_fence *fence = check->fence; + + if (atomic_dec_and_test(&fence->status)) + wake_up_all(&fence->wq); +} + /* TODO: implement a create which takes more than one sync_pt */ struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt) { struct sync_fence *fence;
- if (pt->fence) - return NULL; - - fence = sync_fence_alloc(name); + fence = sync_fence_alloc(offsetof(struct sync_fence, cbs[1]), name); if (fence == NULL) return NULL;
- pt->fence = fence; - list_add(&pt->pt_list, &fence->pt_list_head); - sync_pt_activate(pt); + fence->num_fences = 1; + atomic_set(&fence->status, 1);
- /* - * signal the fence in case pt was activated before - * sync_pt_activate(pt) was called - */ - sync_fence_signal_pt(pt); + fence_get(&pt->base); + fence->cbs[0].sync_pt = &pt->base; + fence->cbs[0].fence = fence; + if (fence_add_callback(&pt->base, &fence->cbs[0].cb, fence_check_cb_func)) + atomic_dec(&fence->status); + + sync_fence_debug_add(fence);
return fence; } EXPORT_SYMBOL(sync_fence_create);
-static int sync_fence_copy_pts(struct sync_fence *dst, struct sync_fence *src) -{ - struct list_head *pos; - - list_for_each(pos, &src->pt_list_head) { - struct sync_pt *orig_pt = - container_of(pos, struct sync_pt, pt_list); - struct sync_pt *new_pt = sync_pt_dup(orig_pt); - - if (new_pt == NULL) - return -ENOMEM; - - new_pt->fence = dst; - list_add(&new_pt->pt_list, &dst->pt_list_head); - } - - return 0; -} - -static int sync_fence_merge_pts(struct sync_fence *dst, struct sync_fence *src) -{ - struct list_head *src_pos, *dst_pos, *n; - - list_for_each(src_pos, &src->pt_list_head) { - struct sync_pt *src_pt = - container_of(src_pos, struct sync_pt, pt_list); - bool collapsed = false; - - list_for_each_safe(dst_pos, n, &dst->pt_list_head) { - struct sync_pt *dst_pt = - container_of(dst_pos, struct sync_pt, pt_list); - /* collapse two sync_pts on the same timeline - * to a single sync_pt that will signal at - * the later of the two - */ - if (dst_pt->parent == src_pt->parent) { - if (dst_pt->parent->ops->compare(dst_pt, src_pt) - == -1) { - struct sync_pt *new_pt = - sync_pt_dup(src_pt); - if (new_pt == NULL) - return -ENOMEM; - - new_pt->fence = dst; - list_replace(&dst_pt->pt_list, - &new_pt->pt_list); - sync_pt_free(dst_pt); - } - collapsed = true; - break; - } - } - - if (!collapsed) { - struct sync_pt *new_pt = sync_pt_dup(src_pt); - - if (new_pt == NULL) - return -ENOMEM; - - new_pt->fence = dst; - list_add(&new_pt->pt_list, &dst->pt_list_head); - } - } - - return 0; -} - -static void sync_fence_detach_pts(struct sync_fence *fence) -{ - struct list_head *pos, *n; - - list_for_each_safe(pos, n, &fence->pt_list_head) { - struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list); - sync_timeline_remove_pt(pt); - } -} - -static void sync_fence_free_pts(struct sync_fence *fence) -{ - struct list_head *pos, *n; - - list_for_each_safe(pos, n, &fence->pt_list_head) { - struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list); - sync_pt_free(pt); - } -} - struct sync_fence *sync_fence_fdget(int fd) { struct file *file = fget(fd); @@ -428,197 +231,148 @@ void sync_fence_install(struct sync_fence *fence, int fd) } EXPORT_SYMBOL(sync_fence_install);
-static int sync_fence_get_status(struct sync_fence *fence) -{ - struct list_head *pos; - int status = 1; +static void sync_fence_add_pt(struct sync_fence *fence, int *i, struct fence *pt) { + fence->cbs[*i].sync_pt = pt; + fence->cbs[*i].fence = fence;
- list_for_each(pos, &fence->pt_list_head) { - struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list); - int pt_status = pt->status; - - if (pt_status < 0) { - status = pt_status; - break; - } else if (status == 1) { - status = pt_status; - } + if (!fence_add_callback(pt, &fence->cbs[*i].cb, fence_check_cb_func)) { + fence_get(pt); + (*i)++; } - - return status; }
struct sync_fence *sync_fence_merge(const char *name, struct sync_fence *a, struct sync_fence *b) { + int num_fences = a->num_fences + b->num_fences; struct sync_fence *fence; - struct list_head *pos; - int err; + int i, i_a, i_b;
- fence = sync_fence_alloc(name); + fence = sync_fence_alloc(offsetof(struct sync_fence, cbs[num_fences]), name); if (fence == NULL) return NULL;
- err = sync_fence_copy_pts(fence, a); - if (err < 0) - goto err; + atomic_set(&fence->status, num_fences);
- err = sync_fence_merge_pts(fence, b); - if (err < 0) - goto err; + /* + * Assume sync_fence a and b are both ordered and have no + * duplicates with the same context. + * + * If a sync_fence can only be created with sync_fence_merge + * and sync_fence_create, this is a reasonable assumption. + */ + for (i = i_a = i_b = 0; i_a < a->num_fences && i_b < b->num_fences; ) { + struct fence *pt_a = a->cbs[i_a].sync_pt; + struct fence *pt_b = b->cbs[i_b].sync_pt; + + if (pt_a->context < pt_b->context) { + sync_fence_add_pt(fence, &i, pt_a);
- list_for_each(pos, &fence->pt_list_head) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, pt_list); - sync_pt_activate(pt); + i_a++; + } else if (pt_a->context > pt_b->context) { + sync_fence_add_pt(fence, &i, pt_b); + + i_b++; + } else { + if (pt_a->seqno - pt_b->seqno <= INT_MAX) + sync_fence_add_pt(fence, &i, pt_a); + else + sync_fence_add_pt(fence, &i, pt_b); + + i_a++; + i_b++; + } }
- /* - * signal the fence in case one of it's pts were activated before - * they were activated - */ - sync_fence_signal_pt(list_first_entry(&fence->pt_list_head, - struct sync_pt, - pt_list)); + for (; i_a < a->num_fences; i_a++) + sync_fence_add_pt(fence, &i, a->cbs[i_a].sync_pt); + + for (; i_b < b->num_fences; i_b++) + sync_fence_add_pt(fence, &i, b->cbs[i_b].sync_pt); + + if (num_fences > i) + atomic_sub(num_fences - i, &fence->status); + fence->num_fences = i;
+ sync_fence_debug_add(fence); return fence; -err: - sync_fence_free_pts(fence); - kfree(fence); - return NULL; } EXPORT_SYMBOL(sync_fence_merge);
-static void sync_fence_signal_pt(struct sync_pt *pt) +int sync_fence_wake_up_wq(wait_queue_t *curr, unsigned mode, + int wake_flags, void *key) { - LIST_HEAD(signaled_waiters); - struct sync_fence *fence = pt->fence; - struct list_head *pos; - struct list_head *n; - unsigned long flags; - int status; - - status = sync_fence_get_status(fence); - - spin_lock_irqsave(&fence->waiter_list_lock, flags); - /* - * this should protect against two threads racing on the signaled - * false -> true transition - */ - if (status && !fence->status) { - list_for_each_safe(pos, n, &fence->waiter_list_head) - list_move(pos, &signaled_waiters); - - fence->status = status; - } else { - status = 0; - } - spin_unlock_irqrestore(&fence->waiter_list_lock, flags); - - if (status) { - list_for_each_safe(pos, n, &signaled_waiters) { - struct sync_fence_waiter *waiter = - container_of(pos, struct sync_fence_waiter, - waiter_list); + struct sync_fence_waiter *wait = container_of(curr, struct sync_fence_waiter, work); + list_del_init(&wait->work.task_list);
- list_del(pos); - waiter->callback(fence, waiter); - } - wake_up(&fence->wq); - } + wait->callback(wait->work.private, wait); + return 1; }
int sync_fence_wait_async(struct sync_fence *fence, struct sync_fence_waiter *waiter) { + int err = atomic_read(&fence->status); unsigned long flags; - int err = 0;
- spin_lock_irqsave(&fence->waiter_list_lock, flags); + if (err < 0) + return err;
- if (fence->status) { - err = fence->status; - goto out; - } + if (!err) + return 1;
- list_add_tail(&waiter->waiter_list, &fence->waiter_list_head); -out: - spin_unlock_irqrestore(&fence->waiter_list_lock, flags); + init_waitqueue_func_entry(&waiter->work, sync_fence_wake_up_wq); + waiter->work.private = fence;
- return err; + spin_lock_irqsave(&fence->wq.lock, flags); + err = atomic_read(&fence->status); + if (err > 0) + __add_wait_queue_tail(&fence->wq, &waiter->work); + spin_unlock_irqrestore(&fence->wq.lock, flags); + + if (err < 0) + return err; + + return !err; } EXPORT_SYMBOL(sync_fence_wait_async);
int sync_fence_cancel_async(struct sync_fence *fence, struct sync_fence_waiter *waiter) { - struct list_head *pos; - struct list_head *n; unsigned long flags; - int ret = -ENOENT; + int ret = 0;
- spin_lock_irqsave(&fence->waiter_list_lock, flags); - /* - * Make sure waiter is still in waiter_list because it is possible for - * the waiter to be removed from the list while the callback is still - * pending. - */ - list_for_each_safe(pos, n, &fence->waiter_list_head) { - struct sync_fence_waiter *list_waiter = - container_of(pos, struct sync_fence_waiter, - waiter_list); - if (list_waiter == waiter) { - list_del(pos); - ret = 0; - break; - } - } - spin_unlock_irqrestore(&fence->waiter_list_lock, flags); + spin_lock_irqsave(&fence->wq.lock, flags); + if (!list_empty(&waiter->work.task_list)) + list_del_init(&waiter->work.task_list); + else + ret = -ENOENT; + spin_unlock_irqrestore(&fence->wq.lock, flags); return ret; } EXPORT_SYMBOL(sync_fence_cancel_async);
-static bool sync_fence_check(struct sync_fence *fence) -{ - /* - * Make sure that reads to fence->status are ordered with the - * wait queue event triggering - */ - smp_rmb(); - return fence->status != 0; -} - int sync_fence_wait(struct sync_fence *fence, long timeout) { - int err = 0; - struct sync_pt *pt; - - trace_sync_wait(fence, 1); - list_for_each_entry(pt, &fence->pt_list_head, pt_list) - trace_sync_pt(pt); + long ret; + int i;
- if (timeout > 0) { + if (timeout < 0) + timeout = MAX_SCHEDULE_TIMEOUT; + else timeout = msecs_to_jiffies(timeout); - err = wait_event_interruptible_timeout(fence->wq, - sync_fence_check(fence), - timeout); - } else if (timeout < 0) { - err = wait_event_interruptible(fence->wq, - sync_fence_check(fence)); - } - trace_sync_wait(fence, 0); - - if (err < 0) - return err;
- if (fence->status < 0) { - pr_info("fence error %d on [%p]\n", fence->status, fence); - sync_dump(); - return fence->status; - } + trace_sync_wait(fence, 1); + for (i = 0; i < fence->num_fences; ++i) + trace_sync_pt(fence->cbs[i].sync_pt); + ret = wait_event_interruptible_timeout(fence->wq, atomic_read(&fence->status) <= 0, timeout); + trace_sync_wait(fence, 0);
- if (fence->status == 0) { - if (timeout > 0) { + if (ret < 0) + return ret; + else if (ret == 0) { + if (timeout) { pr_info("fence timeout on [%p] after %dms\n", fence, jiffies_to_msecs(timeout)); sync_dump(); @@ -626,15 +380,132 @@ int sync_fence_wait(struct sync_fence *fence, long timeout) return -ETIME; }
- return 0; + ret = atomic_read(&fence->status); + if (ret) { + pr_info("fence error %ld on [%p]\n", ret, fence); + sync_dump(); + } + return ret; } EXPORT_SYMBOL(sync_fence_wait);
+static const char *android_fence_get_driver_name(struct fence *fence) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + return parent->ops->driver_name; +} + +static const char *android_fence_get_timeline_name(struct fence *fence) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + return parent->name; +} + +static void android_fence_release(struct fence *fence) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + unsigned long flags; + + spin_lock_irqsave(fence->lock, flags); + list_del(&pt->child_list); + if (WARN_ON_ONCE(!list_empty(&pt->active_list))) + list_del(&pt->active_list); + spin_unlock_irqrestore(fence->lock, flags); + + if (parent->ops->free_pt) + parent->ops->free_pt(pt); + + sync_timeline_put(parent); + kfree(pt); +} + +static bool android_fence_signaled(struct fence *fence) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + int ret; + + ret = parent->ops->has_signaled(pt); + if (ret < 0) + fence->status = ret; + return ret; +} + +static bool android_fence_enable_signaling(struct fence *fence) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + if (android_fence_signaled(fence)) + return false; + + list_add_tail(&pt->active_list, &parent->active_list_head); + return true; +} + +static int android_fence_fill_driver_data(struct fence *fence, void *data, int size) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + if (!parent->ops->fill_driver_data) + return 0; + return parent->ops->fill_driver_data(pt, data, size); +} + +static void android_fence_value_str(struct fence *fence, char *str, int size) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + if (!parent->ops->pt_value_str) { + if (size) + *str = 0; + return; + } + parent->ops->pt_value_str(pt, str, size); +} + +static void android_fence_timeline_value_str(struct fence *fence, char *str, int size) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + if (!parent->ops->timeline_value_str) { + if (size) + *str = 0; + return; + } + parent->ops->timeline_value_str(parent, str, size); +} + +static const struct fence_ops android_fence_ops = { + .get_driver_name = android_fence_get_driver_name, + .get_timeline_name = android_fence_get_timeline_name, + .enable_signaling = android_fence_enable_signaling, + .signaled = android_fence_signaled, + .wait = fence_default_wait, + .release = android_fence_release, + .fill_driver_data = android_fence_fill_driver_data, + .fence_value_str = android_fence_value_str, + .timeline_value_str = android_fence_timeline_value_str, +}; + static void sync_fence_free(struct kref *kref) { struct sync_fence *fence = container_of(kref, struct sync_fence, kref); + int i, status = atomic_read(&fence->status);
- sync_fence_free_pts(fence); + for (i = 0; i < fence->num_fences; ++i) { + if (status) + fence_remove_callback(fence->cbs[i].sync_pt, &fence->cbs[i].cb); + fence_put(fence->cbs[i].sync_pt); + }
kfree(fence); } @@ -642,44 +513,25 @@ static void sync_fence_free(struct kref *kref) static int sync_fence_release(struct inode *inode, struct file *file) { struct sync_fence *fence = file->private_data; - unsigned long flags; - - /* - * We need to remove all ways to access this fence before droping - * our ref. - * - * start with its membership in the global fence list - */ - spin_lock_irqsave(&sync_fence_list_lock, flags); - list_del(&fence->sync_fence_list); - spin_unlock_irqrestore(&sync_fence_list_lock, flags);
- /* - * remove its pts from their parents so that sync_timeline_signal() - * can't reference the fence. - */ - sync_fence_detach_pts(fence); + sync_fence_debug_remove(fence);
kref_put(&fence->kref, sync_fence_free); - return 0; }
static unsigned int sync_fence_poll(struct file *file, poll_table *wait) { struct sync_fence *fence = file->private_data; + int status;
poll_wait(file, &fence->wq, wait);
- /* - * Make sure that reads to fence->status are ordered with the - * wait queue event triggering - */ - smp_rmb(); + status = atomic_read(&fence->status);
- if (fence->status == 1) + if (!status) return POLLIN; - else if (fence->status < 0) + else if (status < 0) return POLLERR; else return 0; @@ -744,7 +596,7 @@ err_put_fd: return err; }
-static int sync_fill_pt_info(struct sync_pt *pt, void *data, int size) +static int sync_fill_pt_info(struct fence *fence, void *data, int size) { struct sync_pt_info *info = data; int ret; @@ -754,20 +606,23 @@ static int sync_fill_pt_info(struct sync_pt *pt, void *data, int size)
info->len = sizeof(struct sync_pt_info);
- if (pt->parent->ops->fill_driver_data) { - ret = pt->parent->ops->fill_driver_data(pt, info->driver_data, - size - sizeof(*info)); + if (fence->ops->fill_driver_data) { + ret = fence->ops->fill_driver_data(fence, info->driver_data, + size - sizeof(*info)); if (ret < 0) return ret;
info->len += ret; }
- strlcpy(info->obj_name, pt->parent->name, sizeof(info->obj_name)); - strlcpy(info->driver_name, pt->parent->ops->driver_name, + strlcpy(info->obj_name, fence->ops->get_timeline_name(fence), sizeof(info->obj_name)); + strlcpy(info->driver_name, fence->ops->get_driver_name(fence), sizeof(info->driver_name)); - info->status = pt->status; - info->timestamp_ns = ktime_to_ns(pt->timestamp); + if (fence_is_signaled(fence)) + info->status = fence->status >= 0 ? 1 : fence->status; + else + info->status = 0; + info->timestamp_ns = ktime_to_ns(fence->timestamp);
return info->len; } @@ -776,10 +631,9 @@ static long sync_fence_ioctl_fence_info(struct sync_fence *fence, unsigned long arg) { struct sync_fence_info_data *data; - struct list_head *pos; __u32 size; __u32 len = 0; - int ret; + int ret, i;
if (copy_from_user(&size, (void __user *)arg, sizeof(size))) return -EFAULT; @@ -795,12 +649,14 @@ static long sync_fence_ioctl_fence_info(struct sync_fence *fence, return -ENOMEM;
strlcpy(data->name, fence->name, sizeof(data->name)); - data->status = fence->status; + data->status = atomic_read(&fence->status); + if (data->status >= 0) + data->status = !data->status; + len = sizeof(struct sync_fence_info_data);
- list_for_each(pos, &fence->pt_list_head) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, pt_list); + for (i = 0; i < fence->num_fences; ++i) { + struct fence *pt = fence->cbs[i].sync_pt;
ret = sync_fill_pt_info(pt, (u8 *)data + len, size - len);
@@ -842,176 +698,10 @@ static long sync_fence_ioctl(struct file *file, unsigned int cmd, } }
-#ifdef CONFIG_DEBUG_FS -static const char *sync_status_str(int status) -{ - if (status > 0) - return "signaled"; - else if (status == 0) - return "active"; - else - return "error"; -} - -static void sync_print_pt(struct seq_file *s, struct sync_pt *pt, bool fence) -{ - int status = pt->status; - seq_printf(s, " %s%spt %s", - fence ? pt->parent->name : "", - fence ? "_" : "", - sync_status_str(status)); - if (pt->status) { - struct timeval tv = ktime_to_timeval(pt->timestamp); - seq_printf(s, "@%ld.%06ld", tv.tv_sec, tv.tv_usec); - } - - if (pt->parent->ops->timeline_value_str && - pt->parent->ops->pt_value_str) { - char value[64]; - pt->parent->ops->pt_value_str(pt, value, sizeof(value)); - seq_printf(s, ": %s", value); - if (fence) { - pt->parent->ops->timeline_value_str(pt->parent, value, - sizeof(value)); - seq_printf(s, " / %s", value); - } - } else if (pt->parent->ops->print_pt) { - seq_puts(s, ": "); - pt->parent->ops->print_pt(s, pt); - } - - seq_puts(s, "\n"); -} - -static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj) -{ - struct list_head *pos; - unsigned long flags; - - seq_printf(s, "%s %s", obj->name, obj->ops->driver_name); - - if (obj->ops->timeline_value_str) { - char value[64]; - obj->ops->timeline_value_str(obj, value, sizeof(value)); - seq_printf(s, ": %s", value); - } else if (obj->ops->print_obj) { - seq_puts(s, ": "); - obj->ops->print_obj(s, obj); - } - - seq_puts(s, "\n"); - - spin_lock_irqsave(&obj->child_list_lock, flags); - list_for_each(pos, &obj->child_list_head) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, child_list); - sync_print_pt(s, pt, false); - } - spin_unlock_irqrestore(&obj->child_list_lock, flags); -} - -static void sync_print_fence(struct seq_file *s, struct sync_fence *fence) -{ - struct list_head *pos; - unsigned long flags; - - seq_printf(s, "[%p] %s: %s\n", fence, fence->name, - sync_status_str(fence->status)); - - list_for_each(pos, &fence->pt_list_head) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, pt_list); - sync_print_pt(s, pt, true); - } - - spin_lock_irqsave(&fence->waiter_list_lock, flags); - list_for_each(pos, &fence->waiter_list_head) { - struct sync_fence_waiter *waiter = - container_of(pos, struct sync_fence_waiter, - waiter_list); - - seq_printf(s, "waiter %pF\n", waiter->callback); - } - spin_unlock_irqrestore(&fence->waiter_list_lock, flags); -} - -static int sync_debugfs_show(struct seq_file *s, void *unused) -{ - unsigned long flags; - struct list_head *pos; - - seq_puts(s, "objs:\n--------------\n"); - - spin_lock_irqsave(&sync_timeline_list_lock, flags); - list_for_each(pos, &sync_timeline_list_head) { - struct sync_timeline *obj = - container_of(pos, struct sync_timeline, - sync_timeline_list); - - sync_print_obj(s, obj); - seq_puts(s, "\n"); - } - spin_unlock_irqrestore(&sync_timeline_list_lock, flags); - - seq_puts(s, "fences:\n--------------\n"); - - spin_lock_irqsave(&sync_fence_list_lock, flags); - list_for_each(pos, &sync_fence_list_head) { - struct sync_fence *fence = - container_of(pos, struct sync_fence, sync_fence_list); - - sync_print_fence(s, fence); - seq_puts(s, "\n"); - } - spin_unlock_irqrestore(&sync_fence_list_lock, flags); - return 0; -} - -static int sync_debugfs_open(struct inode *inode, struct file *file) -{ - return single_open(file, sync_debugfs_show, inode->i_private); -} - -static const struct file_operations sync_debugfs_fops = { - .open = sync_debugfs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, +static const 
struct file_operations sync_fence_fops = { + .release = sync_fence_release, + .poll = sync_fence_poll, + .unlocked_ioctl = sync_fence_ioctl, + .compat_ioctl = sync_fence_ioctl, };
-static __init int sync_debugfs_init(void) -{ - debugfs_create_file("sync", S_IRUGO, NULL, NULL, &sync_debugfs_fops); - return 0; -} -late_initcall(sync_debugfs_init); - -#define DUMP_CHUNK 256 -static char sync_dump_buf[64 * 1024]; -static void sync_dump(void) -{ - struct seq_file s = { - .buf = sync_dump_buf, - .size = sizeof(sync_dump_buf) - 1, - }; - int i; - - sync_debugfs_show(&s, NULL); - - for (i = 0; i < s.count; i += DUMP_CHUNK) { - if ((s.count - i) > DUMP_CHUNK) { - char c = s.buf[i + DUMP_CHUNK]; - s.buf[i + DUMP_CHUNK] = 0; - pr_cont("%s", s.buf + i); - s.buf[i + DUMP_CHUNK] = c; - } else { - s.buf[s.count] = 0; - pr_cont("%s", s.buf + i); - } - } -} -#else -static void sync_dump(void) -{ -} -#endif diff --git a/drivers/staging/android/sync.h b/drivers/staging/android/sync.h index 62e2255b1c1e..6036dbdc8e6f 100644 --- a/drivers/staging/android/sync.h +++ b/drivers/staging/android/sync.h @@ -21,6 +21,7 @@ #include <linux/list.h> #include <linux/spinlock.h> #include <linux/wait.h> +#include <linux/fence.h>
struct sync_timeline; struct sync_pt; @@ -40,8 +41,6 @@ struct sync_fence; * -1 if a will signal before b * @free_pt: called before sync_pt is freed * @release_obj: called before sync_timeline is freed - * @print_obj: deprecated - * @print_pt: deprecated * @fill_driver_data: write implementation specific driver data to data. * should return an error if there is not enough room * as specified by size. This information is returned @@ -67,13 +66,6 @@ struct sync_timeline_ops { /* optional */ void (*release_obj)(struct sync_timeline *sync_timeline);
- /* deprecated */ - void (*print_obj)(struct seq_file *s, - struct sync_timeline *sync_timeline); - - /* deprecated */ - void (*print_pt)(struct seq_file *s, struct sync_pt *sync_pt); - /* optional */ int (*fill_driver_data)(struct sync_pt *syncpt, void *data, int size);
@@ -104,42 +96,48 @@ struct sync_timeline {
/* protected by child_list_lock */ bool destroyed; + int context, value;
struct list_head child_list_head; spinlock_t child_list_lock;
struct list_head active_list_head; - spinlock_t active_list_lock;
+#ifdef CONFIG_DEBUG_FS struct list_head sync_timeline_list; +#endif };
/** * struct sync_pt - sync point - * @parent: sync_timeline to which this sync_pt belongs + * @fence: base fence class * @child_list: membership in sync_timeline.child_list_head * @active_list: membership in sync_timeline.active_list_head - * @signaled_list: membership in temporary signaled_list on stack - * @fence: sync_fence to which the sync_pt belongs - * @pt_list: membership in sync_fence.pt_list_head - * @status: 1: signaled, 0:active, <0: error - * @timestamp: time which sync_pt status transitioned from active to - * signaled or error. */ struct sync_pt { - struct sync_timeline *parent; - struct list_head child_list; + struct fence base;
+ struct list_head child_list; struct list_head active_list; - struct list_head signaled_list; - - struct sync_fence *fence; - struct list_head pt_list; +};
- /* protected by parent->active_list_lock */ - int status; +static inline struct sync_timeline *sync_pt_parent(struct sync_pt *pt) { + return container_of(pt->base.lock, struct sync_timeline, child_list_lock); +}
- ktime_t timestamp; +struct sync_fence_cb { + struct fence_cb cb; + struct fence *sync_pt; + struct sync_fence *fence; };
/** @@ -149,9 +147,7 @@ struct sync_pt { * @name: name of sync_fence. Useful for debugging * @pt_list_head: list of sync_pts in the fence. immutable once fence * is created - * @waiter_list_head: list of asynchronous waiters on this fence - * @waiter_list_lock: lock protecting @waiter_list_head and @status - * @status: 1: signaled, 0:active, <0: error + * @status: 0: signaled, >0:active, <0: error * * @wq: wait queue for fence signaling * @sync_fence_list: membership in global fence list @@ -160,17 +156,15 @@ struct sync_fence { struct file *file; struct kref kref; char name[32]; - - /* this list is immutable once the fence is created */ - struct list_head pt_list_head; - - struct list_head waiter_list_head; - spinlock_t waiter_list_lock; /* also protects status */ - int status; +#ifdef CONFIG_DEBUG_FS + struct list_head sync_fence_list; +#endif + int num_fences;
wait_queue_head_t wq; + atomic_t status;
- struct list_head sync_fence_list; + struct sync_fence_cb cbs[]; };
struct sync_fence_waiter; @@ -184,14 +178,14 @@ typedef void (*sync_callback_t)(struct sync_fence *fence, * @callback_data: pointer to pass to @callback */ struct sync_fence_waiter { - struct list_head waiter_list; - - sync_callback_t callback; + wait_queue_t work; + sync_callback_t callback; };
static inline void sync_fence_waiter_init(struct sync_fence_waiter *waiter, sync_callback_t callback) { + INIT_LIST_HEAD(&waiter->work.task_list); waiter->callback = callback; }
@@ -423,4 +417,22 @@ struct sync_fence_info_data { #define SYNC_IOC_FENCE_INFO _IOWR(SYNC_IOC_MAGIC, 2,\ struct sync_fence_info_data)
+#ifdef CONFIG_DEBUG_FS + +extern void sync_timeline_debug_add(struct sync_timeline *obj); +extern void sync_timeline_debug_remove(struct sync_timeline *obj); +extern void sync_fence_debug_add(struct sync_fence *fence); +extern void sync_fence_debug_remove(struct sync_fence *fence); +extern void sync_dump(void); + +#else +# define sync_timeline_debug_add(obj) +# define sync_timeline_debug_remove(obj) +# define sync_fence_debug_add(fence) +# define sync_fence_debug_remove(fence) +# define sync_dump() +#endif +int sync_fence_wake_up_wq(wait_queue_t *curr, unsigned mode, + int wake_flags, void *key); + #endif /* _LINUX_SYNC_H */ diff --git a/drivers/staging/android/sync_debug.c b/drivers/staging/android/sync_debug.c new file mode 100644 index 000000000000..2ef6496c7cd0 --- /dev/null +++ b/drivers/staging/android/sync_debug.c @@ -0,0 +1,245 @@ +/* + * drivers/base/sync.c + * + * Copyright (C) 2012 Google, Inc. + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ + +#include <linux/debugfs.h> +#include <linux/export.h> +#include <linux/file.h> +#include <linux/fs.h> +#include <linux/kernel.h> +#include <linux/poll.h> +#include <linux/sched.h> +#include <linux/seq_file.h> +#include <linux/slab.h> +#include <linux/uaccess.h> +#include <linux/anon_inodes.h> +#include "sync.h" + +#ifdef CONFIG_DEBUG_FS + +static LIST_HEAD(sync_timeline_list_head); +static DEFINE_SPINLOCK(sync_timeline_list_lock); +static LIST_HEAD(sync_fence_list_head); +static DEFINE_SPINLOCK(sync_fence_list_lock); + +void sync_timeline_debug_add(struct sync_timeline *obj) +{ + unsigned long flags; + + spin_lock_irqsave(&sync_timeline_list_lock, flags); + list_add_tail(&obj->sync_timeline_list, &sync_timeline_list_head); + spin_unlock_irqrestore(&sync_timeline_list_lock, flags); +} + +void sync_timeline_debug_remove(struct sync_timeline *obj) +{ + unsigned long flags; + + spin_lock_irqsave(&sync_timeline_list_lock, flags); + list_del(&obj->sync_timeline_list); + spin_unlock_irqrestore(&sync_timeline_list_lock, flags); +} + +void sync_fence_debug_add(struct sync_fence *fence) +{ + unsigned long flags; + + spin_lock_irqsave(&sync_fence_list_lock, flags); + list_add_tail(&fence->sync_fence_list, &sync_fence_list_head); + spin_unlock_irqrestore(&sync_fence_list_lock, flags); +} + +void sync_fence_debug_remove(struct sync_fence *fence) +{ + unsigned long flags; + + spin_lock_irqsave(&sync_fence_list_lock, flags); + list_del(&fence->sync_fence_list); + spin_unlock_irqrestore(&sync_fence_list_lock, flags); +} + +static const char *sync_status_str(int status) +{ + if (status == 0) + return "signaled"; + else if (status > 0) + return "active"; + else + return "error"; +} + +static void sync_print_pt(struct seq_file *s, struct sync_pt *pt, bool fence) +{ + int status = 1; + struct sync_timeline *parent = sync_pt_parent(pt); + + if (__fence_is_signaled(&pt->base)) + status = pt->base.status; + + seq_printf(s, " %s%spt %s", + fence ? parent->name : "", + fence ? 
"_" : "", + sync_status_str(status)); + + if (status <= 0) { + struct timeval tv = ktime_to_timeval(pt->base.timestamp); + seq_printf(s, "@%ld.%06ld", tv.tv_sec, tv.tv_usec); + } + + if (parent->ops->timeline_value_str && + parent->ops->pt_value_str) { + char value[64]; + parent->ops->pt_value_str(pt, value, sizeof(value)); + seq_printf(s, ": %s", value); + if (fence) { + parent->ops->timeline_value_str(parent, value, + sizeof(value)); + seq_printf(s, " / %s", value); + } + } + + seq_puts(s, "\n"); +} + +static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj) +{ + struct list_head *pos; + unsigned long flags; + + seq_printf(s, "%s %s", obj->name, obj->ops->driver_name); + + if (obj->ops->timeline_value_str) { + char value[64]; + obj->ops->timeline_value_str(obj, value, sizeof(value)); + seq_printf(s, ": %s", value); + } + + seq_puts(s, "\n"); + + spin_lock_irqsave(&obj->child_list_lock, flags); + list_for_each(pos, &obj->child_list_head) { + struct sync_pt *pt = + container_of(pos, struct sync_pt, child_list); + sync_print_pt(s, pt, false); + } + spin_unlock_irqrestore(&obj->child_list_lock, flags); +} + +static void sync_print_fence(struct seq_file *s, struct sync_fence *fence) +{ + wait_queue_t *pos; + unsigned long flags; + int i; + + seq_printf(s, "[%p] %s: %s\n", fence, fence->name, + sync_status_str(atomic_read(&fence->status))); + + for (i = 0; i < fence->num_fences; ++i) { + struct sync_pt *pt = + container_of(fence->cbs[i].sync_pt, struct sync_pt, base); + sync_print_pt(s, pt, true); + } + + spin_lock_irqsave(&fence->wq.lock, flags); + list_for_each_entry(pos, &fence->wq.task_list, task_list) { + struct sync_fence_waiter *waiter; + + if (pos->func != &sync_fence_wake_up_wq) + continue; + + waiter = container_of(pos, struct sync_fence_waiter, work); + + seq_printf(s, "waiter %pF\n", waiter->callback); + } + spin_unlock_irqrestore(&fence->wq.lock, flags); +} + +static int sync_debugfs_show(struct seq_file *s, void *unused) +{ + unsigned long flags; + struct list_head *pos; + + seq_puts(s, "objs:\n--------------\n"); + + spin_lock_irqsave(&sync_timeline_list_lock, flags); + list_for_each(pos, &sync_timeline_list_head) { + struct sync_timeline *obj = + container_of(pos, struct sync_timeline, + sync_timeline_list); + + sync_print_obj(s, obj); + seq_puts(s, "\n"); + } + spin_unlock_irqrestore(&sync_timeline_list_lock, flags); + + seq_puts(s, "fences:\n--------------\n"); + + spin_lock_irqsave(&sync_fence_list_lock, flags); + list_for_each(pos, &sync_fence_list_head) { + struct sync_fence *fence = + container_of(pos, struct sync_fence, sync_fence_list); + + sync_print_fence(s, fence); + seq_puts(s, "\n"); + } + spin_unlock_irqrestore(&sync_fence_list_lock, flags); + return 0; +} + +static int sync_debugfs_open(struct inode *inode, struct file *file) +{ + return single_open(file, sync_debugfs_show, inode->i_private); +} + +static const struct file_operations sync_debugfs_fops = { + .open = sync_debugfs_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + +static __init int sync_debugfs_init(void) +{ + debugfs_create_file("sync", S_IRUGO, NULL, NULL, &sync_debugfs_fops); + return 0; +} +late_initcall(sync_debugfs_init); + +#define DUMP_CHUNK 256 +static char sync_dump_buf[64 * 1024]; +void sync_dump(void) +{ + struct seq_file s = { + .buf = sync_dump_buf, + .size = sizeof(sync_dump_buf) - 1, + }; + int i; + + sync_debugfs_show(&s, NULL); + + for (i = 0; i < s.count; i += DUMP_CHUNK) { + if ((s.count - i) > DUMP_CHUNK) { + char c = s.buf[i + 
DUMP_CHUNK]; + s.buf[i + DUMP_CHUNK] = 0; + pr_cont("%s", s.buf + i); + s.buf[i + DUMP_CHUNK] = c; + } else { + s.buf[s.count] = 0; + pr_cont("%s", s.buf + i); + } + } +} + +#endif diff --git a/drivers/staging/android/trace/sync.h b/drivers/staging/android/trace/sync.h index 95462359ba57..77edb977a7bf 100644 --- a/drivers/staging/android/trace/sync.h +++ b/drivers/staging/android/trace/sync.h @@ -45,7 +45,7 @@ TRACE_EVENT(sync_wait,
 	TP_fast_assign(
 			__assign_str(name, fence->name);
-			__entry->status = fence->status;
+			__entry->status = atomic_read(&fence->status);
 			__entry->begin = begin;
 	),

@@ -54,19 +54,19 @@ TRACE_EVENT(sync_wait,
 );

 TRACE_EVENT(sync_pt,
-	TP_PROTO(struct sync_pt *pt),
+	TP_PROTO(struct fence *pt),

 	TP_ARGS(pt),

 	TP_STRUCT__entry(
-		__string(timeline, pt->parent->name)
+		__string(timeline, pt->ops->get_timeline_name(pt))
 		__array(char, value, 32)
 	),

 	TP_fast_assign(
-		__assign_str(timeline, pt->parent->name);
-		if (pt->parent->ops->pt_value_str) {
-			pt->parent->ops->pt_value_str(pt, __entry->value,
+		__assign_str(timeline, pt->ops->get_timeline_name(pt));
+		if (pt->ops->fence_value_str) {
+			pt->ops->fence_value_str(pt, __entry->value,
 				sizeof(__entry->value));
 		} else {
 			__entry->value[0] = '\0';
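(As an aside: after this change the sync tracepoint goes through the generic struct fence_ops callbacks. A driver that wants readable trace output would back them with something along these lines; this is only a sketch, the example_* names are made up, and only get_timeline_name/fence_value_str are taken from the hunk above:)

static const char *example_get_timeline_name(struct fence *f)
{
	/* hypothetical: fixed name of the ring this fence belongs to */
	return "example-ring0";
}

static void example_fence_value_str(struct fence *f, char *str, int size)
{
	/* print the sequence number this fence signals at */
	snprintf(str, size, "%u", f->seqno);
}

static const struct fence_ops example_fence_ops = {
	.get_timeline_name = example_get_timeline_name,
	.fence_value_str = example_fence_value_str,
	/* .get_driver_name, .enable_signaling, .wait, ... omitted here */
};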
On 02/17/2014 04:57 PM, Maarten Lankhorst wrote:
[snip]

+static void fence_check_cb_func(struct fence *f, struct fence_cb *cb)
+{
+	struct sync_fence_cb *check = container_of(cb, struct sync_fence_cb, cb);
+	struct sync_fence *fence = check->fence;
+	// TODO: Add a fence->status member and check it
Hmm, C++ / C99 style comments make checkpatch.pl complain. Did you run this series through checkpatch?
/Thomas
On 19-02-14 14:56, Thomas Hellstrom wrote:
+static void fence_check_cb_func(struct fence *f, struct fence_cb *cb)
+{
+	struct sync_fence_cb *check = container_of(cb, struct sync_fence_cb, cb);
+	struct sync_fence *fence = check->fence;
+	// TODO: Add a fence->status member and check it
Hmm, C++ / C99 style comments make checkpatch.pl complain. Did you run this series through checkpatch?
/Thomas
Actually, I used a C99 comment there on purpose: it shouldn't have been in the sent patch at all. ;-)
Right below that comment I use fence->status, so the right thing to do was to zap the comment.
Thanks for catching it!
~Maarten
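(With the comment zapped and the atomic fence->status actually used, the callback boils down to roughly the following; a sketch reconstructed from the discussion above, not a quote of the final patch:)

static void fence_check_cb_func(struct fence *f, struct fence_cb *cb)
{
	struct sync_fence_cb *check = container_of(cb, struct sync_fence_cb, cb);
	struct sync_fence *fence = check->fence;

	/* last outstanding fence fired: wake everyone waiting on the sync_fence */
	if (atomic_dec_and_test(&fence->status))
		wake_up_all(&fence->wq);
}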
On Mon, Feb 17, 2014 at 04:57:19PM +0100, Maarten Lankhorst wrote:
Android syncpoints can be mapped to a timeline. This removes the need to maintain a separate api for synchronization. I've left the android trace events in place, but the core fence events should already be sufficient for debugging.
v2:
- Call fence_remove_callback in sync_fence_free if not all fences have fired.
v3:
- Merge Colin Cross' bugfixes, and the android fence merge optimization.
v4:
- Merge with the upstream fixes.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Snipped everything but headers - Ian Lister from our android team is signed up to have a more in-depth look at proper integration with android syncpoints. Adding him to cc.
diff --git a/drivers/staging/android/sync.h b/drivers/staging/android/sync.h
index 62e2255b1c1e..6036dbdc8e6f 100644
--- a/drivers/staging/android/sync.h
+++ b/drivers/staging/android/sync.h
@@ -21,6 +21,7 @@
 #include <linux/list.h>
 #include <linux/spinlock.h>
 #include <linux/wait.h>
+#include <linux/fence.h>

 struct sync_timeline;
 struct sync_pt;
@@ -40,8 +41,6 @@ struct sync_fence;
  *	     -1 if a will signal before b
  * @free_pt: called before sync_pt is freed
  * @release_obj: called before sync_timeline is freed
- * @print_obj: deprecated
- * @print_pt: deprecated
  * @fill_driver_data: write implementation specific driver data to data.
  *		      should return an error if there is not enough room
  *		      as specified by size. This information is returned
@@ -67,13 +66,6 @@ struct sync_timeline_ops {
 	/* optional */
 	void (*release_obj)(struct sync_timeline *sync_timeline);

-	/* deprecated */
-	void (*print_obj)(struct seq_file *s,
-			  struct sync_timeline *sync_timeline);
-
-	/* deprecated */
-	void (*print_pt)(struct seq_file *s, struct sync_pt *sync_pt);
-
 	/* optional */
 	int (*fill_driver_data)(struct sync_pt *syncpt, void *data, int size);
@@ -104,42 +96,48 @@ struct sync_timeline {
 	/* protected by child_list_lock */
 	bool destroyed;
+	int context, value;

 	struct list_head	child_list_head;
 	spinlock_t		child_list_lock;

 	struct list_head	active_list_head;
-	spinlock_t		active_list_lock;

+#ifdef CONFIG_DEBUG_FS
 	struct list_head	sync_timeline_list;
+#endif
 };

 /**
  * struct sync_pt - sync point
- * @parent: sync_timeline to which this sync_pt belongs
+ * @fence: base fence class
  * @child_list: membership in sync_timeline.child_list_head
  * @active_list: membership in sync_timeline.active_list_head
+<<<<<<< current
- * @signaled_list: membership in temporary signaled_list on stack
- * @fence: sync_fence to which the sync_pt belongs
- * @pt_list: membership in sync_fence.pt_list_head
- * @status: 1: signaled, 0:active, <0: error
- * @timestamp: time which sync_pt status transitioned from active to
- *	       signaled or error.
+=======
+>>>>>>> patched
Conflict markers ...
  */
 struct sync_pt {
-	struct sync_timeline	*parent;
-	struct list_head	child_list;
+	struct fence base;
Hm, embedding feels wrong, since that still means that I'll need to implement two kinds of fences in i915 - one using the seqno fence to make dma-buf sync work, and one to implement sync_pt to make the android folks happy.
If I can dream I think we should have a pointer to an underlying fence here, i.e. a struct sync_pt would just be a userspace interface wrapper to do explicit syncing using native fences, instead of implicit syncing like with dma-bufs. But these are all drive-by comments from a very cursory high-level look. I might be full of myself again ;-) -Daniel
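(What Daniel sketches would look roughly like the following; purely hypothetical, not what this series implements:)

struct sync_pt {
	/* wrap instead of embed: any native fence could back a sync point */
	struct fence *fence;

	struct list_head child_list;
	struct list_head active_list;
};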
+	struct list_head	child_list;
 	struct list_head	active_list;
-	struct list_head	signaled_list;
-
-	struct sync_fence	*fence;
-	struct list_head	pt_list;
+};

-	/* protected by parent->active_list_lock */
-	int			status;
+static inline struct sync_timeline *sync_pt_parent(struct sync_pt *pt)
+{
+	return container_of(pt->base.lock, struct sync_timeline,
+			    child_list_lock);
+}

-	ktime_t			timestamp;
+struct sync_fence_cb {
+	struct fence_cb cb;
+	struct fence *sync_pt;
+	struct sync_fence *fence;
+};
 /**
@@ -149,9 +147,7 @@ struct sync_pt {
  * @name:		name of sync_fence. Useful for debugging
- * @pt_list_head:	list of sync_pts in the fence. immutable once fence
- *			is created
- * @waiter_list_head:	list of asynchronous waiters on this fence
- * @waiter_list_lock:	lock protecting @waiter_list_head and @status
- * @status:		1: signaled, 0:active, <0: error
+ * @status:		0: signaled, >0:active, <0: error
  * @wq:			wait queue for fence signaling
  * @sync_fence_list:	membership in global fence list
@@ -160,17 +156,15 @@ struct sync_fence {
 	struct file		*file;
 	struct kref		kref;
 	char			name[32];
-
-	/* this list is immutable once the fence is created */
-	struct list_head	pt_list_head;
-
-	struct list_head	waiter_list_head;
-	spinlock_t		waiter_list_lock; /* also protects status */
-	int			status;
+#ifdef CONFIG_DEBUG_FS
+	struct list_head	sync_fence_list;
+#endif
+	int num_fences;

 	wait_queue_head_t	wq;
+	atomic_t		status;

-	struct list_head	sync_fence_list;
+	struct sync_fence_cb	cbs[];
 };
 struct sync_fence_waiter;
@@ -184,14 +178,14 @@ typedef void (*sync_callback_t)(struct sync_fence *fence,
  * @callback_data:	pointer to pass to @callback
  */
 struct sync_fence_waiter {
-	struct list_head	waiter_list;
-
-	sync_callback_t		callback;
+	wait_queue_t work;
+	sync_callback_t callback;
 };

 static inline void sync_fence_waiter_init(struct sync_fence_waiter *waiter,
					   sync_callback_t callback)
 {
+	INIT_LIST_HEAD(&waiter->work.task_list);
 	waiter->callback = callback;
 }
@@ -423,4 +417,22 @@ struct sync_fence_info_data {
 #define SYNC_IOC_FENCE_INFO	_IOWR(SYNC_IOC_MAGIC, 2,\
	struct sync_fence_info_data)

+#ifdef CONFIG_DEBUG_FS
+extern void sync_timeline_debug_add(struct sync_timeline *obj);
+extern void sync_timeline_debug_remove(struct sync_timeline *obj);
+extern void sync_fence_debug_add(struct sync_fence *fence);
+extern void sync_fence_debug_remove(struct sync_fence *fence);
+extern void sync_dump(void);
+#else
+# define sync_timeline_debug_add(obj)
+# define sync_timeline_debug_remove(obj)
+# define sync_fence_debug_add(fence)
+# define sync_fence_debug_remove(fence)
+# define sync_dump()
+#endif
+int sync_fence_wake_up_wq(wait_queue_t *curr, unsigned mode,
+			  int wake_flags, void *key);
+
 #endif /* _LINUX_SYNC_H */
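(For reference, the reworked waiter keeps the existing async-wait entry points usable. A sketch, assuming the unchanged sync_fence_wait_async()/sync_fence_cancel_async() API:)

static void example_wake_cb(struct sync_fence *fence,
			    struct sync_fence_waiter *waiter)
{
	/* runs once the fence has signaled or errored out */
}

static struct sync_fence_waiter example_waiter;

static void example_async_wait(struct sync_fence *fence)
{
	sync_fence_waiter_init(&example_waiter, example_wake_cb);

	/* sync_fence_wait_async() returns 1 if the fence already signaled */
	if (sync_fence_wait_async(fence, &example_waiter) == 1)
		example_wake_cb(fence, &example_waiter);
}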
On 03-03-14 22:11, Daniel Vetter wrote:
On Mon, Feb 17, 2014 at 04:57:19PM +0100, Maarten Lankhorst wrote:

[snip]
Hm, embedding feels wrong, since that still means that I'll need to implement two kinds of fences in i915 - one using the seqno fence to make dma-buf sync work, and one to implement sync_pt to make the android folks happy.
If I can dream I think we should have a pointer to an underlying fence here, i.e. a struct sync_pt would just be a userspace interface wrapper to do explicit syncing using native fences, instead of implicit syncing like with dma-bufs. But these are all drive-by comments from a very cursory high-level look. I might be full of myself again ;-) -Daniel
No, the idea is that because an android syncpoint is simply another type of dma-fence, if you deal with normal fences then android is automatically handled too. The userspace fence api android exposes could very easily be made to work for dma-fence: just pass a dma-fence to sync_fence_create. So exposing dma-fence would probably work for android too.
~Maarten
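(A rough sketch of that idea. This assumes a relaxed sync_fence_create() that accepts a plain struct fence - which is exactly the change being suggested here - plus the existing sync_fence_install() helper:)

static int example_export_fence_as_fd(struct fence *f)
{
	struct sync_fence *sf;
	int fd = get_unused_fd();

	if (fd < 0)
		return fd;

	/* hypothetical variant of sync_fence_create() taking a struct fence */
	sf = sync_fence_create("driver-fence", f);
	if (!sf) {
		put_unused_fd(fd);
		return -ENOMEM;
	}

	sync_fence_install(sf, fd);
	return fd;
}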
On Tue, Mar 04, 2014 at 08:50:38AM +0100, Maarten Lankhorst wrote:
On 03-03-14 22:11, Daniel Vetter wrote:
On Mon, Feb 17, 2014 at 04:57:19PM +0100, Maarten Lankhorst wrote:

[snip]
No, the idea is that because an android syncpoint is simply another type of dma-fence, if you deal with normal fences then android is automatically handled too. The userspace fence api android exposes could very easily be made to work for dma-fence: just pass a dma-fence to sync_fence_create. So exposing dma-fence would probably work for android too.
Hm, then why do we still have struct sync_pt around? Since it's just the internal bit, with the userspace facing object being struct sync_fence, I'd opt to shuffle any useful features into the core struct fence. -Daniel
On 04-03-14 09:14, Daniel Vetter wrote:
On Tue, Mar 04, 2014 at 08:50:38AM +0100, Maarten Lankhorst wrote:
On 03-03-14 22:11, Daniel Vetter wrote:
On Mon, Feb 17, 2014 at 04:57:19PM +0100, Maarten Lankhorst wrote:

[snip]
No, the idea is that because an android syncpoint is simply another type of dma-fence, if you deal with normal fences then android is automatically handled too. The userspace fence api android exposes could very easily be made to work for dma-fence: just pass a dma-fence to sync_fence_create. So exposing dma-fence would probably work for android too.
Hm, then why do we still have struct sync_pt around? Since it's just the internal bit, with the userspace facing object being struct sync_fence, I'd opt to shuffle any useful features into the core struct fence. -Daniel
To keep compatibility with the android api. I think gradually converting them is going to be more useful than forcing all drivers to use a new api all at once. They could keep the android syncpoint api for exporting, as long as they accept dma-fence for importing/waiting.
~Maarten
On Tue, Mar 04, 2014 at 09:20:58AM +0100, Maarten Lankhorst wrote:
On 04-03-14 09:14, Daniel Vetter wrote:
On Tue, Mar 04, 2014 at 08:50:38AM +0100, Maarten Lankhorst wrote:
On 03-03-14 22:11, Daniel Vetter wrote:
On Mon, Feb 17, 2014 at 04:57:19PM +0100, Maarten Lankhorst wrote:

[snip]
Hm, then why do we still have struct sync_pt around? Since it's just the internal bit, with the userspace facing object being struct sync_fence, I'd opt to shuffle any useful features into the core struct fence. -Daniel
To keep compatibility with the android api. I think gradually converting them is going to be more useful than forcing all drivers to use a new api all at once. They could keep the android syncpoint api for exporting, as long as they accept dma-fence for importing/waiting.
We don't have any users of the android sync_pt stuff (outside of the framework itself). So any considerations for existing drivers for upstreaming are imo moot, at least for the in-kernel interfaces used. For the actual userspace interface I guess keeping the android syncpt ioctls as-is has value, at least if we conclude that they're not badly broken. If they are, we need to fix that before moving it out of staging.
Or are we just talking past each another here? -Daniel
On 04-03-14 11:00, Daniel Vetter wrote:
On Tue, Mar 04, 2014 at 09:20:58AM +0100, Maarten Lankhorst wrote:
On 04-03-14 09:14, Daniel Vetter wrote:
On Tue, Mar 04, 2014 at 08:50:38AM +0100, Maarten Lankhorst wrote:
On 03-03-14 22:11, Daniel Vetter wrote:
On Mon, Feb 17, 2014 at 04:57:19PM +0100, Maarten Lankhorst wrote:

[snip]
We don't have any users of the android sync_pt stuff (outside of the framework itself). So any considerations for existing drivers for upstreaming are imo moot, at least for the in-kernel interfaces used. For the actual userspace interface I guess keeping the android syncpt ioctls as-is has value, at least if we conclude that they're not badly broken. If they are, we need to fix that before moving it out of staging.
Any driver has to implement sync_timeline_ops though, which interacts with sync_pt? And creating a userspace fence requires a sync_pt, although it could very trivially be changed to require a fence too.
~Maarten
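(To illustrate the dependency: even with fences underneath, a driver still fills in a sync_timeline_ops that works on sync_pt, much like sw_sync does. A condensed, hypothetical sketch; the my_* names are made up and most callbacks are omitted:)

struct my_timeline {
	struct sync_timeline	obj;
	u32			value;	/* last signaled value */
};

struct my_pt {
	struct sync_pt	pt;
	u32		value;	/* value this point signals at */
};

static int my_pt_has_signaled(struct sync_pt *sync_pt)
{
	struct my_pt *pt = (struct my_pt *)sync_pt;
	struct my_timeline *tl =
		(struct my_timeline *)sync_pt_parent(sync_pt);

	/* signaled once the timeline counter has reached this point */
	return tl->value >= pt->value;
}

static struct sync_timeline_ops my_timeline_ops = {
	.driver_name	= "my_driver",
	.has_signaled	= my_pt_has_signaled,
	/* .dup, .fill_driver_data, ... omitted in this sketch */
};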
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 include/linux/reservation.h | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index 813dae960ebd..92c4851b5a39 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -6,7 +6,7 @@
  * Copyright (C) 2012 Texas Instruments
  *
  * Authors:
- * Rob Clark <rob.clark@linaro.org>
+ * Rob Clark <robdclark@gmail.com>
  * Maarten Lankhorst <maarten.lankhorst@canonical.com>
  * Thomas Hellstrom <thellstrom-at-vmware-dot-com>
  *
@@ -40,22 +40,38 @@
 #define _LINUX_RESERVATION_H

 #include <linux/ww_mutex.h>
+#include <linux/fence.h>

 extern struct ww_class reservation_ww_class;

 struct reservation_object {
 	struct ww_mutex lock;
+
+	struct fence *fence_excl;
+	struct fence **fence_shared;
+	u32 fence_shared_count, fence_shared_max;
 };

 static inline void
 reservation_object_init(struct reservation_object *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);
+
+	obj->fence_shared_count = obj->fence_shared_max = 0;
+	obj->fence_shared = NULL;
+	obj->fence_excl = NULL;
 }

 static inline void
 reservation_object_fini(struct reservation_object *obj)
 {
+	int i;
+
+	if (obj->fence_excl)
+		fence_put(obj->fence_excl);
+	for (i = 0; i < obj->fence_shared_count; ++i)
+		fence_put(obj->fence_shared[i]);
+
 	ww_mutex_destroy(&obj->lock);
 }
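(With only this header in place there are no attach helpers yet, so a driver manages the fence pointers directly under the ww_mutex. For instance, with a hypothetical buffer object wrapper:)

struct my_bo {
	struct reservation_object resv;
	/* ... backing storage etc. ... */
};

static void my_bo_set_exclusive_fence(struct my_bo *bo, struct fence *f)
{
	ww_mutex_lock(&bo->resv.lock, NULL);

	/* drop the old exclusive fence, if any, and install the new one */
	if (bo->resv.fence_excl)
		fence_put(bo->resv.fence_excl);
	bo->resv.fence_excl = f ? fence_get(f) : NULL;

	ww_mutex_unlock(&bo->resv.lock);
}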
On Mon, Feb 17, 2014 at 10:58 AM, Maarten Lankhorst maarten.lankhorst@canonical.com wrote:
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Thanks to Fengguang Wu for spotting a missing static cast.
v2:
- Kill unused variable need_shared.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/base/dma-buf.c  | 101 +++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/dma-buf.h |  12 ++++++
 2 files changed, 113 insertions(+)
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 85e792c2c909..77ea621ab59d 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -30,6 +30,7 @@
 #include <linux/export.h>
 #include <linux/debugfs.h>
 #include <linux/seq_file.h>
+#include <linux/poll.h>
 #include <linux/reservation.h>

 static inline int is_dma_buf_file(struct file *);
@@ -52,6 +53,13 @@ static int dma_buf_release(struct inode *inode, struct file *file)

 	BUG_ON(dmabuf->vmapping_counter);

+	/*
+	 * Any fences that a dma-buf poll can wait on should be signaled
+	 * before releasing dma-buf. This is the responsibility of each
+	 * driver that uses the reservation objects.
+	 */
+	BUG_ON(dmabuf->cb_shared.active || dmabuf->cb_excl.active);
+
 	dmabuf->ops->release(dmabuf);

 	mutex_lock(&db_list.lock);
@@ -108,10 +116,99 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
 	return base + offset;
 }

+static void dma_buf_poll_cb(struct fence *fence, struct fence_cb *cb)
+{
+	struct dma_buf_poll_cb_t *dcb = (struct dma_buf_poll_cb_t *)cb;
+	unsigned long flags;
+
+	spin_lock_irqsave(&dcb->poll->lock, flags);
+	wake_up_locked_poll(dcb->poll, dcb->active);
+	dcb->active = 0;
+	spin_unlock_irqrestore(&dcb->poll->lock, flags);
+}
+
+static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
+{
+	struct dma_buf *dmabuf;
+	struct reservation_object *resv;
+	unsigned long events;
+
+	dmabuf = file->private_data;
+	if (!dmabuf || !dmabuf->resv)
+		return POLLERR;
+
+	resv = dmabuf->resv;
+
+	poll_wait(file, &dmabuf->poll, poll);
+
+	events = poll_requested_events(poll) & (POLLIN | POLLOUT);
+	if (!events)
+		return 0;
+
+	ww_mutex_lock(&resv->lock, NULL);
+
+	if (resv->fence_excl && (!(events & POLLOUT) ||
+				 resv->fence_shared_count == 0)) {
+		struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_excl;
+		unsigned long pevents = POLLIN;
+
+		if (resv->fence_shared_count == 0)
+			pevents |= POLLOUT;
+
+		spin_lock_irq(&dmabuf->poll.lock);
+		if (dcb->active) {
+			dcb->active |= pevents;
+			events &= ~pevents;
+		} else
+			dcb->active = pevents;
+		spin_unlock_irq(&dmabuf->poll.lock);
+
+		if (events & pevents) {
+			if (!fence_add_callback(resv->fence_excl,
+						&dcb->cb, dma_buf_poll_cb))
+				events &= ~pevents;
+			else
+				// No callback queued, wake up any additional waiters.
+				dma_buf_poll_cb(NULL, &dcb->cb);
+		}
+	}
+
+	if ((events & POLLOUT) && resv->fence_shared_count > 0) {
+		struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_shared;
+		int i;
+
+		/* Only queue a new callback if no event has fired yet */
+		spin_lock_irq(&dmabuf->poll.lock);
+		if (dcb->active)
+			events &= ~POLLOUT;
+		else
+			dcb->active = POLLOUT;
+		spin_unlock_irq(&dmabuf->poll.lock);
+
+		if (!(events & POLLOUT))
+			goto out;
+
+		for (i = 0; i < resv->fence_shared_count; ++i)
+			if (!fence_add_callback(resv->fence_shared[i],
+						&dcb->cb, dma_buf_poll_cb)) {
+				events &= ~POLLOUT;
+				break;
+			}
+
+		// No callback queued, wake up any additional waiters.
+		if (i == resv->fence_shared_count)
+			dma_buf_poll_cb(NULL, &dcb->cb);
+	}
+
+out:
+	ww_mutex_unlock(&resv->lock);
+	return events;
+}
+
 static const struct file_operations dma_buf_fops = {
 	.release	= dma_buf_release,
 	.mmap		= dma_buf_mmap_internal,
 	.llseek		= dma_buf_llseek,
+	.poll		= dma_buf_poll,
 };

 /*
@@ -171,6 +268,10 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops,
 	dmabuf->ops = ops;
 	dmabuf->size = size;
 	dmabuf->exp_name = exp_name;
+	init_waitqueue_head(&dmabuf->poll);
+	dmabuf->cb_excl.poll = dmabuf->cb_shared.poll = &dmabuf->poll;
+	dmabuf->cb_excl.active = dmabuf->cb_shared.active = 0;
+
 	if (!resv) {
 		resv = (struct reservation_object *)&dmabuf[1];
 		reservation_object_init(resv);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 34cfbac52c03..e1df18f584ef 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -30,6 +30,8 @@
 #include <linux/list.h>
 #include <linux/dma-mapping.h>
 #include <linux/fs.h>
+#include <linux/fence.h>
+#include <linux/wait.h>

 struct device;
 struct dma_buf;
@@ -130,6 +132,16 @@ struct dma_buf {
 	struct list_head list_node;
 	void *priv;
 	struct reservation_object *resv;
+
+	/* poll support */
+	wait_queue_head_t poll;
+
+	struct dma_buf_poll_cb_t {
+		struct fence_cb cb;
+		wait_queue_head_t *poll;
+
+		unsigned long active;
+	} cb_excl, cb_shared;
 };

 /**
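(From userspace this ends up looking like an ordinary poll(2) on the dma-buf fd: POLLIN waits for the exclusive fence only, POLLOUT blocks until all fences have signaled. A minimal sketch:)

#include <errno.h>
#include <poll.h>

/* Block until all fences signaled, i.e. until it is safe to write. */
static int wait_for_dmabuf_write(int dmabuf_fd)
{
	struct pollfd pfd = { .fd = dmabuf_fd, .events = POLLOUT };
	int ret;

	do {
		ret = poll(&pfd, 1, -1);
	} while (ret < 0 && errno == EINTR);

	return ret < 0 ? -errno : 0;
}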
On Mon, Feb 17, 2014 at 10:58 AM, Maarten Lankhorst maarten.lankhorst@canonical.com wrote:
[snip]

+	/*
+	 * Any fences that a dma-buf poll can wait on should be signaled
+	 * before releasing dma-buf. This is the responsibility of each
+	 * driver that uses the reservation objects.
Minor nit, but it wouldn't hurt to mention in the comment that if you hit this BUG_ON it is because someone isn't holding a ref to the dmabuf while there is a pending fence.
+	 */
+	BUG_ON(dmabuf->cb_shared.active || dmabuf->cb_excl.active);
+
 	dmabuf->ops->release(dmabuf);

[snip]
+		if (events & pevents) {
+			if (!fence_add_callback(resv->fence_excl,
+						&dcb->cb, dma_buf_poll_cb))
+				events &= ~pevents;
+			else
+				// No callback queued, wake up any additional waiters.
A couple of spots with C++ comments, which I assume you didn't mean to leave?
Anyways, other than those minor things,
Reviewed-by: Rob Clark <robdclark@gmail.com>