So after some more hacking I've moved dma-buf to its own subdirectory, drivers/dma-buf, and applied the fence patches in their new place. I believe the first patch should be applied regardless, and the rest should be ready now. :-)
Changes to the fence api:
- release_fence -> fence_release etc.
- __fence_init -> fence_init
- __fence_signal -> fence_signal_locked
- __fence_is_signaled -> fence_is_signaled_locked
- Changing BUG_ON to WARN_ON in fence_later, and return NULL if it triggers.
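Purely as an illustration of how the renamed calls fit together (a minimal sketch, not part of these patches; example_ops, example_lock and example_usage are made up, the fence_* calls are from include/linux/fence.h):

	#include <linux/spinlock.h>
	#include <linux/fence.h>

	static DEFINE_SPINLOCK(example_lock);	/* hypothetical driver lock */
	static struct fence example_fence;	/* hypothetical fence */

	static const char *example_name(struct fence *f)
	{
		return "example";
	}

	static bool example_enable_signaling(struct fence *f)
	{
		return true;	/* nothing to arm in this sketch */
	}

	static const struct fence_ops example_ops = {
		.get_driver_name = example_name,
		.get_timeline_name = example_name,
		.enable_signaling = example_enable_signaling,
		.wait = fence_default_wait,
	};

	static void example_usage(struct fence *other)
	{
		struct fence *later;

		/* was __fence_init() */
		fence_init(&example_fence, &example_ops, &example_lock,
			   fence_context_alloc(1), 1);

		/* was __fence_signal(); fence->lock must be held */
		spin_lock(&example_lock);
		fence_signal_locked(&example_fence);

		/* was __fence_is_signaled(); fence->lock must be held */
		WARN_ON(!fence_is_signaled_locked(&example_fence));
		spin_unlock(&example_lock);

		/* fence_later() now WARNs and returns NULL on a context
		 * mismatch, instead of hitting a BUG_ON */
		later = fence_later(&example_fence, other);
		if (!later)
			return;	/* mismatched contexts, or both signaled */
	}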
Android can expose fences to userspace. It's possible to make the new fence mechanism expose the same fences to userspace by changing sync_fence_create to take a struct fence instead of a struct sync_pt. No other change is needed, because only the fence parts of struct sync_pt are used. But because userspace fences are a separate problem that I haven't really looked into yet, I feel that part should stay in staging for now.
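A hypothetical sketch of that prototype change (the sync_pt form is what the staging header has today; the fence form is only the suggested follow-up, not part of this series):

	/* current, drivers/staging/android/sync.h */
	struct sync_fence *sync_fence_create(const char *name,
					     struct sync_pt *pt);

	/* suggested: take the embedded fence directly, since only the
	 * fence parts of struct sync_pt are used anyway */
	struct sync_fence *sync_fence_create(const char *name,
					     struct fence *fence);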
---
Maarten Lankhorst (9):
  dma-buf: move to drivers/dma-buf
  fence: dma-buf cross-device synchronization (v18)
  seqno-fence: Hardware dma-buf implementation of fencing (v6)
  dma-buf: use reservation objects
  android: convert sync to fence api, v6
  reservation: add support for fences to enable cross-device synchronisation
  dma-buf: add poll support, v3
  reservation: update api and add some helpers
  reservation: add suppport for read-only access using rcu
 Documentation/DocBook/device-drivers.tmpl      |    8
 MAINTAINERS                                    |    4
 drivers/Makefile                               |    1
 drivers/base/Kconfig                           |    9
 drivers/base/Makefile                          |    1
 drivers/base/dma-buf.c                         |  743 --------------------
 drivers/base/reservation.c                     |   39 -
 drivers/dma-buf/Makefile                       |    1
 drivers/dma-buf/dma-buf.c                      |  907 ++++++++++++++++++++++++
 drivers/dma-buf/fence.c                        |  431 +++++++++++
 drivers/dma-buf/reservation.c                  |  477 +++++++++++++
 drivers/dma-buf/seqno-fence.c                  |   73 ++
 drivers/gpu/drm/armada/armada_gem.c            |    2
 drivers/gpu/drm/drm_prime.c                    |    8
 drivers/gpu/drm/exynos/exynos_drm_dmabuf.c     |    2
 drivers/gpu/drm/i915/i915_gem_dmabuf.c         |    3
 drivers/gpu/drm/nouveau/nouveau_drm.c          |    1
 drivers/gpu/drm/nouveau/nouveau_gem.h          |    1
 drivers/gpu/drm/nouveau/nouveau_prime.c        |    7
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c      |    2
 drivers/gpu/drm/radeon/radeon_drv.c            |    2
 drivers/gpu/drm/radeon/radeon_prime.c          |    8
 drivers/gpu/drm/tegra/gem.c                    |    2
 drivers/gpu/drm/ttm/ttm_object.c               |    2
 drivers/media/v4l2-core/videobuf2-dma-contig.c |    2
 drivers/staging/android/Kconfig                |    1
 drivers/staging/android/Makefile               |    2
 drivers/staging/android/ion/ion.c              |    3
 drivers/staging/android/sw_sync.c              |    6
 drivers/staging/android/sync.c                 |  913 ++++++----------
 drivers/staging/android/sync.h                 |   79 +-
 drivers/staging/android/sync_debug.c           |  247 ++++++
 drivers/staging/android/trace/sync.h           |   12
 include/drm/drmP.h                             |    3
 include/linux/dma-buf.h                        |   21 -
 include/linux/fence.h                          |  360 +++++
 include/linux/reservation.h                    |   82 ++
 include/linux/seqno-fence.h                    |  116 +++
 include/trace/events/fence.h                   |  128 +++
 39 files changed, 3258 insertions(+), 1451 deletions(-)
 delete mode 100644 drivers/base/dma-buf.c
 delete mode 100644 drivers/base/reservation.c
 create mode 100644 drivers/dma-buf/Makefile
 create mode 100644 drivers/dma-buf/dma-buf.c
 create mode 100644 drivers/dma-buf/fence.c
 create mode 100644 drivers/dma-buf/reservation.c
 create mode 100644 drivers/dma-buf/seqno-fence.c
 create mode 100644 drivers/staging/android/sync_debug.c
 create mode 100644 include/linux/fence.h
 create mode 100644 include/linux/seqno-fence.h
 create mode 100644 include/trace/events/fence.h
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 Documentation/DocBook/device-drivers.tmpl |   3
 MAINTAINERS                               |   4
 drivers/Makefile                          |   1
 drivers/base/Makefile                     |   1
 drivers/base/dma-buf.c                    | 743 -----------------------
 drivers/base/reservation.c                |  39 --
 drivers/dma-buf/Makefile                  |   1
 drivers/dma-buf/dma-buf.c                 | 743 +++++++++++++++++++++++
 drivers/dma-buf/reservation.c             |  39 ++
 9 files changed, 787 insertions(+), 787 deletions(-)
 delete mode 100644 drivers/base/dma-buf.c
 delete mode 100644 drivers/base/reservation.c
 create mode 100644 drivers/dma-buf/Makefile
 create mode 100644 drivers/dma-buf/dma-buf.c
 create mode 100644 drivers/dma-buf/reservation.c
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index cc63f30de166..ac61ebd92875 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -128,8 +128,7 @@ X!Edrivers/base/interface.c
 !Edrivers/base/bus.c
      </sect1>
      <sect1><title>Device Drivers DMA Management</title>
-!Edrivers/base/dma-buf.c
-!Edrivers/base/reservation.c
+!Edrivers/dma-buf/dma-buf.c
 !Iinclude/linux/reservation.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 134483f206e4..c948e53a4ee6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2882,8 +2882,8 @@ S:	Maintained
 L:	linux-media@vger.kernel.org
 L:	dri-devel@lists.freedesktop.org
 L:	linaro-mm-sig@lists.linaro.org
-F:	drivers/base/dma-buf*
-F:	include/linux/dma-buf*
+F:	drivers/dma-buf/
+F:	include/linux/dma-buf* include/linux/reservation.h
 F:	Documentation/dma-buf-sharing.txt
 T:	git git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git
diff --git a/drivers/Makefile b/drivers/Makefile
index f98b50d8251d..c00337be5351 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -61,6 +61,7 @@ obj-$(CONFIG_FB_INTEL)		+= video/fbdev/intelfb/
obj-$(CONFIG_PARPORT) += parport/ obj-y += base/ block/ misc/ mfd/ nfc/ +obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/ obj-$(CONFIG_NUBUS) += nubus/ obj-y += macintosh/ obj-$(CONFIG_IDE) += ide/ diff --git a/drivers/base/Makefile b/drivers/base/Makefile index 04b314e0fa51..4aab26ec0292 100644 --- a/drivers/base/Makefile +++ b/drivers/base/Makefile @@ -10,7 +10,6 @@ obj-$(CONFIG_DMA_CMA) += dma-contiguous.o obj-y += power/ obj-$(CONFIG_HAS_DMA) += dma-mapping.o obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o -obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o obj-$(CONFIG_ISA) += isa.o obj-$(CONFIG_FW_LOADER) += firmware_class.o obj-$(CONFIG_NUMA) += node.o diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c deleted file mode 100644 index 840c7fa80983..000000000000 --- a/drivers/base/dma-buf.c +++ /dev/null @@ -1,743 +0,0 @@ -/* - * Framework for buffer objects that can be shared across devices/subsystems. - * - * Copyright(C) 2011 Linaro Limited. All rights reserved. - * Author: Sumit Semwal sumit.semwal@ti.com - * - * Many thanks to linaro-mm-sig list, and specially - * Arnd Bergmann arnd@arndb.de, Rob Clark rob@ti.com and - * Daniel Vetter daniel@ffwll.ch for their support in creation and - * refining of this idea. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 as published by - * the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * - * You should have received a copy of the GNU General Public License along with - * this program. If not, see http://www.gnu.org/licenses/. 
- */ - -#include <linux/fs.h> -#include <linux/slab.h> -#include <linux/dma-buf.h> -#include <linux/anon_inodes.h> -#include <linux/export.h> -#include <linux/debugfs.h> -#include <linux/seq_file.h> - -static inline int is_dma_buf_file(struct file *); - -struct dma_buf_list { - struct list_head head; - struct mutex lock; -}; - -static struct dma_buf_list db_list; - -static int dma_buf_release(struct inode *inode, struct file *file) -{ - struct dma_buf *dmabuf; - - if (!is_dma_buf_file(file)) - return -EINVAL; - - dmabuf = file->private_data; - - BUG_ON(dmabuf->vmapping_counter); - - dmabuf->ops->release(dmabuf); - - mutex_lock(&db_list.lock); - list_del(&dmabuf->list_node); - mutex_unlock(&db_list.lock); - - kfree(dmabuf); - return 0; -} - -static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma) -{ - struct dma_buf *dmabuf; - - if (!is_dma_buf_file(file)) - return -EINVAL; - - dmabuf = file->private_data; - - /* check for overflowing the buffer's size */ - if (vma->vm_pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) > - dmabuf->size >> PAGE_SHIFT) - return -EINVAL; - - return dmabuf->ops->mmap(dmabuf, vma); -} - -static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence) -{ - struct dma_buf *dmabuf; - loff_t base; - - if (!is_dma_buf_file(file)) - return -EBADF; - - dmabuf = file->private_data; - - /* only support discovering the end of the buffer, - but also allow SEEK_SET to maintain the idiomatic - SEEK_END(0), SEEK_CUR(0) pattern */ - if (whence == SEEK_END) - base = dmabuf->size; - else if (whence == SEEK_SET) - base = 0; - else - return -EINVAL; - - if (offset != 0) - return -EINVAL; - - return base + offset; -} - -static const struct file_operations dma_buf_fops = { - .release = dma_buf_release, - .mmap = dma_buf_mmap_internal, - .llseek = dma_buf_llseek, -}; - -/* - * is_dma_buf_file - Check if struct file* is associated with dma_buf - */ -static inline int is_dma_buf_file(struct file *file) -{ - return file->f_op == &dma_buf_fops; -} - -/** - * dma_buf_export_named - Creates a new dma_buf, and associates an anon file - * with this buffer, so it can be exported. - * Also connect the allocator specific data and ops to the buffer. - * Additionally, provide a name string for exporter; useful in debugging. - * - * @priv: [in] Attach private data of allocator to this buffer - * @ops: [in] Attach allocator-defined dma buf ops to the new buffer. - * @size: [in] Size of the buffer - * @flags: [in] mode flags for the file. - * @exp_name: [in] name of the exporting module - useful for debugging. - * - * Returns, on success, a newly created dma_buf object, which wraps the - * supplied private data and operations for dma_buf_ops. On either missing - * ops, or error in allocating struct dma_buf, will return negative error. 
- * - */ -struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, - size_t size, int flags, const char *exp_name) -{ - struct dma_buf *dmabuf; - struct file *file; - - if (WARN_ON(!priv || !ops - || !ops->map_dma_buf - || !ops->unmap_dma_buf - || !ops->release - || !ops->kmap_atomic - || !ops->kmap - || !ops->mmap)) { - return ERR_PTR(-EINVAL); - } - - dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL); - if (dmabuf == NULL) - return ERR_PTR(-ENOMEM); - - dmabuf->priv = priv; - dmabuf->ops = ops; - dmabuf->size = size; - dmabuf->exp_name = exp_name; - - file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags); - if (IS_ERR(file)) { - kfree(dmabuf); - return ERR_CAST(file); - } - - file->f_mode |= FMODE_LSEEK; - dmabuf->file = file; - - mutex_init(&dmabuf->lock); - INIT_LIST_HEAD(&dmabuf->attachments); - - mutex_lock(&db_list.lock); - list_add(&dmabuf->list_node, &db_list.head); - mutex_unlock(&db_list.lock); - - return dmabuf; -} -EXPORT_SYMBOL_GPL(dma_buf_export_named); - - -/** - * dma_buf_fd - returns a file descriptor for the given dma_buf - * @dmabuf: [in] pointer to dma_buf for which fd is required. - * @flags: [in] flags to give to fd - * - * On success, returns an associated 'fd'. Else, returns error. - */ -int dma_buf_fd(struct dma_buf *dmabuf, int flags) -{ - int fd; - - if (!dmabuf || !dmabuf->file) - return -EINVAL; - - fd = get_unused_fd_flags(flags); - if (fd < 0) - return fd; - - fd_install(fd, dmabuf->file); - - return fd; -} -EXPORT_SYMBOL_GPL(dma_buf_fd); - -/** - * dma_buf_get - returns the dma_buf structure related to an fd - * @fd: [in] fd associated with the dma_buf to be returned - * - * On success, returns the dma_buf structure associated with an fd; uses - * file's refcounting done by fget to increase refcount. returns ERR_PTR - * otherwise. - */ -struct dma_buf *dma_buf_get(int fd) -{ - struct file *file; - - file = fget(fd); - - if (!file) - return ERR_PTR(-EBADF); - - if (!is_dma_buf_file(file)) { - fput(file); - return ERR_PTR(-EINVAL); - } - - return file->private_data; -} -EXPORT_SYMBOL_GPL(dma_buf_get); - -/** - * dma_buf_put - decreases refcount of the buffer - * @dmabuf: [in] buffer to reduce refcount of - * - * Uses file's refcounting done implicitly by fput() - */ -void dma_buf_put(struct dma_buf *dmabuf) -{ - if (WARN_ON(!dmabuf || !dmabuf->file)) - return; - - fput(dmabuf->file); -} -EXPORT_SYMBOL_GPL(dma_buf_put); - -/** - * dma_buf_attach - Add the device to dma_buf's attachments list; optionally, - * calls attach() of dma_buf_ops to allow device-specific attach functionality - * @dmabuf: [in] buffer to attach device to. - * @dev: [in] device to be attached. - * - * Returns struct dma_buf_attachment * for this attachment; returns ERR_PTR on - * error. 
- */ -struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, - struct device *dev) -{ - struct dma_buf_attachment *attach; - int ret; - - if (WARN_ON(!dmabuf || !dev)) - return ERR_PTR(-EINVAL); - - attach = kzalloc(sizeof(struct dma_buf_attachment), GFP_KERNEL); - if (attach == NULL) - return ERR_PTR(-ENOMEM); - - attach->dev = dev; - attach->dmabuf = dmabuf; - - mutex_lock(&dmabuf->lock); - - if (dmabuf->ops->attach) { - ret = dmabuf->ops->attach(dmabuf, dev, attach); - if (ret) - goto err_attach; - } - list_add(&attach->node, &dmabuf->attachments); - - mutex_unlock(&dmabuf->lock); - return attach; - -err_attach: - kfree(attach); - mutex_unlock(&dmabuf->lock); - return ERR_PTR(ret); -} -EXPORT_SYMBOL_GPL(dma_buf_attach); - -/** - * dma_buf_detach - Remove the given attachment from dmabuf's attachments list; - * optionally calls detach() of dma_buf_ops for device-specific detach - * @dmabuf: [in] buffer to detach from. - * @attach: [in] attachment to be detached; is free'd after this call. - * - */ -void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) -{ - if (WARN_ON(!dmabuf || !attach)) - return; - - mutex_lock(&dmabuf->lock); - list_del(&attach->node); - if (dmabuf->ops->detach) - dmabuf->ops->detach(dmabuf, attach); - - mutex_unlock(&dmabuf->lock); - kfree(attach); -} -EXPORT_SYMBOL_GPL(dma_buf_detach); - -/** - * dma_buf_map_attachment - Returns the scatterlist table of the attachment; - * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the - * dma_buf_ops. - * @attach: [in] attachment whose scatterlist is to be returned - * @direction: [in] direction of DMA transfer - * - * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR - * on error. - */ -struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, - enum dma_data_direction direction) -{ - struct sg_table *sg_table = ERR_PTR(-EINVAL); - - might_sleep(); - - if (WARN_ON(!attach || !attach->dmabuf)) - return ERR_PTR(-EINVAL); - - sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction); - if (!sg_table) - sg_table = ERR_PTR(-ENOMEM); - - return sg_table; -} -EXPORT_SYMBOL_GPL(dma_buf_map_attachment); - -/** - * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might - * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of - * dma_buf_ops. - * @attach: [in] attachment to unmap buffer from - * @sg_table: [in] scatterlist info of the buffer to unmap - * @direction: [in] direction of DMA transfer - * - */ -void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, - struct sg_table *sg_table, - enum dma_data_direction direction) -{ - might_sleep(); - - if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) - return; - - attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, - direction); -} -EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment); - - -/** - * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the - * cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific - * preparations. Coherency is only guaranteed in the specified range for the - * specified access direction. - * @dmabuf: [in] buffer to prepare cpu access for. - * @start: [in] start of range for cpu access. - * @len: [in] length of range for cpu access. - * @direction: [in] length of range for cpu access. - * - * Can return negative error values, returns 0 on success. 
- */ -int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len, - enum dma_data_direction direction) -{ - int ret = 0; - - if (WARN_ON(!dmabuf)) - return -EINVAL; - - if (dmabuf->ops->begin_cpu_access) - ret = dmabuf->ops->begin_cpu_access(dmabuf, start, len, direction); - - return ret; -} -EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access); - -/** - * dma_buf_end_cpu_access - Must be called after accessing a dma_buf from the - * cpu in the kernel context. Calls end_cpu_access to allow exporter-specific - * actions. Coherency is only guaranteed in the specified range for the - * specified access direction. - * @dmabuf: [in] buffer to complete cpu access for. - * @start: [in] start of range for cpu access. - * @len: [in] length of range for cpu access. - * @direction: [in] length of range for cpu access. - * - * This call must always succeed. - */ -void dma_buf_end_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len, - enum dma_data_direction direction) -{ - WARN_ON(!dmabuf); - - if (dmabuf->ops->end_cpu_access) - dmabuf->ops->end_cpu_access(dmabuf, start, len, direction); -} -EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access); - -/** - * dma_buf_kmap_atomic - Map a page of the buffer object into kernel address - * space. The same restrictions as for kmap_atomic and friends apply. - * @dmabuf: [in] buffer to map page from. - * @page_num: [in] page in PAGE_SIZE units to map. - * - * This call must always succeed, any necessary preparations that might fail - * need to be done in begin_cpu_access. - */ -void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long page_num) -{ - WARN_ON(!dmabuf); - - return dmabuf->ops->kmap_atomic(dmabuf, page_num); -} -EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic); - -/** - * dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic. - * @dmabuf: [in] buffer to unmap page from. - * @page_num: [in] page in PAGE_SIZE units to unmap. - * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap_atomic. - * - * This call must always succeed. - */ -void dma_buf_kunmap_atomic(struct dma_buf *dmabuf, unsigned long page_num, - void *vaddr) -{ - WARN_ON(!dmabuf); - - if (dmabuf->ops->kunmap_atomic) - dmabuf->ops->kunmap_atomic(dmabuf, page_num, vaddr); -} -EXPORT_SYMBOL_GPL(dma_buf_kunmap_atomic); - -/** - * dma_buf_kmap - Map a page of the buffer object into kernel address space. The - * same restrictions as for kmap and friends apply. - * @dmabuf: [in] buffer to map page from. - * @page_num: [in] page in PAGE_SIZE units to map. - * - * This call must always succeed, any necessary preparations that might fail - * need to be done in begin_cpu_access. - */ -void *dma_buf_kmap(struct dma_buf *dmabuf, unsigned long page_num) -{ - WARN_ON(!dmabuf); - - return dmabuf->ops->kmap(dmabuf, page_num); -} -EXPORT_SYMBOL_GPL(dma_buf_kmap); - -/** - * dma_buf_kunmap - Unmap a page obtained by dma_buf_kmap. - * @dmabuf: [in] buffer to unmap page from. - * @page_num: [in] page in PAGE_SIZE units to unmap. - * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap. - * - * This call must always succeed. 
- */ -void dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long page_num, - void *vaddr) -{ - WARN_ON(!dmabuf); - - if (dmabuf->ops->kunmap) - dmabuf->ops->kunmap(dmabuf, page_num, vaddr); -} -EXPORT_SYMBOL_GPL(dma_buf_kunmap); - - -/** - * dma_buf_mmap - Setup up a userspace mmap with the given vma - * @dmabuf: [in] buffer that should back the vma - * @vma: [in] vma for the mmap - * @pgoff: [in] offset in pages where this mmap should start within the - * dma-buf buffer. - * - * This function adjusts the passed in vma so that it points at the file of the - * dma_buf operation. It also adjusts the starting pgoff and does bounds - * checking on the size of the vma. Then it calls the exporters mmap function to - * set up the mapping. - * - * Can return negative error values, returns 0 on success. - */ -int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, - unsigned long pgoff) -{ - struct file *oldfile; - int ret; - - if (WARN_ON(!dmabuf || !vma)) - return -EINVAL; - - /* check for offset overflow */ - if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) < pgoff) - return -EOVERFLOW; - - /* check for overflowing the buffer's size */ - if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) > - dmabuf->size >> PAGE_SHIFT) - return -EINVAL; - - /* readjust the vma */ - get_file(dmabuf->file); - oldfile = vma->vm_file; - vma->vm_file = dmabuf->file; - vma->vm_pgoff = pgoff; - - ret = dmabuf->ops->mmap(dmabuf, vma); - if (ret) { - /* restore old parameters on failure */ - vma->vm_file = oldfile; - fput(dmabuf->file); - } else { - if (oldfile) - fput(oldfile); - } - return ret; - -} -EXPORT_SYMBOL_GPL(dma_buf_mmap); - -/** - * dma_buf_vmap - Create virtual mapping for the buffer object into kernel - * address space. Same restrictions as for vmap and friends apply. - * @dmabuf: [in] buffer to vmap - * - * This call may fail due to lack of virtual mapping address space. - * These calls are optional in drivers. The intended use for them - * is for mapping objects linear in kernel space for high use objects. - * Please attempt to use kmap/kunmap before thinking about these interfaces. - * - * Returns NULL on error. - */ -void *dma_buf_vmap(struct dma_buf *dmabuf) -{ - void *ptr; - - if (WARN_ON(!dmabuf)) - return NULL; - - if (!dmabuf->ops->vmap) - return NULL; - - mutex_lock(&dmabuf->lock); - if (dmabuf->vmapping_counter) { - dmabuf->vmapping_counter++; - BUG_ON(!dmabuf->vmap_ptr); - ptr = dmabuf->vmap_ptr; - goto out_unlock; - } - - BUG_ON(dmabuf->vmap_ptr); - - ptr = dmabuf->ops->vmap(dmabuf); - if (WARN_ON_ONCE(IS_ERR(ptr))) - ptr = NULL; - if (!ptr) - goto out_unlock; - - dmabuf->vmap_ptr = ptr; - dmabuf->vmapping_counter = 1; - -out_unlock: - mutex_unlock(&dmabuf->lock); - return ptr; -} -EXPORT_SYMBOL_GPL(dma_buf_vmap); - -/** - * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. 
- * @dmabuf: [in] buffer to vunmap - * @vaddr: [in] vmap to vunmap - */ -void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr) -{ - if (WARN_ON(!dmabuf)) - return; - - BUG_ON(!dmabuf->vmap_ptr); - BUG_ON(dmabuf->vmapping_counter == 0); - BUG_ON(dmabuf->vmap_ptr != vaddr); - - mutex_lock(&dmabuf->lock); - if (--dmabuf->vmapping_counter == 0) { - if (dmabuf->ops->vunmap) - dmabuf->ops->vunmap(dmabuf, vaddr); - dmabuf->vmap_ptr = NULL; - } - mutex_unlock(&dmabuf->lock); -} -EXPORT_SYMBOL_GPL(dma_buf_vunmap); - -#ifdef CONFIG_DEBUG_FS -static int dma_buf_describe(struct seq_file *s) -{ - int ret; - struct dma_buf *buf_obj; - struct dma_buf_attachment *attach_obj; - int count = 0, attach_count; - size_t size = 0; - - ret = mutex_lock_interruptible(&db_list.lock); - - if (ret) - return ret; - - seq_puts(s, "\nDma-buf Objects:\n"); - seq_puts(s, "size\tflags\tmode\tcount\texp_name\n"); - - list_for_each_entry(buf_obj, &db_list.head, list_node) { - ret = mutex_lock_interruptible(&buf_obj->lock); - - if (ret) { - seq_puts(s, - "\tERROR locking buffer object: skipping\n"); - continue; - } - - seq_printf(s, "%08zu\t%08x\t%08x\t%08ld\t%s\n", - buf_obj->size, - buf_obj->file->f_flags, buf_obj->file->f_mode, - (long)(buf_obj->file->f_count.counter), - buf_obj->exp_name); - - seq_puts(s, "\tAttached Devices:\n"); - attach_count = 0; - - list_for_each_entry(attach_obj, &buf_obj->attachments, node) { - seq_puts(s, "\t"); - - seq_printf(s, "%s\n", dev_name(attach_obj->dev)); - attach_count++; - } - - seq_printf(s, "Total %d devices attached\n\n", - attach_count); - - count++; - size += buf_obj->size; - mutex_unlock(&buf_obj->lock); - } - - seq_printf(s, "\nTotal %d objects, %zu bytes\n", count, size); - - mutex_unlock(&db_list.lock); - return 0; -} - -static int dma_buf_show(struct seq_file *s, void *unused) -{ - void (*func)(struct seq_file *) = s->private; - func(s); - return 0; -} - -static int dma_buf_debug_open(struct inode *inode, struct file *file) -{ - return single_open(file, dma_buf_show, inode->i_private); -} - -static const struct file_operations dma_buf_debug_fops = { - .open = dma_buf_debug_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; - -static struct dentry *dma_buf_debugfs_dir; - -static int dma_buf_init_debugfs(void) -{ - int err = 0; - dma_buf_debugfs_dir = debugfs_create_dir("dma_buf", NULL); - if (IS_ERR(dma_buf_debugfs_dir)) { - err = PTR_ERR(dma_buf_debugfs_dir); - dma_buf_debugfs_dir = NULL; - return err; - } - - err = dma_buf_debugfs_create_file("bufinfo", dma_buf_describe); - - if (err) - pr_debug("dma_buf: debugfs: failed to create node bufinfo\n"); - - return err; -} - -static void dma_buf_uninit_debugfs(void) -{ - if (dma_buf_debugfs_dir) - debugfs_remove_recursive(dma_buf_debugfs_dir); -} - -int dma_buf_debugfs_create_file(const char *name, - int (*write)(struct seq_file *)) -{ - struct dentry *d; - - d = debugfs_create_file(name, S_IRUGO, dma_buf_debugfs_dir, - write, &dma_buf_debug_fops); - - return PTR_ERR_OR_ZERO(d); -} -#else -static inline int dma_buf_init_debugfs(void) -{ - return 0; -} -static inline void dma_buf_uninit_debugfs(void) -{ -} -#endif - -static int __init dma_buf_init(void) -{ - mutex_init(&db_list.lock); - INIT_LIST_HEAD(&db_list.head); - dma_buf_init_debugfs(); - return 0; -} -subsys_initcall(dma_buf_init); - -static void __exit dma_buf_deinit(void) -{ - dma_buf_uninit_debugfs(); -} -__exitcall(dma_buf_deinit); diff --git a/drivers/base/reservation.c b/drivers/base/reservation.c deleted file mode 100644 index 
a73fbf3b8e56..000000000000 --- a/drivers/base/reservation.c +++ /dev/null @@ -1,39 +0,0 @@ -/* - * Copyright (C) 2012-2013 Canonical Ltd - * - * Based on bo.c which bears the following copyright notice, - * but is dual licensed: - * - * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA - * All Rights Reserved. - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - **************************************************************************/ -/* - * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com> - */ - -#include <linux/reservation.h> -#include <linux/export.h> - -DEFINE_WW_CLASS(reservation_ww_class); -EXPORT_SYMBOL(reservation_ww_class); diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile new file mode 100644 index 000000000000..4a4f4c9bacd0 --- /dev/null +++ b/drivers/dma-buf/Makefile @@ -0,0 +1 @@ +obj-y := dma-buf.o reservation.o diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c new file mode 100644 index 000000000000..840c7fa80983 --- /dev/null +++ b/drivers/dma-buf/dma-buf.c @@ -0,0 +1,743 @@ +/* + * Framework for buffer objects that can be shared across devices/subsystems. + * + * Copyright(C) 2011 Linaro Limited. All rights reserved. + * Author: Sumit Semwal sumit.semwal@ti.com + * + * Many thanks to linaro-mm-sig list, and specially + * Arnd Bergmann arnd@arndb.de, Rob Clark rob@ti.com and + * Daniel Vetter daniel@ffwll.ch for their support in creation and + * refining of this idea. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. 
+ */ + +#include <linux/fs.h> +#include <linux/slab.h> +#include <linux/dma-buf.h> +#include <linux/anon_inodes.h> +#include <linux/export.h> +#include <linux/debugfs.h> +#include <linux/seq_file.h> + +static inline int is_dma_buf_file(struct file *); + +struct dma_buf_list { + struct list_head head; + struct mutex lock; +}; + +static struct dma_buf_list db_list; + +static int dma_buf_release(struct inode *inode, struct file *file) +{ + struct dma_buf *dmabuf; + + if (!is_dma_buf_file(file)) + return -EINVAL; + + dmabuf = file->private_data; + + BUG_ON(dmabuf->vmapping_counter); + + dmabuf->ops->release(dmabuf); + + mutex_lock(&db_list.lock); + list_del(&dmabuf->list_node); + mutex_unlock(&db_list.lock); + + kfree(dmabuf); + return 0; +} + +static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma) +{ + struct dma_buf *dmabuf; + + if (!is_dma_buf_file(file)) + return -EINVAL; + + dmabuf = file->private_data; + + /* check for overflowing the buffer's size */ + if (vma->vm_pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) > + dmabuf->size >> PAGE_SHIFT) + return -EINVAL; + + return dmabuf->ops->mmap(dmabuf, vma); +} + +static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence) +{ + struct dma_buf *dmabuf; + loff_t base; + + if (!is_dma_buf_file(file)) + return -EBADF; + + dmabuf = file->private_data; + + /* only support discovering the end of the buffer, + but also allow SEEK_SET to maintain the idiomatic + SEEK_END(0), SEEK_CUR(0) pattern */ + if (whence == SEEK_END) + base = dmabuf->size; + else if (whence == SEEK_SET) + base = 0; + else + return -EINVAL; + + if (offset != 0) + return -EINVAL; + + return base + offset; +} + +static const struct file_operations dma_buf_fops = { + .release = dma_buf_release, + .mmap = dma_buf_mmap_internal, + .llseek = dma_buf_llseek, +}; + +/* + * is_dma_buf_file - Check if struct file* is associated with dma_buf + */ +static inline int is_dma_buf_file(struct file *file) +{ + return file->f_op == &dma_buf_fops; +} + +/** + * dma_buf_export_named - Creates a new dma_buf, and associates an anon file + * with this buffer, so it can be exported. + * Also connect the allocator specific data and ops to the buffer. + * Additionally, provide a name string for exporter; useful in debugging. + * + * @priv: [in] Attach private data of allocator to this buffer + * @ops: [in] Attach allocator-defined dma buf ops to the new buffer. + * @size: [in] Size of the buffer + * @flags: [in] mode flags for the file. + * @exp_name: [in] name of the exporting module - useful for debugging. + * + * Returns, on success, a newly created dma_buf object, which wraps the + * supplied private data and operations for dma_buf_ops. On either missing + * ops, or error in allocating struct dma_buf, will return negative error. 
+ * + */ +struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, + size_t size, int flags, const char *exp_name) +{ + struct dma_buf *dmabuf; + struct file *file; + + if (WARN_ON(!priv || !ops + || !ops->map_dma_buf + || !ops->unmap_dma_buf + || !ops->release + || !ops->kmap_atomic + || !ops->kmap + || !ops->mmap)) { + return ERR_PTR(-EINVAL); + } + + dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL); + if (dmabuf == NULL) + return ERR_PTR(-ENOMEM); + + dmabuf->priv = priv; + dmabuf->ops = ops; + dmabuf->size = size; + dmabuf->exp_name = exp_name; + + file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags); + if (IS_ERR(file)) { + kfree(dmabuf); + return ERR_CAST(file); + } + + file->f_mode |= FMODE_LSEEK; + dmabuf->file = file; + + mutex_init(&dmabuf->lock); + INIT_LIST_HEAD(&dmabuf->attachments); + + mutex_lock(&db_list.lock); + list_add(&dmabuf->list_node, &db_list.head); + mutex_unlock(&db_list.lock); + + return dmabuf; +} +EXPORT_SYMBOL_GPL(dma_buf_export_named); + + +/** + * dma_buf_fd - returns a file descriptor for the given dma_buf + * @dmabuf: [in] pointer to dma_buf for which fd is required. + * @flags: [in] flags to give to fd + * + * On success, returns an associated 'fd'. Else, returns error. + */ +int dma_buf_fd(struct dma_buf *dmabuf, int flags) +{ + int fd; + + if (!dmabuf || !dmabuf->file) + return -EINVAL; + + fd = get_unused_fd_flags(flags); + if (fd < 0) + return fd; + + fd_install(fd, dmabuf->file); + + return fd; +} +EXPORT_SYMBOL_GPL(dma_buf_fd); + +/** + * dma_buf_get - returns the dma_buf structure related to an fd + * @fd: [in] fd associated with the dma_buf to be returned + * + * On success, returns the dma_buf structure associated with an fd; uses + * file's refcounting done by fget to increase refcount. returns ERR_PTR + * otherwise. + */ +struct dma_buf *dma_buf_get(int fd) +{ + struct file *file; + + file = fget(fd); + + if (!file) + return ERR_PTR(-EBADF); + + if (!is_dma_buf_file(file)) { + fput(file); + return ERR_PTR(-EINVAL); + } + + return file->private_data; +} +EXPORT_SYMBOL_GPL(dma_buf_get); + +/** + * dma_buf_put - decreases refcount of the buffer + * @dmabuf: [in] buffer to reduce refcount of + * + * Uses file's refcounting done implicitly by fput() + */ +void dma_buf_put(struct dma_buf *dmabuf) +{ + if (WARN_ON(!dmabuf || !dmabuf->file)) + return; + + fput(dmabuf->file); +} +EXPORT_SYMBOL_GPL(dma_buf_put); + +/** + * dma_buf_attach - Add the device to dma_buf's attachments list; optionally, + * calls attach() of dma_buf_ops to allow device-specific attach functionality + * @dmabuf: [in] buffer to attach device to. + * @dev: [in] device to be attached. + * + * Returns struct dma_buf_attachment * for this attachment; returns ERR_PTR on + * error. 
+ */ +struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, + struct device *dev) +{ + struct dma_buf_attachment *attach; + int ret; + + if (WARN_ON(!dmabuf || !dev)) + return ERR_PTR(-EINVAL); + + attach = kzalloc(sizeof(struct dma_buf_attachment), GFP_KERNEL); + if (attach == NULL) + return ERR_PTR(-ENOMEM); + + attach->dev = dev; + attach->dmabuf = dmabuf; + + mutex_lock(&dmabuf->lock); + + if (dmabuf->ops->attach) { + ret = dmabuf->ops->attach(dmabuf, dev, attach); + if (ret) + goto err_attach; + } + list_add(&attach->node, &dmabuf->attachments); + + mutex_unlock(&dmabuf->lock); + return attach; + +err_attach: + kfree(attach); + mutex_unlock(&dmabuf->lock); + return ERR_PTR(ret); +} +EXPORT_SYMBOL_GPL(dma_buf_attach); + +/** + * dma_buf_detach - Remove the given attachment from dmabuf's attachments list; + * optionally calls detach() of dma_buf_ops for device-specific detach + * @dmabuf: [in] buffer to detach from. + * @attach: [in] attachment to be detached; is free'd after this call. + * + */ +void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) +{ + if (WARN_ON(!dmabuf || !attach)) + return; + + mutex_lock(&dmabuf->lock); + list_del(&attach->node); + if (dmabuf->ops->detach) + dmabuf->ops->detach(dmabuf, attach); + + mutex_unlock(&dmabuf->lock); + kfree(attach); +} +EXPORT_SYMBOL_GPL(dma_buf_detach); + +/** + * dma_buf_map_attachment - Returns the scatterlist table of the attachment; + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the + * dma_buf_ops. + * @attach: [in] attachment whose scatterlist is to be returned + * @direction: [in] direction of DMA transfer + * + * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR + * on error. + */ +struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, + enum dma_data_direction direction) +{ + struct sg_table *sg_table = ERR_PTR(-EINVAL); + + might_sleep(); + + if (WARN_ON(!attach || !attach->dmabuf)) + return ERR_PTR(-EINVAL); + + sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction); + if (!sg_table) + sg_table = ERR_PTR(-ENOMEM); + + return sg_table; +} +EXPORT_SYMBOL_GPL(dma_buf_map_attachment); + +/** + * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might + * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of + * dma_buf_ops. + * @attach: [in] attachment to unmap buffer from + * @sg_table: [in] scatterlist info of the buffer to unmap + * @direction: [in] direction of DMA transfer + * + */ +void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction) +{ + might_sleep(); + + if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) + return; + + attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, + direction); +} +EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment); + + +/** + * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the + * cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific + * preparations. Coherency is only guaranteed in the specified range for the + * specified access direction. + * @dmabuf: [in] buffer to prepare cpu access for. + * @start: [in] start of range for cpu access. + * @len: [in] length of range for cpu access. + * @direction: [in] length of range for cpu access. + * + * Can return negative error values, returns 0 on success. 
+ */ +int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len, + enum dma_data_direction direction) +{ + int ret = 0; + + if (WARN_ON(!dmabuf)) + return -EINVAL; + + if (dmabuf->ops->begin_cpu_access) + ret = dmabuf->ops->begin_cpu_access(dmabuf, start, len, direction); + + return ret; +} +EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access); + +/** + * dma_buf_end_cpu_access - Must be called after accessing a dma_buf from the + * cpu in the kernel context. Calls end_cpu_access to allow exporter-specific + * actions. Coherency is only guaranteed in the specified range for the + * specified access direction. + * @dmabuf: [in] buffer to complete cpu access for. + * @start: [in] start of range for cpu access. + * @len: [in] length of range for cpu access. + * @direction: [in] length of range for cpu access. + * + * This call must always succeed. + */ +void dma_buf_end_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len, + enum dma_data_direction direction) +{ + WARN_ON(!dmabuf); + + if (dmabuf->ops->end_cpu_access) + dmabuf->ops->end_cpu_access(dmabuf, start, len, direction); +} +EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access); + +/** + * dma_buf_kmap_atomic - Map a page of the buffer object into kernel address + * space. The same restrictions as for kmap_atomic and friends apply. + * @dmabuf: [in] buffer to map page from. + * @page_num: [in] page in PAGE_SIZE units to map. + * + * This call must always succeed, any necessary preparations that might fail + * need to be done in begin_cpu_access. + */ +void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long page_num) +{ + WARN_ON(!dmabuf); + + return dmabuf->ops->kmap_atomic(dmabuf, page_num); +} +EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic); + +/** + * dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic. + * @dmabuf: [in] buffer to unmap page from. + * @page_num: [in] page in PAGE_SIZE units to unmap. + * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap_atomic. + * + * This call must always succeed. + */ +void dma_buf_kunmap_atomic(struct dma_buf *dmabuf, unsigned long page_num, + void *vaddr) +{ + WARN_ON(!dmabuf); + + if (dmabuf->ops->kunmap_atomic) + dmabuf->ops->kunmap_atomic(dmabuf, page_num, vaddr); +} +EXPORT_SYMBOL_GPL(dma_buf_kunmap_atomic); + +/** + * dma_buf_kmap - Map a page of the buffer object into kernel address space. The + * same restrictions as for kmap and friends apply. + * @dmabuf: [in] buffer to map page from. + * @page_num: [in] page in PAGE_SIZE units to map. + * + * This call must always succeed, any necessary preparations that might fail + * need to be done in begin_cpu_access. + */ +void *dma_buf_kmap(struct dma_buf *dmabuf, unsigned long page_num) +{ + WARN_ON(!dmabuf); + + return dmabuf->ops->kmap(dmabuf, page_num); +} +EXPORT_SYMBOL_GPL(dma_buf_kmap); + +/** + * dma_buf_kunmap - Unmap a page obtained by dma_buf_kmap. + * @dmabuf: [in] buffer to unmap page from. + * @page_num: [in] page in PAGE_SIZE units to unmap. + * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap. + * + * This call must always succeed. 
+ */ +void dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long page_num, + void *vaddr) +{ + WARN_ON(!dmabuf); + + if (dmabuf->ops->kunmap) + dmabuf->ops->kunmap(dmabuf, page_num, vaddr); +} +EXPORT_SYMBOL_GPL(dma_buf_kunmap); + + +/** + * dma_buf_mmap - Setup up a userspace mmap with the given vma + * @dmabuf: [in] buffer that should back the vma + * @vma: [in] vma for the mmap + * @pgoff: [in] offset in pages where this mmap should start within the + * dma-buf buffer. + * + * This function adjusts the passed in vma so that it points at the file of the + * dma_buf operation. It also adjusts the starting pgoff and does bounds + * checking on the size of the vma. Then it calls the exporters mmap function to + * set up the mapping. + * + * Can return negative error values, returns 0 on success. + */ +int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, + unsigned long pgoff) +{ + struct file *oldfile; + int ret; + + if (WARN_ON(!dmabuf || !vma)) + return -EINVAL; + + /* check for offset overflow */ + if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) < pgoff) + return -EOVERFLOW; + + /* check for overflowing the buffer's size */ + if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) > + dmabuf->size >> PAGE_SHIFT) + return -EINVAL; + + /* readjust the vma */ + get_file(dmabuf->file); + oldfile = vma->vm_file; + vma->vm_file = dmabuf->file; + vma->vm_pgoff = pgoff; + + ret = dmabuf->ops->mmap(dmabuf, vma); + if (ret) { + /* restore old parameters on failure */ + vma->vm_file = oldfile; + fput(dmabuf->file); + } else { + if (oldfile) + fput(oldfile); + } + return ret; + +} +EXPORT_SYMBOL_GPL(dma_buf_mmap); + +/** + * dma_buf_vmap - Create virtual mapping for the buffer object into kernel + * address space. Same restrictions as for vmap and friends apply. + * @dmabuf: [in] buffer to vmap + * + * This call may fail due to lack of virtual mapping address space. + * These calls are optional in drivers. The intended use for them + * is for mapping objects linear in kernel space for high use objects. + * Please attempt to use kmap/kunmap before thinking about these interfaces. + * + * Returns NULL on error. + */ +void *dma_buf_vmap(struct dma_buf *dmabuf) +{ + void *ptr; + + if (WARN_ON(!dmabuf)) + return NULL; + + if (!dmabuf->ops->vmap) + return NULL; + + mutex_lock(&dmabuf->lock); + if (dmabuf->vmapping_counter) { + dmabuf->vmapping_counter++; + BUG_ON(!dmabuf->vmap_ptr); + ptr = dmabuf->vmap_ptr; + goto out_unlock; + } + + BUG_ON(dmabuf->vmap_ptr); + + ptr = dmabuf->ops->vmap(dmabuf); + if (WARN_ON_ONCE(IS_ERR(ptr))) + ptr = NULL; + if (!ptr) + goto out_unlock; + + dmabuf->vmap_ptr = ptr; + dmabuf->vmapping_counter = 1; + +out_unlock: + mutex_unlock(&dmabuf->lock); + return ptr; +} +EXPORT_SYMBOL_GPL(dma_buf_vmap); + +/** + * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. 
+ * @dmabuf: [in] buffer to vunmap + * @vaddr: [in] vmap to vunmap + */ +void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr) +{ + if (WARN_ON(!dmabuf)) + return; + + BUG_ON(!dmabuf->vmap_ptr); + BUG_ON(dmabuf->vmapping_counter == 0); + BUG_ON(dmabuf->vmap_ptr != vaddr); + + mutex_lock(&dmabuf->lock); + if (--dmabuf->vmapping_counter == 0) { + if (dmabuf->ops->vunmap) + dmabuf->ops->vunmap(dmabuf, vaddr); + dmabuf->vmap_ptr = NULL; + } + mutex_unlock(&dmabuf->lock); +} +EXPORT_SYMBOL_GPL(dma_buf_vunmap); + +#ifdef CONFIG_DEBUG_FS +static int dma_buf_describe(struct seq_file *s) +{ + int ret; + struct dma_buf *buf_obj; + struct dma_buf_attachment *attach_obj; + int count = 0, attach_count; + size_t size = 0; + + ret = mutex_lock_interruptible(&db_list.lock); + + if (ret) + return ret; + + seq_puts(s, "\nDma-buf Objects:\n"); + seq_puts(s, "size\tflags\tmode\tcount\texp_name\n"); + + list_for_each_entry(buf_obj, &db_list.head, list_node) { + ret = mutex_lock_interruptible(&buf_obj->lock); + + if (ret) { + seq_puts(s, + "\tERROR locking buffer object: skipping\n"); + continue; + } + + seq_printf(s, "%08zu\t%08x\t%08x\t%08ld\t%s\n", + buf_obj->size, + buf_obj->file->f_flags, buf_obj->file->f_mode, + (long)(buf_obj->file->f_count.counter), + buf_obj->exp_name); + + seq_puts(s, "\tAttached Devices:\n"); + attach_count = 0; + + list_for_each_entry(attach_obj, &buf_obj->attachments, node) { + seq_puts(s, "\t"); + + seq_printf(s, "%s\n", dev_name(attach_obj->dev)); + attach_count++; + } + + seq_printf(s, "Total %d devices attached\n\n", + attach_count); + + count++; + size += buf_obj->size; + mutex_unlock(&buf_obj->lock); + } + + seq_printf(s, "\nTotal %d objects, %zu bytes\n", count, size); + + mutex_unlock(&db_list.lock); + return 0; +} + +static int dma_buf_show(struct seq_file *s, void *unused) +{ + void (*func)(struct seq_file *) = s->private; + func(s); + return 0; +} + +static int dma_buf_debug_open(struct inode *inode, struct file *file) +{ + return single_open(file, dma_buf_show, inode->i_private); +} + +static const struct file_operations dma_buf_debug_fops = { + .open = dma_buf_debug_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + +static struct dentry *dma_buf_debugfs_dir; + +static int dma_buf_init_debugfs(void) +{ + int err = 0; + dma_buf_debugfs_dir = debugfs_create_dir("dma_buf", NULL); + if (IS_ERR(dma_buf_debugfs_dir)) { + err = PTR_ERR(dma_buf_debugfs_dir); + dma_buf_debugfs_dir = NULL; + return err; + } + + err = dma_buf_debugfs_create_file("bufinfo", dma_buf_describe); + + if (err) + pr_debug("dma_buf: debugfs: failed to create node bufinfo\n"); + + return err; +} + +static void dma_buf_uninit_debugfs(void) +{ + if (dma_buf_debugfs_dir) + debugfs_remove_recursive(dma_buf_debugfs_dir); +} + +int dma_buf_debugfs_create_file(const char *name, + int (*write)(struct seq_file *)) +{ + struct dentry *d; + + d = debugfs_create_file(name, S_IRUGO, dma_buf_debugfs_dir, + write, &dma_buf_debug_fops); + + return PTR_ERR_OR_ZERO(d); +} +#else +static inline int dma_buf_init_debugfs(void) +{ + return 0; +} +static inline void dma_buf_uninit_debugfs(void) +{ +} +#endif + +static int __init dma_buf_init(void) +{ + mutex_init(&db_list.lock); + INIT_LIST_HEAD(&db_list.head); + dma_buf_init_debugfs(); + return 0; +} +subsys_initcall(dma_buf_init); + +static void __exit dma_buf_deinit(void) +{ + dma_buf_uninit_debugfs(); +} +__exitcall(dma_buf_deinit); diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c new file mode 100644 index 
000000000000..a73fbf3b8e56 --- /dev/null +++ b/drivers/dma-buf/reservation.c @@ -0,0 +1,39 @@ +/* + * Copyright (C) 2012-2013 Canonical Ltd + * + * Based on bo.c which bears the following copyright notice, + * but is dual licensed: + * + * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + **************************************************************************/ +/* + * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com> + */ + +#include <linux/reservation.h> +#include <linux/export.h> + +DEFINE_WW_CLASS(reservation_ww_class); +EXPORT_SYMBOL(reservation_ww_class);
On 01-07-14 12:57, Maarten Lankhorst wrote:
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
It would help to use the '-M' option with format-patch for this kind of rework.
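For example (a sketch; substitute the actual base revision of the series):

	git format-patch -M origin/master..

With -M a pure move comes out as a compact rename header,

	similarity index 100%
	rename from drivers/base/dma-buf.c
	rename to drivers/dma-buf/dma-buf.c

instead of the full delete/create hunks.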
Regards,
Arend
Documentation/DocBook/device-drivers.tmpl | 3 MAINTAINERS | 4 drivers/Makefile | 1 drivers/base/Makefile | 1 drivers/base/dma-buf.c | 743 ----------------------------- drivers/base/reservation.c | 39 -- drivers/dma-buf/Makefile | 1 drivers/dma-buf/dma-buf.c | 743 +++++++++++++++++++++++++++++ drivers/dma-buf/reservation.c | 39 ++ 9 files changed, 787 insertions(+), 787 deletions(-) delete mode 100644 drivers/base/dma-buf.c delete mode 100644 drivers/base/reservation.c create mode 100644 drivers/dma-buf/Makefile create mode 100644 drivers/dma-buf/dma-buf.c create mode 100644 drivers/dma-buf/reservation.c
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl index cc63f30de166..ac61ebd92875 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -128,8 +128,7 @@ X!Edrivers/base/interface.c !Edrivers/base/bus.c </sect1> <sect1><title>Device Drivers DMA Management</title> -!Edrivers/base/dma-buf.c -!Edrivers/base/reservation.c +!Edrivers/dma-buf/dma-buf.c !Iinclude/linux/reservation.h !Edrivers/base/dma-coherent.c !Edrivers/base/dma-mapping.c diff --git a/MAINTAINERS b/MAINTAINERS index 134483f206e4..c948e53a4ee6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2882,8 +2882,8 @@ S: Maintained L: linux-media@vger.kernel.org L: dri-devel@lists.freedesktop.org L: linaro-mm-sig@lists.linaro.org -F: drivers/base/dma-buf* -F: include/linux/dma-buf* +F: drivers/dma-buf/ +F: include/linux/dma-buf* include/linux/reservation.h F: Documentation/dma-buf-sharing.txt T: git git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git diff --git a/drivers/Makefile b/drivers/Makefile index f98b50d8251d..c00337be5351 100644 --- a/drivers/Makefile +++ b/drivers/Makefile @@ -61,6 +61,7 @@ obj-$(CONFIG_FB_INTEL) += video/fbdev/intelfb/ obj-$(CONFIG_PARPORT) += parport/ obj-y += base/ block/ misc/ mfd/ nfc/ +obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/ obj-$(CONFIG_NUBUS) += nubus/ obj-y += macintosh/ obj-$(CONFIG_IDE) += ide/ diff --git a/drivers/base/Makefile b/drivers/base/Makefile index 04b314e0fa51..4aab26ec0292 100644 --- a/drivers/base/Makefile +++ b/drivers/base/Makefile @@ -10,7 +10,6 @@ obj-$(CONFIG_DMA_CMA) += dma-contiguous.o obj-y += power/ obj-$(CONFIG_HAS_DMA) += dma-mapping.o obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o -obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o obj-$(CONFIG_ISA) += isa.o obj-$(CONFIG_FW_LOADER) += firmware_class.o obj-$(CONFIG_NUMA) += node.o diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c deleted file mode 100644 index 840c7fa80983..000000000000 --- a/drivers/base/dma-buf.c +++ /dev/null @@ -1,743 +0,0 @@ -/*
- Framework for buffer objects that can be shared across devices/subsystems.
- Copyright(C) 2011 Linaro Limited. All rights reserved.
- Author: Sumit Semwal sumit.semwal@ti.com
- Many thanks to linaro-mm-sig list, and specially
- Arnd Bergmann arnd@arndb.de, Rob Clark rob@ti.com and
- Daniel Vetter daniel@ffwll.ch for their support in creation and
- refining of this idea.
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
-#include <linux/fs.h> -#include <linux/slab.h> -#include <linux/dma-buf.h> -#include <linux/anon_inodes.h> -#include <linux/export.h> -#include <linux/debugfs.h> -#include <linux/seq_file.h>
-static inline int is_dma_buf_file(struct file *);
-struct dma_buf_list {
- struct list_head head;
- struct mutex lock;
-};
-static struct dma_buf_list db_list;
-static int dma_buf_release(struct inode *inode, struct file *file) -{
- struct dma_buf *dmabuf;
- if (!is_dma_buf_file(file))
return -EINVAL;
- dmabuf = file->private_data;
- BUG_ON(dmabuf->vmapping_counter);
- dmabuf->ops->release(dmabuf);
- mutex_lock(&db_list.lock);
- list_del(&dmabuf->list_node);
- mutex_unlock(&db_list.lock);
- kfree(dmabuf);
- return 0;
-}
-static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma) -{
- struct dma_buf *dmabuf;
- if (!is_dma_buf_file(file))
return -EINVAL;
- dmabuf = file->private_data;
- /* check for overflowing the buffer's size */
- if (vma->vm_pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) >
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
- return dmabuf->ops->mmap(dmabuf, vma);
-}
-static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence) -{
- struct dma_buf *dmabuf;
- loff_t base;
- if (!is_dma_buf_file(file))
return -EBADF;
- dmabuf = file->private_data;
- /* only support discovering the end of the buffer,
but also allow SEEK_SET to maintain the idiomatic
SEEK_END(0), SEEK_CUR(0) pattern */
- if (whence == SEEK_END)
base = dmabuf->size;
- else if (whence == SEEK_SET)
base = 0;
- else
return -EINVAL;
- if (offset != 0)
return -EINVAL;
- return base + offset;
-}
-static const struct file_operations dma_buf_fops = {
- .release = dma_buf_release,
- .mmap = dma_buf_mmap_internal,
- .llseek = dma_buf_llseek,
-};
-/*
- is_dma_buf_file - Check if struct file* is associated with dma_buf
- */
-static inline int is_dma_buf_file(struct file *file) -{
- return file->f_op == &dma_buf_fops;
-}
-/**
- dma_buf_export_named - Creates a new dma_buf, and associates an anon file
- with this buffer, so it can be exported.
- Also connect the allocator specific data and ops to the buffer.
- Additionally, provide a name string for exporter; useful in debugging.
- @priv: [in] Attach private data of allocator to this buffer
- @ops: [in] Attach allocator-defined dma buf ops to the new buffer.
- @size: [in] Size of the buffer
- @flags: [in] mode flags for the file.
- @exp_name: [in] name of the exporting module - useful for debugging.
- Returns, on success, a newly created dma_buf object, which wraps the
- supplied private data and operations for dma_buf_ops. On either missing
- ops, or error in allocating struct dma_buf, will return negative error.
- */
-struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops,
size_t size, int flags, const char *exp_name)
-{
- struct dma_buf *dmabuf;
- struct file *file;
- if (WARN_ON(!priv || !ops
|| !ops->map_dma_buf
|| !ops->unmap_dma_buf
|| !ops->release
|| !ops->kmap_atomic
|| !ops->kmap
|| !ops->mmap)) {
return ERR_PTR(-EINVAL);
- }
- dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL);
- if (dmabuf == NULL)
return ERR_PTR(-ENOMEM);
- dmabuf->priv = priv;
- dmabuf->ops = ops;
- dmabuf->size = size;
- dmabuf->exp_name = exp_name;
- file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags);
- if (IS_ERR(file)) {
kfree(dmabuf);
return ERR_CAST(file);
- }
- file->f_mode |= FMODE_LSEEK;
- dmabuf->file = file;
- mutex_init(&dmabuf->lock);
- INIT_LIST_HEAD(&dmabuf->attachments);
- mutex_lock(&db_list.lock);
- list_add(&dmabuf->list_node, &db_list.head);
- mutex_unlock(&db_list.lock);
- return dmabuf;
-} -EXPORT_SYMBOL_GPL(dma_buf_export_named);
-/**
- dma_buf_fd - returns a file descriptor for the given dma_buf
- @dmabuf: [in] pointer to dma_buf for which fd is required.
- @flags: [in] flags to give to fd
- On success, returns an associated 'fd'. Else, returns error.
- */
-int dma_buf_fd(struct dma_buf *dmabuf, int flags)
-{
- int fd;
- if (!dmabuf || !dmabuf->file)
return -EINVAL;
- fd = get_unused_fd_flags(flags);
- if (fd < 0)
return fd;
- fd_install(fd, dmabuf->file);
- return fd;
-}
-EXPORT_SYMBOL_GPL(dma_buf_fd);
-/**
- dma_buf_get - returns the dma_buf structure related to an fd
- @fd: [in] fd associated with the dma_buf to be returned
- On success, returns the dma_buf structure associated with an fd; uses
- file's refcounting done by fget to increase refcount. returns ERR_PTR
- otherwise.
- */
-struct dma_buf *dma_buf_get(int fd)
-{
- struct file *file;
- file = fget(fd);
- if (!file)
return ERR_PTR(-EBADF);
- if (!is_dma_buf_file(file)) {
fput(file);
return ERR_PTR(-EINVAL);
- }
- return file->private_data;
-}
-EXPORT_SYMBOL_GPL(dma_buf_get);
-/**
- dma_buf_put - decreases refcount of the buffer
- @dmabuf: [in] buffer to reduce refcount of
- Uses file's refcounting done implicitly by fput()
- */
-void dma_buf_put(struct dma_buf *dmabuf)
-{
- if (WARN_ON(!dmabuf || !dmabuf->file))
return;
- fput(dmabuf->file);
-}
-EXPORT_SYMBOL_GPL(dma_buf_put);
-/**
- dma_buf_attach - Add the device to dma_buf's attachments list; optionally,
- calls attach() of dma_buf_ops to allow device-specific attach functionality
- @dmabuf: [in] buffer to attach device to.
- @dev: [in] device to be attached.
- Returns struct dma_buf_attachment * for this attachment; returns ERR_PTR on
- error.
- */
-struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
struct device *dev)
-{
- struct dma_buf_attachment *attach;
- int ret;
- if (WARN_ON(!dmabuf || !dev))
return ERR_PTR(-EINVAL);
- attach = kzalloc(sizeof(struct dma_buf_attachment), GFP_KERNEL);
- if (attach == NULL)
return ERR_PTR(-ENOMEM);
- attach->dev = dev;
- attach->dmabuf = dmabuf;
- mutex_lock(&dmabuf->lock);
- if (dmabuf->ops->attach) {
ret = dmabuf->ops->attach(dmabuf, dev, attach);
if (ret)
goto err_attach;
- }
- list_add(&attach->node, &dmabuf->attachments);
- mutex_unlock(&dmabuf->lock);
- return attach;
-err_attach:
- kfree(attach);
- mutex_unlock(&dmabuf->lock);
- return ERR_PTR(ret);
-}
-EXPORT_SYMBOL_GPL(dma_buf_attach);
-/**
- dma_buf_detach - Remove the given attachment from dmabuf's attachments list;
- optionally calls detach() of dma_buf_ops for device-specific detach
- @dmabuf: [in] buffer to detach from.
- @attach: [in] attachment to be detached; is free'd after this call.
- */
-void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
-{
- if (WARN_ON(!dmabuf || !attach))
return;
- mutex_lock(&dmabuf->lock);
- list_del(&attach->node);
- if (dmabuf->ops->detach)
dmabuf->ops->detach(dmabuf, attach);
- mutex_unlock(&dmabuf->lock);
- kfree(attach);
-}
-EXPORT_SYMBOL_GPL(dma_buf_detach);
-/**
- dma_buf_map_attachment - Returns the scatterlist table of the attachment;
- mapped into _device_ address space. Is a wrapper for map_dma_buf() of the
- dma_buf_ops.
- @attach: [in] attachment whose scatterlist is to be returned
- @direction: [in] direction of DMA transfer
- Returns sg_table containing the scatterlist to be returned; returns ERR_PTR
- on error.
- */
-struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
enum dma_data_direction direction)
-{
- struct sg_table *sg_table = ERR_PTR(-EINVAL);
- might_sleep();
- if (WARN_ON(!attach || !attach->dmabuf))
return ERR_PTR(-EINVAL);
- sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
- if (!sg_table)
sg_table = ERR_PTR(-ENOMEM);
- return sg_table;
-}
-EXPORT_SYMBOL_GPL(dma_buf_map_attachment);
-/**
- dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer; might
- deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of
- dma_buf_ops.
- @attach: [in] attachment to unmap buffer from
- @sg_table: [in] scatterlist info of the buffer to unmap
- @direction: [in] direction of DMA transfer
- */
-void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
struct sg_table *sg_table,
enum dma_data_direction direction)
-{
- might_sleep();
- if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
return;
- attach->dmabuf->ops->unmap_dma_buf(attach, sg_table,
direction);
-}
-EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
-/**
- dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
- cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific
- preparations. Coherency is only guaranteed in the specified range for the
- specified access direction.
- @dmabuf: [in] buffer to prepare cpu access for.
- @start: [in] start of range for cpu access.
- @len: [in] length of range for cpu access.
- @direction: [in] direction of DMA transfer for cpu access.
- Can return negative error values, returns 0 on success.
- */
-int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
enum dma_data_direction direction)
-{
- int ret = 0;
- if (WARN_ON(!dmabuf))
return -EINVAL;
- if (dmabuf->ops->begin_cpu_access)
ret = dmabuf->ops->begin_cpu_access(dmabuf, start, len, direction);
- return ret;
-}
-EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
-/**
- dma_buf_end_cpu_access - Must be called after accessing a dma_buf from the
- cpu in the kernel context. Calls end_cpu_access to allow exporter-specific
- actions. Coherency is only guaranteed in the specified range for the
- specified access direction.
- @dmabuf: [in] buffer to complete cpu access for.
- @start: [in] start of range for cpu access.
- @len: [in] length of range for cpu access.
- @direction: [in] direction of DMA transfer for cpu access.
- This call must always succeed.
- */
-void dma_buf_end_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
enum dma_data_direction direction)
-{
- WARN_ON(!dmabuf);
- if (dmabuf->ops->end_cpu_access)
dmabuf->ops->end_cpu_access(dmabuf, start, len, direction);
-}
-EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
-/**
- dma_buf_kmap_atomic - Map a page of the buffer object into kernel address
- space. The same restrictions as for kmap_atomic and friends apply.
- @dmabuf: [in] buffer to map page from.
- @page_num: [in] page in PAGE_SIZE units to map.
- This call must always succeed, any necessary preparations that might fail
- need to be done in begin_cpu_access.
- */
-void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long page_num)
-{
- WARN_ON(!dmabuf);
- return dmabuf->ops->kmap_atomic(dmabuf, page_num);
-}
-EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic);
-/**
- dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic.
- @dmabuf: [in] buffer to unmap page from.
- @page_num: [in] page in PAGE_SIZE units to unmap.
- @vaddr: [in] kernel space pointer obtained from dma_buf_kmap_atomic.
- This call must always succeed.
- */
-void dma_buf_kunmap_atomic(struct dma_buf *dmabuf, unsigned long page_num,
void *vaddr)
-{
- WARN_ON(!dmabuf);
- if (dmabuf->ops->kunmap_atomic)
dmabuf->ops->kunmap_atomic(dmabuf, page_num, vaddr);
-}
-EXPORT_SYMBOL_GPL(dma_buf_kunmap_atomic);
-/**
- dma_buf_kmap - Map a page of the buffer object into kernel address space. The
- same restrictions as for kmap and friends apply.
- @dmabuf: [in] buffer to map page from.
- @page_num: [in] page in PAGE_SIZE units to map.
- This call must always succeed, any necessary preparations that might fail
- need to be done in begin_cpu_access.
- */
-void *dma_buf_kmap(struct dma_buf *dmabuf, unsigned long page_num)
-{
- WARN_ON(!dmabuf);
- return dmabuf->ops->kmap(dmabuf, page_num);
-}
-EXPORT_SYMBOL_GPL(dma_buf_kmap);
-/**
- dma_buf_kunmap - Unmap a page obtained by dma_buf_kmap.
- @dmabuf: [in] buffer to unmap page from.
- @page_num: [in] page in PAGE_SIZE units to unmap.
- @vaddr: [in] kernel space pointer obtained from dma_buf_kmap.
- This call must always succeed.
- */
-void dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long page_num,
void *vaddr)
-{
- WARN_ON(!dmabuf);
- if (dmabuf->ops->kunmap)
dmabuf->ops->kunmap(dmabuf, page_num, vaddr);
-}
-EXPORT_SYMBOL_GPL(dma_buf_kunmap);
-/**
- dma_buf_mmap - Set up a userspace mmap with the given vma
- @dmabuf: [in] buffer that should back the vma
- @vma: [in] vma for the mmap
- @pgoff: [in] offset in pages where this mmap should start within the
dma-buf buffer.
- This function adjusts the passed in vma so that it points at the file of the
- dma_buf operation. It also adjusts the starting pgoff and does bounds
- checking on the size of the vma. Then it calls the exporter's mmap function to
- set up the mapping.
- Can return negative error values, returns 0 on success.
- */
-int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
-{
- struct file *oldfile;
- int ret;
- if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
- /* check for offset overflow */
- if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) < pgoff)
return -EOVERFLOW;
- /* check for overflowing the buffer's size */
- if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) >
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
- /* readjust the vma */
- get_file(dmabuf->file);
- oldfile = vma->vm_file;
- vma->vm_file = dmabuf->file;
- vma->vm_pgoff = pgoff;
- ret = dmabuf->ops->mmap(dmabuf, vma);
- if (ret) {
/* restore old parameters on failure */
vma->vm_file = oldfile;
fput(dmabuf->file);
- } else {
if (oldfile)
fput(oldfile);
- }
- return ret;
-}
-EXPORT_SYMBOL_GPL(dma_buf_mmap);
-/**
- dma_buf_vmap - Create virtual mapping for the buffer object into kernel
- address space. Same restrictions as for vmap and friends apply.
- @dmabuf: [in] buffer to vmap
- This call may fail due to lack of virtual mapping address space.
- These calls are optional in drivers. The intended use for them
- is for mapping objects linear in kernel space for high use objects.
- Please attempt to use kmap/kunmap before thinking about these interfaces.
- Returns NULL on error.
- */
-void *dma_buf_vmap(struct dma_buf *dmabuf)
-{
- void *ptr;
- if (WARN_ON(!dmabuf))
return NULL;
- if (!dmabuf->ops->vmap)
return NULL;
- mutex_lock(&dmabuf->lock);
- if (dmabuf->vmapping_counter) {
dmabuf->vmapping_counter++;
BUG_ON(!dmabuf->vmap_ptr);
ptr = dmabuf->vmap_ptr;
goto out_unlock;
- }
- BUG_ON(dmabuf->vmap_ptr);
- ptr = dmabuf->ops->vmap(dmabuf);
- if (WARN_ON_ONCE(IS_ERR(ptr)))
ptr = NULL;
- if (!ptr)
goto out_unlock;
- dmabuf->vmap_ptr = ptr;
- dmabuf->vmapping_counter = 1;
-out_unlock:
- mutex_unlock(&dmabuf->lock);
- return ptr;
-}
-EXPORT_SYMBOL_GPL(dma_buf_vmap);
-/**
- dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap.
- @dmabuf: [in] buffer to vunmap
- @vaddr: [in] vmap to vunmap
- */
-void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
-{
- if (WARN_ON(!dmabuf))
return;
- BUG_ON(!dmabuf->vmap_ptr);
- BUG_ON(dmabuf->vmapping_counter == 0);
- BUG_ON(dmabuf->vmap_ptr != vaddr);
- mutex_lock(&dmabuf->lock);
- if (--dmabuf->vmapping_counter == 0) {
if (dmabuf->ops->vunmap)
dmabuf->ops->vunmap(dmabuf, vaddr);
dmabuf->vmap_ptr = NULL;
- }
- mutex_unlock(&dmabuf->lock);
-}
-EXPORT_SYMBOL_GPL(dma_buf_vunmap);
-#ifdef CONFIG_DEBUG_FS
-static int dma_buf_describe(struct seq_file *s)
-{
- int ret;
- struct dma_buf *buf_obj;
- struct dma_buf_attachment *attach_obj;
- int count = 0, attach_count;
- size_t size = 0;
- ret = mutex_lock_interruptible(&db_list.lock);
- if (ret)
return ret;
- seq_puts(s, "\nDma-buf Objects:\n");
- seq_puts(s, "size\tflags\tmode\tcount\texp_name\n");
- list_for_each_entry(buf_obj, &db_list.head, list_node) {
ret = mutex_lock_interruptible(&buf_obj->lock);
if (ret) {
seq_puts(s,
"\tERROR locking buffer object: skipping\n");
continue;
}
seq_printf(s, "%08zu\t%08x\t%08x\t%08ld\t%s\n",
buf_obj->size,
buf_obj->file->f_flags, buf_obj->file->f_mode,
(long)(buf_obj->file->f_count.counter),
buf_obj->exp_name);
seq_puts(s, "\tAttached Devices:\n");
attach_count = 0;
list_for_each_entry(attach_obj, &buf_obj->attachments, node) {
seq_puts(s, "\t");
seq_printf(s, "%s\n", dev_name(attach_obj->dev));
attach_count++;
}
seq_printf(s, "Total %d devices attached\n\n",
attach_count);
count++;
size += buf_obj->size;
mutex_unlock(&buf_obj->lock);
- }
- seq_printf(s, "\nTotal %d objects, %zu bytes\n", count, size);
- mutex_unlock(&db_list.lock);
- return 0;
-}
-static int dma_buf_show(struct seq_file *s, void *unused)
-{
- void (*func)(struct seq_file *) = s->private;
- func(s);
- return 0;
-}
-static int dma_buf_debug_open(struct inode *inode, struct file *file)
-{
- return single_open(file, dma_buf_show, inode->i_private);
-}
-static const struct file_operations dma_buf_debug_fops = {
- .open = dma_buf_debug_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = single_release,
-};
-static struct dentry *dma_buf_debugfs_dir;
-static int dma_buf_init_debugfs(void)
-{
- int err = 0;
- dma_buf_debugfs_dir = debugfs_create_dir("dma_buf", NULL);
- if (IS_ERR(dma_buf_debugfs_dir)) {
err = PTR_ERR(dma_buf_debugfs_dir);
dma_buf_debugfs_dir = NULL;
return err;
- }
- err = dma_buf_debugfs_create_file("bufinfo", dma_buf_describe);
- if (err)
pr_debug("dma_buf: debugfs: failed to create node bufinfo\n");
- return err;
-}
-static void dma_buf_uninit_debugfs(void)
-{
- if (dma_buf_debugfs_dir)
debugfs_remove_recursive(dma_buf_debugfs_dir);
-}
-int dma_buf_debugfs_create_file(const char *name,
int (*write)(struct seq_file *))
-{
- struct dentry *d;
- d = debugfs_create_file(name, S_IRUGO, dma_buf_debugfs_dir,
write, &dma_buf_debug_fops);
- return PTR_ERR_OR_ZERO(d);
-}
-#else
-static inline int dma_buf_init_debugfs(void)
-{
- return 0;
-}
-static inline void dma_buf_uninit_debugfs(void)
-{
-}
-#endif
-static int __init dma_buf_init(void)
-{
- mutex_init(&db_list.lock);
- INIT_LIST_HEAD(&db_list.head);
- dma_buf_init_debugfs();
- return 0;
-}
-subsys_initcall(dma_buf_init);
-static void __exit dma_buf_deinit(void)
-{
- dma_buf_uninit_debugfs();
-}
-__exitcall(dma_buf_deinit);
diff --git a/drivers/base/reservation.c b/drivers/base/reservation.c
deleted file mode 100644
index a73fbf3b8e56..000000000000
--- a/drivers/base/reservation.c
+++ /dev/null
@@ -1,39 +0,0 @@
-/*
- Copyright (C) 2012-2013 Canonical Ltd
- Based on bo.c which bears the following copyright notice,
- but is dual licensed:
- Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
- All Rights Reserved.
- Permission is hereby granted, free of charge, to any person obtaining a
- copy of this software and associated documentation files (the
- "Software"), to deal in the Software without restriction, including
- without limitation the rights to use, copy, modify, merge, publish,
- distribute, sub license, and/or sell copies of the Software, and to
- permit persons to whom the Software is furnished to do so, subject to
- the following conditions:
- The above copyright notice and this permission notice (including the
- next paragraph) shall be included in all copies or substantial portions
- of the Software.
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
- THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
- DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
- OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
- USE OR OTHER DEALINGS IN THE SOFTWARE.
- **************************************************************************/
-/*
- Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
- */
-#include <linux/reservation.h>
-#include <linux/export.h>
-DEFINE_WW_CLASS(reservation_ww_class);
-EXPORT_SYMBOL(reservation_ww_class);
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
new file mode 100644
index 000000000000..4a4f4c9bacd0
--- /dev/null
+++ b/drivers/dma-buf/Makefile
@@ -0,0 +1 @@
+obj-y := dma-buf.o reservation.o
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
new file mode 100644
index 000000000000..840c7fa80983
diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
new file mode 100644
index 000000000000..a73fbf3b8e56
On 01-07-14 13:06, Arend van Spriel wrote:
On 01-07-14 12:57, Maarten Lankhorst wrote:
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com
It would help to use the '-M' option with format-patch for this kind of rework.
Regards, Arend
Thanks, was looking for some option but didn't find it.
Have a rediff below. :-)

8< --------
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 Documentation/DocBook/device-drivers.tmpl | 3 +--
 MAINTAINERS                               | 4 ++--
 drivers/Makefile                          | 1 +
 drivers/base/Makefile                     | 1 -
 drivers/dma-buf/Makefile                  | 1 +
 drivers/{base => dma-buf}/dma-buf.c       | 0
 drivers/{base => dma-buf}/reservation.c   | 0
 7 files changed, 5 insertions(+), 5 deletions(-)
 create mode 100644 drivers/dma-buf/Makefile
 rename drivers/{base => dma-buf}/dma-buf.c (100%)
 rename drivers/{base => dma-buf}/reservation.c (100%)
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index cc63f30de166..ac61ebd92875 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -128,8 +128,7 @@ X!Edrivers/base/interface.c
 !Edrivers/base/bus.c
      </sect1>
      <sect1><title>Device Drivers DMA Management</title>
-!Edrivers/base/dma-buf.c
-!Edrivers/base/reservation.c
+!Edrivers/dma-buf/dma-buf.c
 !Iinclude/linux/reservation.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 134483f206e4..c948e53a4ee6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2882,8 +2882,8 @@ S: Maintained
 L: linux-media@vger.kernel.org
 L: dri-devel@lists.freedesktop.org
 L: linaro-mm-sig@lists.linaro.org
-F: drivers/base/dma-buf*
-F: include/linux/dma-buf*
+F: drivers/dma-buf/
+F: include/linux/dma-buf* include/linux/reservation.h
 F: Documentation/dma-buf-sharing.txt
 T: git git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git
diff --git a/drivers/Makefile b/drivers/Makefile
index f98b50d8251d..c00337be5351 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -61,6 +61,7 @@ obj-$(CONFIG_FB_INTEL) += video/fbdev/intelfb/
 
 obj-$(CONFIG_PARPORT) += parport/
 obj-y += base/ block/ misc/ mfd/ nfc/
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
 obj-$(CONFIG_NUBUS) += nubus/
 obj-y += macintosh/
 obj-$(CONFIG_IDE) += ide/
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 04b314e0fa51..4aab26ec0292 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,6 @@ obj-$(CONFIG_DMA_CMA) += dma-contiguous.o
 obj-y += power/
 obj-$(CONFIG_HAS_DMA) += dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o
 obj-$(CONFIG_ISA) += isa.o
 obj-$(CONFIG_FW_LOADER) += firmware_class.o
 obj-$(CONFIG_NUMA) += node.o
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
new file mode 100644
index 000000000000..4a4f4c9bacd0
--- /dev/null
+++ b/drivers/dma-buf/Makefile
@@ -0,0 +1 @@
+obj-y := dma-buf.o reservation.o
diff --git a/drivers/base/dma-buf.c b/drivers/dma-buf/dma-buf.c
similarity index 100%
rename from drivers/base/dma-buf.c
rename to drivers/dma-buf/dma-buf.c
diff --git a/drivers/base/reservation.c b/drivers/dma-buf/reservation.c
similarity index 100%
rename from drivers/base/reservation.c
rename to drivers/dma-buf/reservation.c
A fence can be attached to a buffer which is being filled or consumed by hw, to allow userspace to pass the buffer to another device without waiting. For example, userspace can call the page_flip ioctl to display the next frame of graphics after kicking the GPU, but while the GPU is still rendering. The display device sharing the buffer with the GPU would attach a callback to get notified when the GPU's rendering-complete IRQ fires, to update the scan-out address of the display, without having to wake up userspace.
A driver must allocate a fence context for each execution ring that can run in parallel. The function for this takes an argument specifying how many contexts to allocate:
  + fence_context_alloc()
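As a rough sketch (the my_driver structure and ring count below are made up for illustration, not part of the patch), a driver with several independent rings could grab one context per ring up front:

    #include <linux/fence.h>

    #define MY_NUM_RINGS 4 /* hypothetical: one ring == one execution context */

    struct my_driver {
        unsigned first_context; /* base value returned by fence_context_alloc() */
        unsigned last_seqno[MY_NUM_RINGS]; /* last emitted seqno, per ring */
    };

    static void my_driver_fence_setup(struct my_driver *drv)
    {
        /* ring i then uses fence context drv->first_context + i */
        drv->first_context = fence_context_alloc(MY_NUM_RINGS);
    }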
A fence is a transient, one-shot deal. It is allocated and attached to one or more dma-bufs. When the one that attached it is done with the pending operation, it can signal the fence:
  + fence_signal()
To get a rough indication of whether a fence has fired, call:
  + fence_is_signaled()
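Putting those two together, a minimal sketch of the producer side (the my_* names are placeholders): signal from the completion interrupt, and poll without blocking elsewhere:

    #include <linux/fence.h>

    /* hypothetical completion irq handler: the hw finished the
     * operation that 'fence' was attached for */
    static void my_ring_irq(struct fence *fence)
    {
        /* unblocks fence_wait() callers and runs the registered
         * callbacks; returns -EINVAL if it was already signaled */
        fence_signal(fence);
    }

    /* cheap, non-blocking check */
    static bool my_op_finished(struct fence *fence)
    {
        return fence_is_signaled(fence);
    }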
The dma-buf-mgr handles tracking, and waiting on, the fences associated with a dma-buf.
A waiter pending on the fence can add an async callback:
  + fence_add_callback()
The callback can optionally be cancelled with:
  + fence_remove_callback()
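A sketch of the callback path, using the three-argument fence_add_callback() from this series (the my_* names are invented): the fence_cb is embedded in the waiter's own state and recovered with container_of(), and since the callback may run from irq context it must not sleep:

    #include <linux/fence.h>

    struct my_waiter {
        struct fence_cb cb; /* must stay alive until the callback fires */
        /* ... driver private state ... */
    };

    static void my_fence_cb(struct fence *fence, struct fence_cb *cb)
    {
        struct my_waiter *w = container_of(cb, struct my_waiter, cb);

        /* atomic context: just schedule work here, don't sleep */
    }

    static int my_wait_async(struct fence *fence, struct my_waiter *w)
    {
        int ret = fence_add_callback(fence, &w->cb, my_fence_cb);

        /* -ENOENT: fence already signaled, my_fence_cb will never run */
        return ret == -ENOENT ? 0 : ret;
    }

Cancelling with fence_remove_callback(fence, &w->cb) returns true only if the callback had not run yet, so the caller knows whether it still owns the fence_cb.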
To wait synchronously, optionally with a timeout:
  + fence_wait()
  + fence_wait_timeout()
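For example, a blocking wait with a one-second timeout might look like this (a sketch only; how the result is mapped to an errno is the caller's choice):

    #include <linux/fence.h>
    #include <linux/jiffies.h>

    static int my_wait_for_fence(struct fence *fence)
    {
        /* interruptible wait, at most HZ jiffies */
        signed long ret = fence_wait_timeout(fence, true, HZ);

        if (ret == 0)
            return -ETIMEDOUT; /* timeout elapsed */
        if (ret < 0)
            return ret; /* e.g. -ERESTARTSYS */
        return 0; /* signaled, with 'ret' jiffies to spare */
    }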
When emitting a fence, call:
  + trace_fence_emit()
To annotate that a fence is blocking on another fence, call:
  + trace_fence_annotate_wait_on(fence, on_fence)
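In a driver's submission path the two tracepoints would be used roughly like this (my_emit_to_ring() is a made-up stand-in for the hw-specific part):

    #include <linux/fence.h>
    #include <trace/events/fence.h>

    static void my_submit(struct fence *out, struct fence *in)
    {
        /* 'out' cannot signal before its dependency 'in' has */
        if (in)
            trace_fence_annotate_wait_on(out, in);

        my_emit_to_ring(out); /* hw-specific: queue work + seqno write */
        trace_fence_emit(out);
    }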
A default software-only implementation is provided, which can be used by drivers attaching a fence to a buffer when they have no other means for hw sync. But a memory-backed fence is also envisioned, because it is common that GPUs can write to, or poll on, some memory location for synchronization. For example:
    fence = custom_get_fence(...);
    if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
        struct dma_buf *fence_buf = seqno_fence->sync_buf;
        get_dma_buf(fence_buf);

        ... tell the hw the memory location to wait on ...
        custom_wait_on(fence_buf, seqno_fence->seqno_ofs,
                       fence->seqno);
    } else {
        /* fall back to sw sync */
        fence_add_callback(fence, &my_cb, my_func);
    }
On SoC platforms, if some other hw mechanism is provided for synchronizing between IP blocks, it could be supported as an alternate implementation with its own fence ops in a similar way.
The enable_signaling callback is used to provide sw signaling in case a cpu waiter is requested, or when no compatible hardware signaling could be used.
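A minimal sketch of such a hook (my_seqno_passed() and my_enable_completion_irq() are hypothetical driver helpers); it is called with fence->lock held, and returning false tells the core the fence can be treated as already signaled:

    #include <linux/fence.h>

    static bool my_fence_enable_signaling(struct fence *fence)
    {
        /* hw already passed this point: no irq needed, the core
         * calls fence_signal_locked() for us when we return false */
        if (my_seqno_passed(fence))
            return false;

        /* arm the interrupt that will eventually call fence_signal() */
        my_enable_completion_irq(fence);
        return true;
    }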
The intention is to provide a userspace interface (presumably via eventfd) later, to be used in conjunction with dma-buf's mmap support for sw access to buffers (or for userspace apps that would prefer to do their own synchronization).
v1: Original
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided that dma-fence didn't need to care about the sw->hw signaling path (it can be handled same as the sw->sw case), and therefore the fence->ops can be simplified and more handled in the core. So remove the signal, add_callback, cancel_callback, and wait ops, and replace with a simple enable_signaling() op which can be used to inform a fence supporting hw->hw signaling that one or more devices which do not support hw signaling are waiting (and therefore it should enable an irq or do whatever is necessary in order that the CPU is notified when the fence is passed).
v3: Fix locking fail in attach_fence() and get_fence().
v4: Remove tie-in w/ dma-buf. After discussion w/ danvet and mlankhorst we decided that we need to be able to attach one fence to N dma-bufs, so using the list_head in the dma-fence struct would be problematic.
v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments about checking if the fence fired or not. This is broken by design. waitqueue_active during destruction is now fatal, since the signaller should be holding a reference in enable_signalling until it signalled the fence. Pass the original dma_fence_cb along, and call __remove_wait in the dma_fence_callback handler, so that no cleanup needs to be performed.
v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if the fence wasn't signaled yet, for example for hardware fences that may choose to signal blindly.
v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to the header and fixed the include mess. dma-fence.h now includes dma-buf.h. All members are now initialized, so kmalloc can be used for allocating a dma-fence. More documentation added.
v9: Change compiler bitfields to flags, change return type of enable_signaling to bool. Rework dma_fence_wait. Added dma_fence_is_signaled and dma_fence_wait_timeout. s/dma// and change exports to non GPL. Added fence_is_signaled and fence_enable_sw_signaling calls, add ability to override the default wait operation.
v10: Remove event_queue, use a custom list, export try_to_wake_up from the scheduler. Remove the fence lock and use a global spinlock instead; this should hopefully remove all the locking headaches I was having on trying to implement this. enable_signaling is called with this lock held.
v11: Use atomic ops for flags, lifting the need for some spin_lock_irqsaves. However I kept the guarantee that after fence_signal returns, it is guaranteed that enable_signaling has either been called to completion, or will not be called any more.
Add contexts and seqno to the base fence implementation. This allows you to wait for fewer fences, by testing for seqno + signaled, and then only waiting on the later fence.
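For instance, when tracking the newest fence per ring, a driver could keep only one of two fences from the same context (a sketch built on the fence_later() helper from this series):

    #include <linux/fence.h>

    static struct fence *my_keep_newest(struct fence *a, struct fence *b)
    {
        if (a->context != b->context)
            return NULL; /* different rings, not comparable */

        /* same context: the fence with the later seqno signals last,
         * so waiting on it alone is sufficient */
        return fence_later(a, b);
    }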
Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier. A CONFIG_DEBUG_FENCE option will be added to turn off the FENCE_TRACE spam, and another runtime option can turn it off at runtime.
v12: Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context and fence->seqno members.
v13: Fixup CONFIG_FENCE_TRACE kconfig description. Move fence_context_alloc to fence. Simplify fence_later. Kill the priv member of fence_cb.
v14: Remove the priv argument from fence_add_callback, oops!
v15: Remove priv from the documentation. Explicitly include linux/atomic.h.
v16: Add trace events. Import changes required by android syncpoints.
v17: Use wake_up_state instead of try_to_wake_up. (Colin Cross) Fix up commit description for seqno_fence. (Rob Clark)
v18: Rename release_fence to fence_release. Move to drivers/dma-buf/. Rename __fence_is_signaled and __fence_signal to *_locked. Rename __fence_init to fence_init. Make fence_default_wait return a signed long, and fix the wait ops too.
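The debug macros prefix each message with the fence's context and seqno; a sketch of their intended use (assuming the fence.h definitions in this patch):

    #include <linux/fence.h>

    static void my_debug_checkpoint(struct fence *fence)
    {
        /* compiled out unless CONFIG_FENCE_TRACE is enabled */
        FENCE_TRACE(fence, "queued to hw\n");

        if (fence_is_signaled(fence))
            FENCE_WARN(fence, "signaled before it was emitted\n");
    }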
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
Reviewed-by: Rob Clark <robdclark@gmail.com>
---
 Documentation/DocBook/device-drivers.tmpl |   2
 MAINTAINERS                               |   2
 drivers/base/Kconfig                      |   9 +
 drivers/dma-buf/Makefile                  |   2
 drivers/dma-buf/fence.c                   | 431 +++++++++++++++++++++++
 include/linux/fence.h                     | 343 +++++++++++++++++++
 include/trace/events/fence.h              | 128 +++++++++
 7 files changed, 915 insertions(+), 2 deletions(-)
 create mode 100644 drivers/dma-buf/fence.c
 create mode 100644 include/linux/fence.h
 create mode 100644 include/trace/events/fence.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index ac61ebd92875..e634657efb52 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -129,6 +129,8 @@ X!Edrivers/base/interface.c
      </sect1>
      <sect1><title>Device Drivers DMA Management</title>
 !Edrivers/dma-buf/dma-buf.c
+!Edrivers/dma-buf/fence.c
+!Iinclude/linux/fence.h
 !Iinclude/linux/reservation.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
diff --git a/MAINTAINERS b/MAINTAINERS
index c948e53a4ee6..ebc1ebf6f542 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2883,7 +2883,7 @@ L: linux-media@vger.kernel.org
 L: dri-devel@lists.freedesktop.org
 L: linaro-mm-sig@lists.linaro.org
 F: drivers/dma-buf/
-F: include/linux/dma-buf* include/linux/reservation.h
+F: include/linux/dma-buf* include/linux/reservation.h include/linux/fence.h
 F: Documentation/dma-buf-sharing.txt
 T: git git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig index 23b8726962af..00e13ce5cbbd 100644 --- a/drivers/base/Kconfig +++ b/drivers/base/Kconfig @@ -208,6 +208,15 @@ config DMA_SHARED_BUFFER APIs extension; the file's descriptor can then be passed on to other driver.
+config FENCE_TRACE + bool "Enable verbose FENCE_TRACE messages" + depends on DMA_SHARED_BUFFER + help + Enable the FENCE_TRACE printks. This will add extra + spam to the console log, but will make it easier to diagnose + lockup related problems for dma-buffers shared across multiple + devices. + config DMA_CMA bool "DMA Contiguous Memory Allocator" depends on HAVE_DMA_CONTIGUOUS && CMA diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile index 4a4f4c9bacd0..d7825bfe630e 100644 --- a/drivers/dma-buf/Makefile +++ b/drivers/dma-buf/Makefile @@ -1 +1 @@ -obj-y := dma-buf.o reservation.o +obj-y := dma-buf.o fence.o reservation.o diff --git a/drivers/dma-buf/fence.c b/drivers/dma-buf/fence.c new file mode 100644 index 000000000000..948bf00d955e --- /dev/null +++ b/drivers/dma-buf/fence.c @@ -0,0 +1,431 @@ +/* + * Fence mechanism for dma-buf and to allow for asynchronous dma access + * + * Copyright (C) 2012 Canonical Ltd + * Copyright (C) 2012 Texas Instruments + * + * Authors: + * Rob Clark robdclark@gmail.com + * Maarten Lankhorst maarten.lankhorst@canonical.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + */ + +#include <linux/slab.h> +#include <linux/export.h> +#include <linux/atomic.h> +#include <linux/fence.h> + +#define CREATE_TRACE_POINTS +#include <trace/events/fence.h> + +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on); +EXPORT_TRACEPOINT_SYMBOL(fence_emit); + +/** + * fence context counter: each execution context should have its own + * fence context, this allows checking if fences belong to the same + * context or not. One device can have multiple separate contexts, + * and they're used if some engine can run independently of another. + */ +static atomic_t fence_context_counter = ATOMIC_INIT(0); + +/** + * fence_context_alloc - allocate an array of fence contexts + * @num: [in] amount of contexts to allocate + * + * This function will return the first index of the number of fences allocated. + * The fence context is used for setting fence->context to a unique number. + */ +unsigned fence_context_alloc(unsigned num) +{ + BUG_ON(!num); + return atomic_add_return(num, &fence_context_counter) - num; +} +EXPORT_SYMBOL(fence_context_alloc); + +/** + * fence_signal_locked - signal completion of a fence + * @fence: the fence to signal + * + * Signal completion for software callbacks on a fence, this will unblock + * fence_wait() calls and run all the callbacks added with + * fence_add_callback(). Can be called multiple times, but since a fence + * can only go from unsignaled to signaled state, it will only be effective + * the first time. + * + * Unlike fence_signal, this function must be called with fence->lock held. 
+ */ +int fence_signal_locked(struct fence *fence) +{ + struct fence_cb *cur, *tmp; + int ret = 0; + + if (WARN_ON(!fence)) + return -EINVAL; + + if (!ktime_to_ns(fence->timestamp)) { + fence->timestamp = ktime_get(); + smp_mb__before_atomic(); + } + + if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) { + ret = -EINVAL; + + /* + * we might have raced with the unlocked fence_signal, + * still run through all callbacks + */ + } else + trace_fence_signaled(fence); + + list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) { + list_del_init(&cur->node); + cur->func(fence, cur); + } + return ret; +} +EXPORT_SYMBOL(fence_signal_locked); + +/** + * fence_signal - signal completion of a fence + * @fence: the fence to signal + * + * Signal completion for software callbacks on a fence, this will unblock + * fence_wait() calls and run all the callbacks added with + * fence_add_callback(). Can be called multiple times, but since a fence + * can only go from unsignaled to signaled state, it will only be effective + * the first time. + */ +int fence_signal(struct fence *fence) +{ + unsigned long flags; + + if (!fence) + return -EINVAL; + + if (!ktime_to_ns(fence->timestamp)) { + fence->timestamp = ktime_get(); + smp_mb__before_atomic(); + } + + if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + return -EINVAL; + + trace_fence_signaled(fence); + + if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) { + struct fence_cb *cur, *tmp; + + spin_lock_irqsave(fence->lock, flags); + list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) { + list_del_init(&cur->node); + cur->func(fence, cur); + } + spin_unlock_irqrestore(fence->lock, flags); + } + return 0; +} +EXPORT_SYMBOL(fence_signal); + +/** + * fence_wait_timeout - sleep until the fence gets signaled + * or until timeout elapses + * @fence: [in] the fence to wait on + * @intr: [in] if true, do an interruptible wait + * @timeout: [in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT + * + * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the + * remaining timeout in jiffies on success. Other error values may be + * returned on custom implementations. + * + * Performs a synchronous wait on this fence. It is assumed the caller + * directly or indirectly (buf-mgr between reservation and committing) + * holds a reference to the fence, otherwise the fence might be + * freed before return, resulting in undefined behavior. 
+ */ +signed long +fence_wait_timeout(struct fence *fence, bool intr, signed long timeout) +{ + signed long ret; + + if (WARN_ON(timeout < 0)) + return -EINVAL; + + trace_fence_wait_start(fence); + ret = fence->ops->wait(fence, intr, timeout); + trace_fence_wait_end(fence); + return ret; +} +EXPORT_SYMBOL(fence_wait_timeout); + +void fence_release(struct kref *kref) +{ + struct fence *fence = + container_of(kref, struct fence, refcount); + + trace_fence_destroy(fence); + + BUG_ON(!list_empty(&fence->cb_list)); + + if (fence->ops->release) + fence->ops->release(fence); + else + fence_free(fence); +} +EXPORT_SYMBOL(fence_release); + +void fence_free(struct fence *fence) +{ + kfree(fence); +} +EXPORT_SYMBOL(fence_free); + +/** + * fence_enable_sw_signaling - enable signaling on fence + * @fence: [in] the fence to enable + * + * this will request for sw signaling to be enabled, to make the fence + * complete as soon as possible + */ +void fence_enable_sw_signaling(struct fence *fence) +{ + unsigned long flags; + + if (!test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags) && + !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) { + trace_fence_enable_signal(fence); + + spin_lock_irqsave(fence->lock, flags); + + if (!fence->ops->enable_signaling(fence)) + fence_signal_locked(fence); + + spin_unlock_irqrestore(fence->lock, flags); + } +} +EXPORT_SYMBOL(fence_enable_sw_signaling); + +/** + * fence_add_callback - add a callback to be called when the fence + * is signaled + * @fence: [in] the fence to wait on + * @cb: [in] the callback to register + * @func: [in] the function to call + * + * cb will be initialized by fence_add_callback, no initialization + * by the caller is required. Any number of callbacks can be registered + * to a fence, but a callback can only be registered to one fence at a time. + * + * Note that the callback can be called from an atomic context. If + * fence is already signaled, this function will return -ENOENT (and + * *not* call the callback) + * + * Add a software callback to the fence. Same restrictions apply to + * refcount as it does to fence_wait, however the caller doesn't need to + * keep a refcount to fence afterwards: when software access is enabled, + * the creator of the fence is required to keep the fence alive until + * after it signals with fence_signal. The callback itself can be called + * from irq context. + * + */ +int fence_add_callback(struct fence *fence, struct fence_cb *cb, + fence_func_t func) +{ + unsigned long flags; + int ret = 0; + bool was_set; + + if (WARN_ON(!fence || !func)) + return -EINVAL; + + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) { + INIT_LIST_HEAD(&cb->node); + return -ENOENT; + } + + spin_lock_irqsave(fence->lock, flags); + + was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags); + + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + ret = -ENOENT; + else if (!was_set) { + trace_fence_enable_signal(fence); + + if (!fence->ops->enable_signaling(fence)) { + fence_signal_locked(fence); + ret = -ENOENT; + } + } + + if (!ret) { + cb->func = func; + list_add_tail(&cb->node, &fence->cb_list); + } else + INIT_LIST_HEAD(&cb->node); + spin_unlock_irqrestore(fence->lock, flags); + + return ret; +} +EXPORT_SYMBOL(fence_add_callback); + +/** + * fence_remove_callback - remove a callback from the signaling list + * @fence: [in] the fence to wait on + * @cb: [in] the callback to remove + * + * Remove a previously queued callback from the fence. 
This function returns + * true if the callback is successfully removed, or false if the fence has + * already been signaled. + * + * *WARNING*: + * Cancelling a callback should only be done if you really know what you're + * doing, since deadlocks and race conditions could occur all too easily. For + * this reason, it should only ever be done on hardware lockup recovery, + * with a reference held to the fence. + */ +bool +fence_remove_callback(struct fence *fence, struct fence_cb *cb) +{ + unsigned long flags; + bool ret; + + spin_lock_irqsave(fence->lock, flags); + + ret = !list_empty(&cb->node); + if (ret) + list_del_init(&cb->node); + + spin_unlock_irqrestore(fence->lock, flags); + + return ret; +} +EXPORT_SYMBOL(fence_remove_callback); + +struct default_wait_cb { + struct fence_cb base; + struct task_struct *task; +}; + +static void +fence_default_wait_cb(struct fence *fence, struct fence_cb *cb) +{ + struct default_wait_cb *wait = + container_of(cb, struct default_wait_cb, base); + + wake_up_state(wait->task, TASK_NORMAL); +} + +/** + * fence_default_wait - default sleep until the fence gets signaled + * or until timeout elapses + * @fence: [in] the fence to wait on + * @intr: [in] if true, do an interruptible wait + * @timeout: [in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT + * + * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the + * remaining timeout in jiffies on success. + */ +signed long +fence_default_wait(struct fence *fence, bool intr, signed long timeout) +{ + struct default_wait_cb cb; + unsigned long flags; + signed long ret = timeout; + bool was_set; + + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + return timeout; + + spin_lock_irqsave(fence->lock, flags); + + if (intr && signal_pending(current)) { + ret = -ERESTARTSYS; + goto out; + } + + was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags); + + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + goto out; + + if (!was_set) { + trace_fence_enable_signal(fence); + + if (!fence->ops->enable_signaling(fence)) { + fence_signal_locked(fence); + goto out; + } + } + + cb.base.func = fence_default_wait_cb; + cb.task = current; + list_add(&cb.base.node, &fence->cb_list); + + while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) { + if (intr) + __set_current_state(TASK_INTERRUPTIBLE); + else + __set_current_state(TASK_UNINTERRUPTIBLE); + spin_unlock_irqrestore(fence->lock, flags); + + ret = schedule_timeout(ret); + + spin_lock_irqsave(fence->lock, flags); + if (ret > 0 && intr && signal_pending(current)) + ret = -ERESTARTSYS; + } + + if (!list_empty(&cb.base.node)) + list_del(&cb.base.node); + __set_current_state(TASK_RUNNING); + +out: + spin_unlock_irqrestore(fence->lock, flags); + return ret; +} +EXPORT_SYMBOL(fence_default_wait); + +/** + * fence_init - Initialize a custom fence. + * @fence: [in] the fence to initialize + * @ops: [in] the fence_ops for operations on this fence + * @lock: [in] the irqsafe spinlock to use for locking this fence + * @context: [in] the execution context this fence is run on + * @seqno: [in] a linear increasing sequence number for this context + * + * Initializes an allocated fence, the caller doesn't have to keep its + * refcount after committing with this fence, but it will need to hold a + * refcount again if fence_ops.enable_signaling gets called. This can + * be used for implementing other types of fences.
+ * + * context and seqno are used for easy comparison between fences, allowing + * to check which fence is later by simply using fence_later. + */ +void +fence_init(struct fence *fence, const struct fence_ops *ops, + spinlock_t *lock, unsigned context, unsigned seqno) +{ + BUG_ON(!lock); + BUG_ON(!ops || !ops->wait || !ops->enable_signaling || + !ops->get_driver_name || !ops->get_timeline_name); + + kref_init(&fence->refcount); + fence->ops = ops; + INIT_LIST_HEAD(&fence->cb_list); + fence->lock = lock; + fence->context = context; + fence->seqno = seqno; + fence->flags = 0UL; + + trace_fence_init(fence); +} +EXPORT_SYMBOL(fence_init); diff --git a/include/linux/fence.h b/include/linux/fence.h new file mode 100644 index 000000000000..b935cc650123 --- /dev/null +++ b/include/linux/fence.h @@ -0,0 +1,343 @@ +/* + * Fence mechanism for dma-buf to allow for asynchronous dma access + * + * Copyright (C) 2012 Canonical Ltd + * Copyright (C) 2012 Texas Instruments + * + * Authors: + * Rob Clark robdclark@gmail.com + * Maarten Lankhorst maarten.lankhorst@canonical.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + */ + +#ifndef __LINUX_FENCE_H +#define __LINUX_FENCE_H + +#include <linux/err.h> +#include <linux/wait.h> +#include <linux/list.h> +#include <linux/bitops.h> +#include <linux/kref.h> +#include <linux/sched.h> +#include <linux/printk.h> + +struct fence; +struct fence_ops; +struct fence_cb; + +/** + * struct fence - software synchronization primitive + * @refcount: refcount for this fence + * @ops: fence_ops associated with this fence + * @cb_list: list of all callbacks to call + * @lock: spin_lock_irqsave used for locking + * @context: execution context this fence belongs to, returned by + * fence_context_alloc() + * @seqno: the sequence number of this fence inside the execution context, + * can be compared to decide which fence would be signaled later. + * @flags: A mask of FENCE_FLAG_* defined below + * @timestamp: Timestamp when the fence was signaled. + * @status: Optional, only valid if < 0, must be set before calling + * fence_signal, indicates that the fence has completed with an error. + * + * the flags member must be manipulated and read using the appropriate + * atomic ops (bit_*), so taking the spinlock will not be needed most + * of the time. + * + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called* + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the + * implementer of the fence for its own purposes. Can be used in different + * ways by different fence implementers, so do not rely on this. + * + * *) Since atomic bitops are used, this is not guaranteed to be the case. + * Particularly, if the bit was set, but fence_signal was called right + * before this bit was set, it would have been able to set the + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called. 
+ * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that + * after fence_signal was called, any enable_signaling call will have either + * been completed, or never called at all. + */ +struct fence { + struct kref refcount; + const struct fence_ops *ops; + struct list_head cb_list; + spinlock_t *lock; + unsigned context, seqno; + unsigned long flags; + ktime_t timestamp; + int status; +}; + +enum fence_flag_bits { + FENCE_FLAG_SIGNALED_BIT, + FENCE_FLAG_ENABLE_SIGNAL_BIT, + FENCE_FLAG_USER_BITS, /* must always be last member */ +}; + +typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb); + +/** + * struct fence_cb - callback for fence_add_callback + * @node: used by fence_add_callback to append this struct to fence::cb_list + * @func: fence_func_t to call + * + * This struct will be initialized by fence_add_callback, additional + * data can be passed along by embedding fence_cb in another struct. + */ +struct fence_cb { + struct list_head node; + fence_func_t func; +}; + +/** + * struct fence_ops - operations implemented for fence + * @get_driver_name: returns the driver name. + * @get_timeline_name: return the name of the context this fence belongs to. + * @enable_signaling: enable software signaling of fence. + * @signaled: [optional] peek whether the fence is signaled, can be null. + * @wait: custom wait implementation, or fence_default_wait. + * @release: [optional] called on destruction of fence, can be null + * @fill_driver_data: [optional] callback to fill in free-form debug info + * Returns amount of bytes filled, or -errno. + * @fence_value_str: [optional] fills in the value of the fence as a string + * @timeline_value_str: [optional] fills in the current value of the timeline + * as a string + * + * Notes on enable_signaling: + * For fence implementations that have the capability for hw->hw + * signaling, they can implement this op to enable the necessary + * irqs, or insert commands into cmdstream, etc. This is called + * in the first wait() or add_callback() path to let the fence + * implementation know that there is another driver waiting on + * the signal (ie. hw->sw case). + * + * This function can be called from atomic context, but not + * from irq context, so normal spinlocks can be used. + * + * A return value of false indicates the fence already passed, + * or some failure occurred that made it impossible to enable + * signaling. True indicates successful enabling. + * + * fence->status may be set in enable_signaling, but only when false is + * returned. + * + * Calling fence_signal before enable_signaling is called allows + * for a tiny race window in which enable_signaling is called during, + * before, or after fence_signal. To fight this, it is recommended + * that before enable_signaling returns true an extra reference is + * taken on the fence, to be released when the fence is signaled. + * This will mean fence_signal will still be called twice, but + * the second time will be a noop since it was already signaled. + * + * Notes on signaled: + * May set fence->status if returning true. + * + * Notes on wait: + * Must not be NULL, set to fence_default_wait for default implementation. + * The fence_default_wait implementation should work for any fence, as long + * as enable_signaling works correctly. + * + * Must return -ERESTARTSYS if the wait is intr = true and the wait was + * interrupted, and remaining jiffies if fence has signaled, or 0 if wait + * timed out.
Can also return other error values on custom implementations, + * which should be treated as if the fence is signaled. For example a hardware + * lockup could be reported like that. + * + * Notes on release: + * Can be NULL, this function allows additional commands to run on + * destruction of the fence. Can be called from irq context. + * If pointer is set to NULL, kfree will get called instead. + */ + +struct fence_ops { + const char * (*get_driver_name)(struct fence *fence); + const char * (*get_timeline_name)(struct fence *fence); + bool (*enable_signaling)(struct fence *fence); + bool (*signaled)(struct fence *fence); + signed long (*wait)(struct fence *fence, bool intr, signed long timeout); + void (*release)(struct fence *fence); + + int (*fill_driver_data)(struct fence *fence, void *data, int size); + void (*fence_value_str)(struct fence *fence, char *str, int size); + void (*timeline_value_str)(struct fence *fence, char *str, int size); +}; + +void fence_init(struct fence *fence, const struct fence_ops *ops, + spinlock_t *lock, unsigned context, unsigned seqno); + +void fence_release(struct kref *kref); +void fence_free(struct fence *fence); + +/** + * fence_get - increases refcount of the fence + * @fence: [in] fence to increase refcount of + * + * Returns the same fence, with refcount increased by 1. + */ +static inline struct fence *fence_get(struct fence *fence) +{ + if (fence) + kref_get(&fence->refcount); + return fence; +} + +/** + * fence_put - decreases refcount of the fence + * @fence: [in] fence to reduce refcount of + */ +static inline void fence_put(struct fence *fence) +{ + if (fence) + kref_put(&fence->refcount, fence_release); +} + +int fence_signal(struct fence *fence); +int fence_signal_locked(struct fence *fence); +signed long fence_default_wait(struct fence *fence, bool intr, signed long timeout); +int fence_add_callback(struct fence *fence, struct fence_cb *cb, + fence_func_t func); +bool fence_remove_callback(struct fence *fence, struct fence_cb *cb); +void fence_enable_sw_signaling(struct fence *fence); + +/** + * fence_is_signaled_locked - Return an indication if the fence is signaled yet. + * @fence: [in] the fence to check + * + * Returns true if the fence was already signaled, false if not. Since this + * function doesn't enable signaling, it is not guaranteed to ever return + * true if fence_add_callback, fence_wait or fence_enable_sw_signaling + * haven't been called before. + * + * This function requires fence->lock to be held. + */ +static inline bool +fence_is_signaled_locked(struct fence *fence) +{ + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + return true; + + if (fence->ops->signaled && fence->ops->signaled(fence)) { + fence_signal_locked(fence); + return true; + } + + return false; +} + +/** + * fence_is_signaled - Return an indication if the fence is signaled yet. + * @fence: [in] the fence to check + * + * Returns true if the fence was already signaled, false if not. Since this + * function doesn't enable signaling, it is not guaranteed to ever return + * true if fence_add_callback, fence_wait or fence_enable_sw_signaling + * haven't been called before. + * + * It's recommended for seqno fences to call fence_signal when the + * operation is complete, it makes it possible to prevent issues from + * wraparound between time of issue and time of use by checking the return + * value of this function before calling hardware-specific wait instructions. 
+ */ +static inline bool +fence_is_signaled(struct fence *fence) +{ + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) + return true; + + if (fence->ops->signaled && fence->ops->signaled(fence)) { + fence_signal(fence); + return true; + } + + return false; +} + +/** + * fence_later - return the chronologically later fence + * @f1: [in] the first fence from the same context + * @f2: [in] the second fence from the same context + * + * Returns NULL if both fences are signaled, otherwise the fence that would be + * signaled last. Both fences must be from the same context, since a seqno is + * not re-used across contexts. + */ +static inline struct fence *fence_later(struct fence *f1, struct fence *f2) +{ + if (WARN_ON(f1->context != f2->context)) + return NULL; + + /* + * can't check just FENCE_FLAG_SIGNALED_BIT here, it may never have been + * set if enable_signaling wasn't called, and enabling that here is + * overkill. + */ + if (f2->seqno - f1->seqno <= INT_MAX) + return fence_is_signaled(f2) ? NULL : f2; + else + return fence_is_signaled(f1) ? NULL : f1; +} + +signed long fence_wait_timeout(struct fence *, bool intr, signed long timeout); + + +/** + * fence_wait - sleep until the fence gets signaled + * @fence: [in] the fence to wait on + * @intr: [in] if true, do an interruptible wait + * + * This function will return -ERESTARTSYS if interrupted by a signal, + * or 0 if the fence was signaled. Other error values may be + * returned on custom implementations. + * + * Performs a synchronous wait on this fence. It is assumed the caller + * directly or indirectly holds a reference to the fence, otherwise the + * fence might be freed before return, resulting in undefined behavior. + */ +static inline signed long fence_wait(struct fence *fence, bool intr) +{ + signed long ret; + + /* Since fence_wait_timeout cannot timeout with + * MAX_SCHEDULE_TIMEOUT, only valid return values are + * -ERESTARTSYS and MAX_SCHEDULE_TIMEOUT. + */ + ret = fence_wait_timeout(fence, intr, MAX_SCHEDULE_TIMEOUT); + + return ret < 0 ? ret : 0; +} + +unsigned fence_context_alloc(unsigned num); + +#define FENCE_TRACE(f, fmt, args...) \ + do { \ + struct fence *__ff = (f); \ + if (config_enabled(CONFIG_FENCE_TRACE)) \ + pr_info("f %u#%u: " fmt, \ + __ff->context, __ff->seqno, ##args); \ + } while (0) + +#define FENCE_WARN(f, fmt, args...) \ + do { \ + struct fence *__ff = (f); \ + pr_warn("f %u#%u: " fmt, __ff->context, __ff->seqno, \ + ##args); \ + } while (0) + +#define FENCE_ERR(f, fmt, args...) \ + do { \ + struct fence *__ff = (f); \ + pr_err("f %u#%u: " fmt, __ff->context, __ff->seqno, \ + ##args); \ + } while (0) + +#endif /* __LINUX_FENCE_H */ diff --git a/include/trace/events/fence.h b/include/trace/events/fence.h new file mode 100644 index 000000000000..98feb1b82896 --- /dev/null +++ b/include/trace/events/fence.h @@ -0,0 +1,128 @@ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM fence + +#if !defined(_TRACE_FENCE_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_FENCE_H + +#include <linux/tracepoint.h> + +struct fence; + +TRACE_EVENT(fence_annotate_wait_on, + + /* fence: the fence waiting on f1, f1: the fence to be waited on. 
*/ + TP_PROTO(struct fence *fence, struct fence *f1), + + TP_ARGS(fence, f1), + + TP_STRUCT__entry( + __string(driver, fence->ops->get_driver_name(fence)) + __string(timeline, fence->ops->get_timeline_name(fence)) + __field(unsigned int, context) + __field(unsigned int, seqno) + + __string(waiting_driver, f1->ops->get_driver_name(f1)) + __string(waiting_timeline, f1->ops->get_timeline_name(f1)) + __field(unsigned int, waiting_context) + __field(unsigned int, waiting_seqno) + ), + + TP_fast_assign( + __assign_str(driver, fence->ops->get_driver_name(fence)) + __assign_str(timeline, fence->ops->get_timeline_name(fence)) + __entry->context = fence->context; + __entry->seqno = fence->seqno; + + __assign_str(waiting_driver, f1->ops->get_driver_name(f1)) + __assign_str(waiting_timeline, f1->ops->get_timeline_name(f1)) + __entry->waiting_context = f1->context; + __entry->waiting_seqno = f1->seqno; + + ), + + TP_printk("driver=%s timeline=%s context=%u seqno=%u " \ "waits on driver=%s timeline=%s context=%u seqno=%u", + __get_str(driver), __get_str(timeline), __entry->context, + __entry->seqno, + __get_str(waiting_driver), __get_str(waiting_timeline), + __entry->waiting_context, __entry->waiting_seqno) +); + +DECLARE_EVENT_CLASS(fence, + + TP_PROTO(struct fence *fence), + + TP_ARGS(fence), + + TP_STRUCT__entry( + __string(driver, fence->ops->get_driver_name(fence)) + __string(timeline, fence->ops->get_timeline_name(fence)) + __field(unsigned int, context) + __field(unsigned int, seqno) + ), + + TP_fast_assign( + __assign_str(driver, fence->ops->get_driver_name(fence)) + __assign_str(timeline, fence->ops->get_timeline_name(fence)) + __entry->context = fence->context; + __entry->seqno = fence->seqno; + ), + + TP_printk("driver=%s timeline=%s context=%u seqno=%u", + __get_str(driver), __get_str(timeline), __entry->context, + __entry->seqno) +); + +DEFINE_EVENT(fence, fence_emit, + + TP_PROTO(struct fence *fence), + + TP_ARGS(fence) +); + +DEFINE_EVENT(fence, fence_init, + + TP_PROTO(struct fence *fence), + + TP_ARGS(fence) +); + +DEFINE_EVENT(fence, fence_destroy, + + TP_PROTO(struct fence *fence), + + TP_ARGS(fence) +); + +DEFINE_EVENT(fence, fence_enable_signal, + + TP_PROTO(struct fence *fence), + + TP_ARGS(fence) +); + +DEFINE_EVENT(fence, fence_signaled, + + TP_PROTO(struct fence *fence), + + TP_ARGS(fence) +); + +DEFINE_EVENT(fence, fence_wait_start, + + TP_PROTO(struct fence *fence), + + TP_ARGS(fence) +); + +DEFINE_EVENT(fence, fence_wait_end, + + TP_PROTO(struct fence *fence), + + TP_ARGS(fence) +); + +#endif /* _TRACE_FENCE_H */ + +/* This part must be outside protection */ +#include <trace/define_trace.h>
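To illustrate the api: a driver only has to fill in the four mandatory ops and call fence_init to hand out fences. A minimal sketch, assuming the example_* names (they are made up here) and an irq handler that ends up calling fence_signal:

/* assumes <linux/fence.h>, <linux/slab.h> and <linux/spinlock.h> */

static const char *example_get_driver_name(struct fence *fence)
{
	return "example";
}

static const char *example_get_timeline_name(struct fence *fence)
{
	return "example-timeline";
}

static bool example_enable_signaling(struct fence *fence)
{
	/* arm the irq that will eventually call fence_signal(fence) */
	return true;
}

static const struct fence_ops example_fence_ops = {
	.get_driver_name = example_get_driver_name,
	.get_timeline_name = example_get_timeline_name,
	.enable_signaling = example_enable_signaling,
	.wait = fence_default_wait,	/* the generic wait is enough here */
};

static DEFINE_SPINLOCK(example_fence_lock);

static struct fence *example_fence_create(unsigned context, unsigned seqno)
{
	struct fence *fence = kzalloc(sizeof(*fence), GFP_KERNEL);

	if (!fence)
		return NULL;

	/* the lock must be irqsafe and shared by all fences on this context */
	fence_init(fence, &example_fence_ops, &example_fence_lock,
		   context, seqno);
	return fence;
}

context would come from fence_context_alloc(1) at init time; once the irq handler calls fence_signal, the callbacks added with fence_add_callback run and fence_default_wait sleepers wake up.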
This type of fence can be used for hardware synchronization by simple hardware that can block execution until the condition (dma_buf[offset] - value) >= 0 has been met when WAIT_GEQUAL is used, or until (dma_buf[offset] != 0) has been met when WAIT_NONZERO is set.
A software fallback still has to be provided in case the fence is used with a device that doesn't support this mechanism. It is useful to expose this for graphics cards that have an op to wait on such a condition in their command stream.
Some cards, like i915, can export such fences but have no way to wait on them in hardware, so they need the software fallback.
I extended the original patch by Rob Clark.
v1: Original v2: Renamed from bikeshed to seqno, moved into dma-fence.c since not much was left of the file. Lots of documentation added. v3: Use fence_ops instead of custom callbacks. Moved to own file to avoid circular dependency between dma-buf.h and fence.h v4: Add spinlock pointer to seqno_fence_init v5: Add condition member to allow wait for != 0. Fix small style errors pointed out by checkpatch. v6: Move to a separate file. Fix up api changes in fences.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com Reviewed-by: Rob Clark robdclark@gmail.com #v4 --- Documentation/DocBook/device-drivers.tmpl | 2 + MAINTAINERS | 2 - drivers/dma-buf/Makefile | 2 - drivers/dma-buf/seqno-fence.c | 73 ++++++++++++++++++ include/linux/seqno-fence.h | 116 +++++++++++++++++++++++++++++ 5 files changed, 193 insertions(+), 2 deletions(-) create mode 100644 drivers/dma-buf/seqno-fence.c create mode 100644 include/linux/seqno-fence.h
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl index e634657efb52..ed0ef00cd7bc 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -130,7 +130,9 @@ X!Edrivers/base/interface.c <sect1><title>Device Drivers DMA Management</title> !Edrivers/dma-buf/dma-buf.c !Edrivers/dma-buf/fence.c +!Edrivers/dma-buf/seqno-fence.c !Iinclude/linux/fence.h +!Iinclude/linux/seqno-fence.h !Iinclude/linux/reservation.h !Edrivers/base/dma-coherent.c !Edrivers/base/dma-mapping.c diff --git a/MAINTAINERS b/MAINTAINERS index ebc1ebf6f542..135929f6cf6a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2883,7 +2883,7 @@ L: linux-media@vger.kernel.org L: dri-devel@lists.freedesktop.org L: linaro-mm-sig@lists.linaro.org F: drivers/dma-buf/ -F: include/linux/dma-buf* include/linux/reservation.h include/linux/fence.h +F: include/linux/dma-buf* include/linux/reservation.h include/linux/*fence.h F: Documentation/dma-buf-sharing.txt T: git git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile index d7825bfe630e..57a675f90cd0 100644 --- a/drivers/dma-buf/Makefile +++ b/drivers/dma-buf/Makefile @@ -1 +1 @@ -obj-y := dma-buf.o fence.o reservation.o +obj-y := dma-buf.o fence.o reservation.o seqno-fence.o diff --git a/drivers/dma-buf/seqno-fence.c b/drivers/dma-buf/seqno-fence.c new file mode 100644 index 000000000000..7d12a39a4b57 --- /dev/null +++ b/drivers/dma-buf/seqno-fence.c @@ -0,0 +1,73 @@ +/* + * seqno-fence, using a dma-buf to synchronize fencing + * + * Copyright (C) 2012 Texas Instruments + * Copyright (C) 2012-2014 Canonical Ltd + * Authors: + * Rob Clark robdclark@gmail.com + * Maarten Lankhorst maarten.lankhorst@canonical.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + */ + +#include <linux/slab.h> +#include <linux/export.h> +#include <linux/seqno-fence.h> + +static const char *seqno_fence_get_driver_name(struct fence *fence) +{ + struct seqno_fence *seqno_fence = to_seqno_fence(fence); + return seqno_fence->ops->get_driver_name(fence); +} + +static const char *seqno_fence_get_timeline_name(struct fence *fence) +{ + struct seqno_fence *seqno_fence = to_seqno_fence(fence); + return seqno_fence->ops->get_timeline_name(fence); +} + +static bool seqno_enable_signaling(struct fence *fence) +{ + struct seqno_fence *seqno_fence = to_seqno_fence(fence); + return seqno_fence->ops->enable_signaling(fence); +} + +static bool seqno_signaled(struct fence *fence) +{ + struct seqno_fence *seqno_fence = to_seqno_fence(fence); + return seqno_fence->ops->signaled && seqno_fence->ops->signaled(fence); +} + +static void seqno_release(struct fence *fence) +{ + struct seqno_fence *f = to_seqno_fence(fence); + + dma_buf_put(f->sync_buf); + if (f->ops->release) + f->ops->release(fence); + else + fence_free(&f->base); +} + +static signed long seqno_wait(struct fence *fence, bool intr, signed long timeout) +{ + struct seqno_fence *f = to_seqno_fence(fence); + return f->ops->wait(fence, intr, timeout); +} + +const struct fence_ops seqno_fence_ops = { + .get_driver_name = seqno_fence_get_driver_name, + .get_timeline_name = seqno_fence_get_timeline_name, + .enable_signaling = seqno_enable_signaling, + .signaled = seqno_signaled, + .wait = seqno_wait, + .release = seqno_release, +}; +EXPORT_SYMBOL(seqno_fence_ops); diff --git a/include/linux/seqno-fence.h b/include/linux/seqno-fence.h new file mode 100644 index 000000000000..3d6003de4b0d --- /dev/null +++ b/include/linux/seqno-fence.h @@ -0,0 +1,116 @@ +/* + * seqno-fence, using a dma-buf to synchronize fencing + * + * Copyright (C) 2012 Texas Instruments + * Copyright (C) 2012 Canonical Ltd + * Authors: + * Rob Clark robdclark@gmail.com + * Maarten Lankhorst maarten.lankhorst@canonical.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for + * more details. + */ + +#ifndef __LINUX_SEQNO_FENCE_H +#define __LINUX_SEQNO_FENCE_H + +#include <linux/fence.h> +#include <linux/dma-buf.h> + +enum seqno_fence_condition { + SEQNO_FENCE_WAIT_GEQUAL, + SEQNO_FENCE_WAIT_NONZERO +}; + +struct seqno_fence { + struct fence base; + + const struct fence_ops *ops; + struct dma_buf *sync_buf; + uint32_t seqno_ofs; + enum seqno_fence_condition condition; +}; + +extern const struct fence_ops seqno_fence_ops; + +/** + * to_seqno_fence - cast a fence to a seqno_fence + * @fence: fence to cast to a seqno_fence + * + * Returns NULL if the fence is not a seqno_fence, + * or the seqno_fence otherwise. + */ +static inline struct seqno_fence * +to_seqno_fence(struct fence *fence) +{ + if (fence->ops != &seqno_fence_ops) + return NULL; + return container_of(fence, struct seqno_fence, base); +} + +/** + * seqno_fence_init - initialize a seqno fence + * @fence: seqno_fence to initialize + * @lock: pointer to spinlock to use for fence + * @sync_buf: buffer containing the memory location to signal on + * @context: the execution context this fence is a part of + * @seqno_ofs: the offset within @sync_buf + * @seqno: the sequence # to signal on + * @cond: the wait condition, SEQNO_FENCE_WAIT_GEQUAL or SEQNO_FENCE_WAIT_NONZERO + * @ops: the fence_ops for operations on this seqno fence + * + * This function initializes a struct seqno_fence with passed parameters, + * and takes a reference on sync_buf which is released on fence destruction. + * + * A seqno_fence is a fence which can complete in software when + * enable_signaling is called, but it also completes when + * (s32)((sync_buf)[seqno_ofs] - seqno) >= 0 is true + * + * The seqno_fence will take a refcount on the sync_buf until it's + * destroyed, but the actual lifetime of sync_buf may be longer if one of the + * callers takes a reference to it. + * + * Certain hardware has instructions to insert this type of wait condition + * in the command stream, so no intervention from software is needed. + * This type of fence can be destroyed before it has completed; however, a + * reference on the sync_buf dma-buf can be taken. It is encouraged to re-use + * the same dma-buf for sync_buf, since mapping or unmapping the sync_buf into + * the device's vm can be expensive. + * + * It is recommended for creators of seqno_fence to call fence_signal + * before destruction. This will prevent possible issues from wraparound at + * time of issue vs time of check, since users can check fence_is_signaled + * before submitting instructions for the hardware to wait on the fence. + * However, when ops.enable_signaling is not called, it doesn't have to be + * done as soon as possible, just before there's any real danger of seqno + * wraparound. + */ +static inline void +seqno_fence_init(struct seqno_fence *fence, spinlock_t *lock, + struct dma_buf *sync_buf, uint32_t context, + uint32_t seqno_ofs, uint32_t seqno, + enum seqno_fence_condition cond, + const struct fence_ops *ops) +{ + BUG_ON(!fence || !sync_buf || !ops); + BUG_ON(!ops->wait || !ops->enable_signaling || + !ops->get_driver_name || !ops->get_timeline_name); + + /* + * ops is used in fence_init for get_driver_name, so needs to be + * initialized first + */ + fence->ops = ops; + fence_init(&fence->base, &seqno_fence_ops, lock, context, seqno); + get_dma_buf(sync_buf); + fence->sync_buf = sync_buf; + fence->seqno_ofs = seqno_ofs; + fence->condition = cond; +} + +#endif /* __LINUX_SEQNO_FENCE_H */
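To make the wait condition concrete, a sketch of creating one of these fences; my_lock and my_fallback_ops are made-up names, and my_fallback_ops is the software fallback mentioned above (like any fence_ops, it has to provide the four mandatory callbacks):

/* assumes <linux/seqno-fence.h> and <linux/slab.h> */

static DEFINE_SPINLOCK(my_lock);
extern const struct fence_ops my_fallback_ops;	/* software fallback */

static struct seqno_fence *my_seqno_fence_create(struct dma_buf *sync_buf,
						 uint32_t context,
						 uint32_t seqno_ofs,
						 uint32_t seqno)
{
	struct seqno_fence *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;

	/* takes a reference on sync_buf, dropped again on fence release */
	seqno_fence_init(f, &my_lock, sync_buf, context, seqno_ofs, seqno,
			 SEQNO_FENCE_WAIT_GEQUAL, &my_fallback_ops);
	return f;
}

A device that understands the mechanism never needs my_fallback_ops: it blocks in its command stream until (s32)(sync_buf[seqno_ofs] - seqno) >= 0. Everything else goes through the fallback.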
This allows reservation objects to be used with dma-buf. It's required for implementing polling support on the fences that belong to a dma-buf.
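The visible change for exporters is one extra argument to dma_buf_export. A sketch of both cases, with my_buffer (including its resv member) and my_dmabuf_ops made up:

/* no reservation object of its own: pass NULL and dma-buf allocates
 * one inline, behind struct dma_buf */
static struct dma_buf *my_export(struct my_buffer *buf)
{
	return dma_buf_export(buf, &my_dmabuf_ops, buf->size, O_RDWR, NULL);
}

/* drivers that already track one per buffer (like the ttm-based ones
 * in this patch) pass it in, so fences are shared with the exporter */
static struct dma_buf *my_export_shared(struct my_buffer *buf)
{
	return dma_buf_export(buf, &my_dmabuf_ops, buf->size, O_RDWR,
			      buf->resv);
}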
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com Acked-by: Mauro Carvalho Chehab m.chehab@samsung.com #drivers/media/v4l2-core/ Acked-by: Thomas Hellstrom thellstrom@vmware.com #drivers/gpu/drm/ttm Signed-off-by: Vincent Stehlé vincent.stehle@laposte.net #drivers/gpu/drm/armada/ --- drivers/dma-buf/dma-buf.c | 22 ++++++++++++++++++++-- drivers/gpu/drm/armada/armada_gem.c | 2 +- drivers/gpu/drm/drm_prime.c | 8 +++++++- drivers/gpu/drm/exynos/exynos_drm_dmabuf.c | 2 +- drivers/gpu/drm/i915/i915_gem_dmabuf.c | 3 ++- drivers/gpu/drm/nouveau/nouveau_drm.c | 1 + drivers/gpu/drm/nouveau/nouveau_gem.h | 1 + drivers/gpu/drm/nouveau/nouveau_prime.c | 7 +++++++ drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 2 +- drivers/gpu/drm/radeon/radeon_drv.c | 2 ++ drivers/gpu/drm/radeon/radeon_prime.c | 8 ++++++++ drivers/gpu/drm/tegra/gem.c | 2 +- drivers/gpu/drm/ttm/ttm_object.c | 2 +- drivers/media/v4l2-core/videobuf2-dma-contig.c | 2 +- drivers/staging/android/ion/ion.c | 3 ++- include/drm/drmP.h | 3 +++ include/linux/dma-buf.h | 9 ++++++--- 17 files changed, 65 insertions(+), 14 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 840c7fa80983..cd40ca22911f 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -25,10 +25,12 @@ #include <linux/fs.h> #include <linux/slab.h> #include <linux/dma-buf.h> +#include <linux/fence.h> #include <linux/anon_inodes.h> #include <linux/export.h> #include <linux/debugfs.h> #include <linux/seq_file.h> +#include <linux/reservation.h>
static inline int is_dma_buf_file(struct file *);
@@ -56,6 +58,9 @@ static int dma_buf_release(struct inode *inode, struct file *file) list_del(&dmabuf->list_node); mutex_unlock(&db_list.lock);
+ if (dmabuf->resv == (struct reservation_object *)&dmabuf[1]) + reservation_object_fini(dmabuf->resv); + kfree(dmabuf); return 0; } @@ -128,6 +133,7 @@ static inline int is_dma_buf_file(struct file *file) * @size: [in] Size of the buffer * @flags: [in] mode flags for the file. * @exp_name: [in] name of the exporting module - useful for debugging. + * @resv: [in] reservation-object, NULL to allocate default one. * * Returns, on success, a newly created dma_buf object, which wraps the * supplied private data and operations for dma_buf_ops. On either missing @@ -135,10 +141,17 @@ static inline int is_dma_buf_file(struct file *file) * */ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, - size_t size, int flags, const char *exp_name) + size_t size, int flags, const char *exp_name, + struct reservation_object *resv) { struct dma_buf *dmabuf; struct file *file; + size_t alloc_size = sizeof(struct dma_buf); + if (!resv) + alloc_size += sizeof(struct reservation_object); + else + /* prevent &dma_buf[1] == dma_buf->resv */ + alloc_size += 1;
if (WARN_ON(!priv || !ops || !ops->map_dma_buf @@ -150,7 +163,7 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, return ERR_PTR(-EINVAL); }
- dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL); + dmabuf = kzalloc(alloc_size, GFP_KERNEL); if (dmabuf == NULL) return ERR_PTR(-ENOMEM);
@@ -158,6 +171,11 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, dmabuf->ops = ops; dmabuf->size = size; dmabuf->exp_name = exp_name; + if (!resv) { + resv = (struct reservation_object *)&dmabuf[1]; + reservation_object_init(resv); + } + dmabuf->resv = resv;
file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags); if (IS_ERR(file)) { diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c index bb9b642d8485..7496f55611a5 100644 --- a/drivers/gpu/drm/armada/armada_gem.c +++ b/drivers/gpu/drm/armada/armada_gem.c @@ -539,7 +539,7 @@ armada_gem_prime_export(struct drm_device *dev, struct drm_gem_object *obj, int flags) { return dma_buf_export(obj, &armada_gem_prime_dmabuf_ops, obj->size, - O_RDWR); + O_RDWR, NULL); }
struct drm_gem_object * diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 304ca8cacbc4..99d578bad17e 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -336,7 +336,13 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = { struct dma_buf *drm_gem_prime_export(struct drm_device *dev, struct drm_gem_object *obj, int flags) { - return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, flags); + struct reservation_object *robj = NULL; + + if (dev->driver->gem_prime_res_obj) + robj = dev->driver->gem_prime_res_obj(obj); + + return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, + flags, robj); } EXPORT_SYMBOL(drm_gem_prime_export);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c b/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c index 2a3ad24276f8..60192ed544f0 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c +++ b/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c @@ -187,7 +187,7 @@ struct dma_buf *exynos_dmabuf_prime_export(struct drm_device *drm_dev, struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
return dma_buf_export(obj, &exynos_dmabuf_ops, - exynos_gem_obj->base.size, flags); + exynos_gem_obj->base.size, flags, NULL); }
struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev, diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c index 580aa42443ed..82a1f4b57778 100644 --- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c @@ -237,7 +237,8 @@ struct dma_buf *i915_gem_prime_export(struct drm_device *dev, return ERR_PTR(ret); }
- return dma_buf_export(gem_obj, &i915_dmabuf_ops, gem_obj->size, flags); + return dma_buf_export(gem_obj, &i915_dmabuf_ops, gem_obj->size, flags, + NULL); }
static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c index ddd83756b9a2..e8ae68a9aaf1 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drm.c +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c @@ -844,6 +844,7 @@ driver = { .gem_prime_export = drm_gem_prime_export, .gem_prime_import = drm_gem_prime_import, .gem_prime_pin = nouveau_gem_prime_pin, + .gem_prime_res_obj = nouveau_gem_prime_res_obj, .gem_prime_unpin = nouveau_gem_prime_unpin, .gem_prime_get_sg_table = nouveau_gem_prime_get_sg_table, .gem_prime_import_sg_table = nouveau_gem_prime_import_sg_table, diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h index 7caca057bc38..ddab762d81fe 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.h +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h @@ -35,6 +35,7 @@ extern int nouveau_gem_ioctl_info(struct drm_device *, void *, struct drm_file *);
extern int nouveau_gem_prime_pin(struct drm_gem_object *); +struct reservation_object *nouveau_gem_prime_res_obj(struct drm_gem_object *); extern void nouveau_gem_prime_unpin(struct drm_gem_object *); extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *); extern struct drm_gem_object *nouveau_gem_prime_import_sg_table( diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c index 51a2cb102b44..1f51008e4d26 100644 --- a/drivers/gpu/drm/nouveau/nouveau_prime.c +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c @@ -102,3 +102,10 @@ void nouveau_gem_prime_unpin(struct drm_gem_object *obj)
nouveau_bo_unpin(nvbo); } + +struct reservation_object *nouveau_gem_prime_res_obj(struct drm_gem_object *obj) +{ + struct nouveau_bo *nvbo = nouveau_gem_object(obj); + + return nvbo->bo.resv; +} diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c index 4fcca8d42796..a2dbfb1737b4 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c @@ -171,7 +171,7 @@ static struct dma_buf_ops omap_dmabuf_ops = { struct dma_buf *omap_gem_prime_export(struct drm_device *dev, struct drm_gem_object *obj, int flags) { - return dma_buf_export(obj, &omap_dmabuf_ops, obj->size, flags); + return dma_buf_export(obj, &omap_dmabuf_ops, obj->size, flags, NULL); }
struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev, diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c index 6e3017413386..9b30e0ce8a4d 100644 --- a/drivers/gpu/drm/radeon/radeon_drv.c +++ b/drivers/gpu/drm/radeon/radeon_drv.c @@ -132,6 +132,7 @@ struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev, struct sg_table *sg); int radeon_gem_prime_pin(struct drm_gem_object *obj); void radeon_gem_prime_unpin(struct drm_gem_object *obj); +struct reservation_object *radeon_gem_prime_res_obj(struct drm_gem_object *); void *radeon_gem_prime_vmap(struct drm_gem_object *obj); void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); extern long radeon_kms_compat_ioctl(struct file *filp, unsigned int cmd, @@ -562,6 +563,7 @@ static struct drm_driver kms_driver = { .gem_prime_import = drm_gem_prime_import, .gem_prime_pin = radeon_gem_prime_pin, .gem_prime_unpin = radeon_gem_prime_unpin, + .gem_prime_res_obj = radeon_gem_prime_res_obj, .gem_prime_get_sg_table = radeon_gem_prime_get_sg_table, .gem_prime_import_sg_table = radeon_gem_prime_import_sg_table, .gem_prime_vmap = radeon_gem_prime_vmap, diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c index 20074560fc25..28d71070c389 100644 --- a/drivers/gpu/drm/radeon/radeon_prime.c +++ b/drivers/gpu/drm/radeon/radeon_prime.c @@ -103,3 +103,11 @@ void radeon_gem_prime_unpin(struct drm_gem_object *obj) radeon_bo_unpin(bo); radeon_bo_unreserve(bo); } + + +struct reservation_object *radeon_gem_prime_res_obj(struct drm_gem_object *obj) +{ + struct radeon_bo *bo = gem_to_radeon_bo(obj); + + return bo->tbo.resv; +} diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c index aa85b7b26f10..78cc8143760a 100644 --- a/drivers/gpu/drm/tegra/gem.c +++ b/drivers/gpu/drm/tegra/gem.c @@ -420,7 +420,7 @@ struct dma_buf *tegra_gem_prime_export(struct drm_device *drm, int flags) { return dma_buf_export(gem, &tegra_gem_prime_dmabuf_ops, gem->size, - flags); + flags, NULL); }
struct drm_gem_object *tegra_gem_prime_import(struct drm_device *drm, diff --git a/drivers/gpu/drm/ttm/ttm_object.c b/drivers/gpu/drm/ttm/ttm_object.c index d2a053352789..12c87110db3a 100644 --- a/drivers/gpu/drm/ttm/ttm_object.c +++ b/drivers/gpu/drm/ttm/ttm_object.c @@ -695,7 +695,7 @@ int ttm_prime_handle_to_fd(struct ttm_object_file *tfile, }
dma_buf = dma_buf_export(prime, &tdev->ops, - prime->size, flags); + prime->size, flags, NULL); if (IS_ERR(dma_buf)) { ret = PTR_ERR(dma_buf); ttm_mem_global_free(tdev->mem_glob, diff --git a/drivers/media/v4l2-core/videobuf2-dma-contig.c b/drivers/media/v4l2-core/videobuf2-dma-contig.c index 880be0782dd9..c4e4dfa8123a 100644 --- a/drivers/media/v4l2-core/videobuf2-dma-contig.c +++ b/drivers/media/v4l2-core/videobuf2-dma-contig.c @@ -404,7 +404,7 @@ static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv, unsigned long flags) if (WARN_ON(!buf->sgt_base)) return NULL;
- dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags); + dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags, NULL); if (IS_ERR(dbuf)) return NULL;
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c index 389b8f67a2ec..270360912b2c 100644 --- a/drivers/staging/android/ion/ion.c +++ b/drivers/staging/android/ion/ion.c @@ -1120,7 +1120,8 @@ struct dma_buf *ion_share_dma_buf(struct ion_client *client, ion_buffer_get(buffer); mutex_unlock(&client->lock);
- dmabuf = dma_buf_export(buffer, &dma_buf_ops, buffer->size, O_RDWR); + dmabuf = dma_buf_export(buffer, &dma_buf_ops, buffer->size, O_RDWR, + NULL); if (IS_ERR(dmabuf)) { ion_buffer_put(buffer); return dmabuf; diff --git a/include/drm/drmP.h b/include/drm/drmP.h index 8af71a8e2c00..e41f17ea1f13 100644 --- a/include/drm/drmP.h +++ b/include/drm/drmP.h @@ -83,6 +83,7 @@ struct drm_device;
struct device_node; struct videomode; +struct reservation_object;
#include <drm/drm_os_linux.h> #include <drm/drm_hashtab.h> @@ -923,6 +924,8 @@ struct drm_driver { /* low-level interface used by drm_gem_prime_{import,export} */ int (*gem_prime_pin)(struct drm_gem_object *obj); void (*gem_prime_unpin)(struct drm_gem_object *obj); + struct reservation_object * (*gem_prime_res_obj)( + struct drm_gem_object *obj); struct sg_table *(*gem_prime_get_sg_table)(struct drm_gem_object *obj); struct drm_gem_object *(*gem_prime_import_sg_table)( struct drm_device *dev, size_t size, diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index f886985a28b2..fd7def2e0ae2 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -115,6 +115,7 @@ struct dma_buf_ops { * @exp_name: name of the exporter; useful for debugging. * @list_node: node for dma_buf accounting and debugging. * @priv: exporter specific private data for this buffer object. + * @resv: reservation object linked to this dma-buf */ struct dma_buf { size_t size; @@ -128,6 +129,7 @@ struct dma_buf { const char *exp_name; struct list_head list_node; void *priv; + struct reservation_object *resv; };
/** @@ -168,10 +170,11 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *dmabuf_attach);
struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, - size_t size, int flags, const char *); + size_t size, int flags, const char *, + struct reservation_object *);
-#define dma_buf_export(priv, ops, size, flags) \ - dma_buf_export_named(priv, ops, size, flags, KBUILD_MODNAME) +#define dma_buf_export(priv, ops, size, flags, resv) \ + dma_buf_export_named(priv, ops, size, flags, KBUILD_MODNAME, resv)
int dma_buf_fd(struct dma_buf *dmabuf, int flags); struct dma_buf *dma_buf_get(int fd);
Just to show it's easy.
Android syncpoints can be mapped to a timeline. This removes the need to maintain a separate api for synchronization. I've left the android trace events in place, but the core fence events should already be sufficient for debugging.
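From the driver side the flow stays the same; a sketch with a made-up my_timeline_ops (error handling left out):

struct sync_timeline *tl;
struct sync_pt *pt;
struct sync_fence *f;

tl = sync_timeline_create(&my_timeline_ops, sizeof(*tl), "example");
pt = sync_pt_create(tl, sizeof(*pt));	/* fence_init()s &pt->base on tl's context */
f = sync_fence_create("example", pt);	/* hooks fence_check_cb_func to &pt->base */

/* when the hardware moves the timeline forward: */
sync_timeline_signal(tl);		/* signals through fence_is_signaled_locked() */

The difference is all below the surface: &pt->base is now a plain struct fence, so the same sync point can also be waited on with fence_wait or handed to any other fence user, without Android-specific code.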
v2: - Call fence_remove_callback in sync_fence_free if not all fences have fired. v3: - Merge Colin Cross' bugfixes, and the android fence merge optimization. v4: - Merge with the upstream fixes. v5: - Fix small style issues pointed out by Thomas Hellstrom. v6: - Fix for updates to fence api.
Signed-off-by: Maarten Lankhorst maarten.lankhorst@canonical.com Acked-by: John Stultz john.stultz@linaro.org --- drivers/staging/android/Kconfig | 1 drivers/staging/android/Makefile | 2 drivers/staging/android/sw_sync.c | 6 drivers/staging/android/sync.c | 913 +++++++++++----------------------- drivers/staging/android/sync.h | 79 ++- drivers/staging/android/sync_debug.c | 247 +++++++++ drivers/staging/android/trace/sync.h | 12 7 files changed, 609 insertions(+), 651 deletions(-) create mode 100644 drivers/staging/android/sync_debug.c
diff --git a/drivers/staging/android/Kconfig b/drivers/staging/android/Kconfig index 99e484f845f2..51607e9aa049 100644 --- a/drivers/staging/android/Kconfig +++ b/drivers/staging/android/Kconfig @@ -88,6 +88,7 @@ config SYNC bool "Synchronization framework" default n select ANON_INODES + select DMA_SHARED_BUFFER ---help--- This option enables the framework for synchronization between multiple drivers. Sync implementations can take advantage of hardware diff --git a/drivers/staging/android/Makefile b/drivers/staging/android/Makefile index 0a01e1914905..517ad5ffa429 100644 --- a/drivers/staging/android/Makefile +++ b/drivers/staging/android/Makefile @@ -9,5 +9,5 @@ obj-$(CONFIG_ANDROID_TIMED_OUTPUT) += timed_output.o obj-$(CONFIG_ANDROID_TIMED_GPIO) += timed_gpio.o obj-$(CONFIG_ANDROID_LOW_MEMORY_KILLER) += lowmemorykiller.o obj-$(CONFIG_ANDROID_INTF_ALARM_DEV) += alarm-dev.o -obj-$(CONFIG_SYNC) += sync.o +obj-$(CONFIG_SYNC) += sync.o sync_debug.o obj-$(CONFIG_SW_SYNC) += sw_sync.o diff --git a/drivers/staging/android/sw_sync.c b/drivers/staging/android/sw_sync.c index 12a136ec1cec..a76db3ff87cb 100644 --- a/drivers/staging/android/sw_sync.c +++ b/drivers/staging/android/sw_sync.c @@ -50,7 +50,7 @@ static struct sync_pt *sw_sync_pt_dup(struct sync_pt *sync_pt) { struct sw_sync_pt *pt = (struct sw_sync_pt *) sync_pt; struct sw_sync_timeline *obj = - (struct sw_sync_timeline *)sync_pt->parent; + (struct sw_sync_timeline *)sync_pt_parent(sync_pt);
return (struct sync_pt *) sw_sync_pt_create(obj, pt->value); } @@ -59,7 +59,7 @@ static int sw_sync_pt_has_signaled(struct sync_pt *sync_pt) { struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt; struct sw_sync_timeline *obj = - (struct sw_sync_timeline *)sync_pt->parent; + (struct sw_sync_timeline *)sync_pt_parent(sync_pt);
return sw_sync_cmp(obj->value, pt->value) >= 0; } @@ -97,7 +97,6 @@ static void sw_sync_pt_value_str(struct sync_pt *sync_pt, char *str, int size) { struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt; - snprintf(str, size, "%d", pt->value); }
@@ -157,7 +156,6 @@ static int sw_sync_open(struct inode *inode, struct file *file) static int sw_sync_release(struct inode *inode, struct file *file) { struct sw_sync_timeline *obj = file->private_data; - sync_timeline_destroy(&obj->obj); return 0; } diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c index 18174f7c871c..c9a0c2cdc81a 100644 --- a/drivers/staging/android/sync.c +++ b/drivers/staging/android/sync.c @@ -31,22 +31,13 @@ #define CREATE_TRACE_POINTS #include "trace/sync.h"
-static void sync_fence_signal_pt(struct sync_pt *pt); -static int _sync_pt_has_signaled(struct sync_pt *pt); -static void sync_fence_free(struct kref *kref); -static void sync_dump(void); - -static LIST_HEAD(sync_timeline_list_head); -static DEFINE_SPINLOCK(sync_timeline_list_lock); - -static LIST_HEAD(sync_fence_list_head); -static DEFINE_SPINLOCK(sync_fence_list_lock); +static const struct fence_ops android_fence_ops; +static const struct file_operations sync_fence_fops;
struct sync_timeline *sync_timeline_create(const struct sync_timeline_ops *ops, int size, const char *name) { struct sync_timeline *obj; - unsigned long flags;
if (size < sizeof(struct sync_timeline)) return NULL; @@ -57,17 +48,14 @@ struct sync_timeline *sync_timeline_create(const struct sync_timeline_ops *ops,
kref_init(&obj->kref); obj->ops = ops; + obj->context = fence_context_alloc(1); strlcpy(obj->name, name, sizeof(obj->name));
INIT_LIST_HEAD(&obj->child_list_head); - spin_lock_init(&obj->child_list_lock); - INIT_LIST_HEAD(&obj->active_list_head); - spin_lock_init(&obj->active_list_lock); + spin_lock_init(&obj->child_list_lock);
- spin_lock_irqsave(&sync_timeline_list_lock, flags); - list_add_tail(&obj->sync_timeline_list, &sync_timeline_list_head); - spin_unlock_irqrestore(&sync_timeline_list_lock, flags); + sync_timeline_debug_add(obj);
return obj; } @@ -77,11 +65,8 @@ static void sync_timeline_free(struct kref *kref) { struct sync_timeline *obj = container_of(kref, struct sync_timeline, kref); - unsigned long flags;
- spin_lock_irqsave(&sync_timeline_list_lock, flags); - list_del(&obj->sync_timeline_list); - spin_unlock_irqrestore(&sync_timeline_list_lock, flags); + sync_timeline_debug_remove(obj);
if (obj->ops->release_obj) obj->ops->release_obj(obj); @@ -89,6 +74,16 @@ static void sync_timeline_free(struct kref *kref) kfree(obj); }
+static void sync_timeline_get(struct sync_timeline *obj) +{ + kref_get(&obj->kref); +} + +static void sync_timeline_put(struct sync_timeline *obj) +{ + kref_put(&obj->kref, sync_timeline_free); +} + void sync_timeline_destroy(struct sync_timeline *obj) { obj->destroyed = true; @@ -102,75 +97,33 @@ void sync_timeline_destroy(struct sync_timeline *obj) * signal any children that their parent is going away. */ sync_timeline_signal(obj); - - kref_put(&obj->kref, sync_timeline_free); + sync_timeline_put(obj); } EXPORT_SYMBOL(sync_timeline_destroy);
-static void sync_timeline_add_pt(struct sync_timeline *obj, struct sync_pt *pt) -{ - unsigned long flags; - - pt->parent = obj; - - spin_lock_irqsave(&obj->child_list_lock, flags); - list_add_tail(&pt->child_list, &obj->child_list_head); - spin_unlock_irqrestore(&obj->child_list_lock, flags); -} - -static void sync_timeline_remove_pt(struct sync_pt *pt) -{ - struct sync_timeline *obj = pt->parent; - unsigned long flags; - - spin_lock_irqsave(&obj->active_list_lock, flags); - if (!list_empty(&pt->active_list)) - list_del_init(&pt->active_list); - spin_unlock_irqrestore(&obj->active_list_lock, flags); - - spin_lock_irqsave(&obj->child_list_lock, flags); - if (!list_empty(&pt->child_list)) - list_del_init(&pt->child_list); - - spin_unlock_irqrestore(&obj->child_list_lock, flags); -} - void sync_timeline_signal(struct sync_timeline *obj) { unsigned long flags; LIST_HEAD(signaled_pts); - struct list_head *pos, *n; + struct sync_pt *pt, *next;
trace_sync_timeline(obj);
- spin_lock_irqsave(&obj->active_list_lock, flags); - - list_for_each_safe(pos, n, &obj->active_list_head) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, active_list); + spin_lock_irqsave(&obj->child_list_lock, flags);
- if (_sync_pt_has_signaled(pt)) { - list_del_init(pos); - list_add(&pt->signaled_list, &signaled_pts); - kref_get(&pt->fence->kref); - } + list_for_each_entry_safe(pt, next, &obj->active_list_head, + active_list) { + if (fence_is_signaled_locked(&pt->base)) + list_del(&pt->active_list); }
- spin_unlock_irqrestore(&obj->active_list_lock, flags); - - list_for_each_safe(pos, n, &signaled_pts) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, signaled_list); - - list_del_init(pos); - sync_fence_signal_pt(pt); - kref_put(&pt->fence->kref, sync_fence_free); - } + spin_unlock_irqrestore(&obj->child_list_lock, flags); } EXPORT_SYMBOL(sync_timeline_signal);
-struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size) +struct sync_pt *sync_pt_create(struct sync_timeline *obj, int size) { + unsigned long flags; struct sync_pt *pt;
if (size < sizeof(struct sync_pt)) @@ -180,87 +133,28 @@ struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size) if (pt == NULL) return NULL;
+ spin_lock_irqsave(&obj->child_list_lock, flags); + sync_timeline_get(obj); + fence_init(&pt->base, &android_fence_ops, &obj->child_list_lock, + obj->context, ++obj->value); + list_add_tail(&pt->child_list, &obj->child_list_head); INIT_LIST_HEAD(&pt->active_list); - kref_get(&parent->kref); - sync_timeline_add_pt(parent, pt); - + spin_unlock_irqrestore(&obj->child_list_lock, flags); return pt; } EXPORT_SYMBOL(sync_pt_create);
void sync_pt_free(struct sync_pt *pt) { - if (pt->parent->ops->free_pt) - pt->parent->ops->free_pt(pt); - - sync_timeline_remove_pt(pt); - - kref_put(&pt->parent->kref, sync_timeline_free); - - kfree(pt); + fence_put(&pt->base); } EXPORT_SYMBOL(sync_pt_free);
-/* call with pt->parent->active_list_lock held */ -static int _sync_pt_has_signaled(struct sync_pt *pt) -{ - int old_status = pt->status; - - if (!pt->status) - pt->status = pt->parent->ops->has_signaled(pt); - - if (!pt->status && pt->parent->destroyed) - pt->status = -ENOENT; - - if (pt->status != old_status) - pt->timestamp = ktime_get(); - - return pt->status; -} - -static struct sync_pt *sync_pt_dup(struct sync_pt *pt) -{ - return pt->parent->ops->dup(pt); -} - -/* Adds a sync pt to the active queue. Called when added to a fence */ -static void sync_pt_activate(struct sync_pt *pt) -{ - struct sync_timeline *obj = pt->parent; - unsigned long flags; - int err; - - spin_lock_irqsave(&obj->active_list_lock, flags); - - err = _sync_pt_has_signaled(pt); - if (err != 0) - goto out; - - list_add_tail(&pt->active_list, &obj->active_list_head); - -out: - spin_unlock_irqrestore(&obj->active_list_lock, flags); -} - -static int sync_fence_release(struct inode *inode, struct file *file); -static unsigned int sync_fence_poll(struct file *file, poll_table *wait); -static long sync_fence_ioctl(struct file *file, unsigned int cmd, - unsigned long arg); - - -static const struct file_operations sync_fence_fops = { - .release = sync_fence_release, - .poll = sync_fence_poll, - .unlocked_ioctl = sync_fence_ioctl, - .compat_ioctl = sync_fence_ioctl, -}; - -static struct sync_fence *sync_fence_alloc(const char *name) +static struct sync_fence *sync_fence_alloc(int size, const char *name) { struct sync_fence *fence; - unsigned long flags;
- fence = kzalloc(sizeof(struct sync_fence), GFP_KERNEL); + fence = kzalloc(size, GFP_KERNEL); if (fence == NULL) return NULL;
@@ -272,16 +166,8 @@ static struct sync_fence *sync_fence_alloc(const char *name) kref_init(&fence->kref); strlcpy(fence->name, name, sizeof(fence->name));
- INIT_LIST_HEAD(&fence->pt_list_head); - INIT_LIST_HEAD(&fence->waiter_list_head); - spin_lock_init(&fence->waiter_list_lock); - init_waitqueue_head(&fence->wq);
- spin_lock_irqsave(&sync_fence_list_lock, flags); - list_add_tail(&fence->sync_fence_list, &sync_fence_list_head); - spin_unlock_irqrestore(&sync_fence_list_lock, flags); - return fence;
err: @@ -289,120 +175,42 @@ err: return NULL; }
-/* TODO: implement a create which takes more that one sync_pt */ -struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt) +static void fence_check_cb_func(struct fence *f, struct fence_cb *cb) { + struct sync_fence_cb *check; struct sync_fence *fence;
- if (pt->fence) - return NULL; - - fence = sync_fence_alloc(name); - if (fence == NULL) - return NULL; + check = container_of(cb, struct sync_fence_cb, cb); + fence = check->fence;
- pt->fence = fence; - list_add(&pt->pt_list, &fence->pt_list_head); - sync_pt_activate(pt); - - /* - * signal the fence in case pt was activated before - * sync_pt_activate(pt) was called - */ - sync_fence_signal_pt(pt); - - return fence; -} -EXPORT_SYMBOL(sync_fence_create); - -static int sync_fence_copy_pts(struct sync_fence *dst, struct sync_fence *src) -{ - struct list_head *pos; - - list_for_each(pos, &src->pt_list_head) { - struct sync_pt *orig_pt = - container_of(pos, struct sync_pt, pt_list); - struct sync_pt *new_pt = sync_pt_dup(orig_pt); - - if (new_pt == NULL) - return -ENOMEM; - - new_pt->fence = dst; - list_add(&new_pt->pt_list, &dst->pt_list_head); - } - - return 0; + if (atomic_dec_and_test(&fence->status)) + wake_up_all(&fence->wq); }
-static int sync_fence_merge_pts(struct sync_fence *dst, struct sync_fence *src) -{ - struct list_head *src_pos, *dst_pos, *n; - - list_for_each(src_pos, &src->pt_list_head) { - struct sync_pt *src_pt = - container_of(src_pos, struct sync_pt, pt_list); - bool collapsed = false; - - list_for_each_safe(dst_pos, n, &dst->pt_list_head) { - struct sync_pt *dst_pt = - container_of(dst_pos, struct sync_pt, pt_list); - /* collapse two sync_pts on the same timeline - * to a single sync_pt that will signal at - * the later of the two - */ - if (dst_pt->parent == src_pt->parent) { - if (dst_pt->parent->ops->compare(dst_pt, src_pt) - == -1) { - struct sync_pt *new_pt = - sync_pt_dup(src_pt); - if (new_pt == NULL) - return -ENOMEM; - - new_pt->fence = dst; - list_replace(&dst_pt->pt_list, - &new_pt->pt_list); - sync_pt_free(dst_pt); - } - collapsed = true; - break; - } - } - - if (!collapsed) { - struct sync_pt *new_pt = sync_pt_dup(src_pt); - - if (new_pt == NULL) - return -ENOMEM; - - new_pt->fence = dst; - list_add(&new_pt->pt_list, &dst->pt_list_head); - } - } - - return 0; -} - -static void sync_fence_detach_pts(struct sync_fence *fence) +/* TODO: implement a create which takes more that one sync_pt */ +struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt) { - struct list_head *pos, *n; + struct sync_fence *fence;
- list_for_each_safe(pos, n, &fence->pt_list_head) { - struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list); + fence = sync_fence_alloc(offsetof(struct sync_fence, cbs[1]), name); + if (fence == NULL) + return NULL;
- sync_timeline_remove_pt(pt); - } -} + fence->num_fences = 1; + atomic_set(&fence->status, 1);
-static void sync_fence_free_pts(struct sync_fence *fence) -{ - struct list_head *pos, *n; + fence_get(&pt->base); + fence->cbs[0].sync_pt = &pt->base; + fence->cbs[0].fence = fence; + if (fence_add_callback(&pt->base, &fence->cbs[0].cb, + fence_check_cb_func)) + atomic_dec(&fence->status);
- list_for_each_safe(pos, n, &fence->pt_list_head) { - struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list); + sync_fence_debug_add(fence);
- sync_pt_free(pt); - } + return fence; } +EXPORT_SYMBOL(sync_fence_create);
struct sync_fence *sync_fence_fdget(int fd) { @@ -434,197 +242,155 @@ void sync_fence_install(struct sync_fence *fence, int fd) } EXPORT_SYMBOL(sync_fence_install);
-static int sync_fence_get_status(struct sync_fence *fence) +static void sync_fence_add_pt(struct sync_fence *fence, + int *i, struct fence *pt) { - struct list_head *pos; - int status = 1; - - list_for_each(pos, &fence->pt_list_head) { - struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list); - int pt_status = pt->status; - - if (pt_status < 0) { - status = pt_status; - break; - } else if (status == 1) { - status = pt_status; - } - } + fence->cbs[*i].sync_pt = pt; + fence->cbs[*i].fence = fence;
- return status; + if (!fence_add_callback(pt, &fence->cbs[*i].cb, fence_check_cb_func)) { + fence_get(pt); + (*i)++; + } }
struct sync_fence *sync_fence_merge(const char *name, struct sync_fence *a, struct sync_fence *b) { + int num_fences = a->num_fences + b->num_fences; struct sync_fence *fence; - struct list_head *pos; - int err; + int i, i_a, i_b; + unsigned long size = offsetof(struct sync_fence, cbs[num_fences]);
- fence = sync_fence_alloc(name); + fence = sync_fence_alloc(size, name); if (fence == NULL) return NULL;
- err = sync_fence_copy_pts(fence, a); - if (err < 0) - goto err; + atomic_set(&fence->status, num_fences);
- err = sync_fence_merge_pts(fence, b); - if (err < 0) - goto err; + /* + * Assume sync_fence a and b are both ordered and have no + * duplicates with the same context. + * + * If a sync_fence can only be created with sync_fence_merge + * and sync_fence_create, this is a reasonable assumption. + */ + for (i = i_a = i_b = 0; i_a < a->num_fences && i_b < b->num_fences; ) { + struct fence *pt_a = a->cbs[i_a].sync_pt; + struct fence *pt_b = b->cbs[i_b].sync_pt; + + if (pt_a->context < pt_b->context) { + sync_fence_add_pt(fence, &i, pt_a); + + i_a++; + } else if (pt_a->context > pt_b->context) { + sync_fence_add_pt(fence, &i, pt_b);
- list_for_each(pos, &fence->pt_list_head) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, pt_list); - sync_pt_activate(pt); + i_b++; + } else { + if (pt_a->seqno - pt_b->seqno <= INT_MAX) + sync_fence_add_pt(fence, &i, pt_a); + else + sync_fence_add_pt(fence, &i, pt_b); + + i_a++; + i_b++; + } }
- /* - * signal the fence in case one of it's pts were activated before - * they were activated - */ - sync_fence_signal_pt(list_first_entry(&fence->pt_list_head, - struct sync_pt, - pt_list)); + for (; i_a < a->num_fences; i_a++) + sync_fence_add_pt(fence, &i, a->cbs[i_a].sync_pt);
+ for (; i_b < b->num_fences; i_b++) + sync_fence_add_pt(fence, &i, b->cbs[i_b].sync_pt); + + if (num_fences > i) + atomic_sub(num_fences - i, &fence->status); + fence->num_fences = i; + + sync_fence_debug_add(fence); return fence; -err: - sync_fence_free_pts(fence); - kfree(fence); - return NULL; } EXPORT_SYMBOL(sync_fence_merge);
-static void sync_fence_signal_pt(struct sync_pt *pt) +int sync_fence_wake_up_wq(wait_queue_t *curr, unsigned mode, + int wake_flags, void *key) { - LIST_HEAD(signaled_waiters); - struct sync_fence *fence = pt->fence; - struct list_head *pos; - struct list_head *n; - unsigned long flags; - int status; - - status = sync_fence_get_status(fence); - - spin_lock_irqsave(&fence->waiter_list_lock, flags); - /* - * this should protect against two threads racing on the signaled - * false -> true transition - */ - if (status && !fence->status) { - list_for_each_safe(pos, n, &fence->waiter_list_head) - list_move(pos, &signaled_waiters); - - fence->status = status; - } else { - status = 0; - } - spin_unlock_irqrestore(&fence->waiter_list_lock, flags); + struct sync_fence_waiter *wait;
- if (status) { - list_for_each_safe(pos, n, &signaled_waiters) { - struct sync_fence_waiter *waiter = - container_of(pos, struct sync_fence_waiter, - waiter_list); + wait = container_of(curr, struct sync_fence_waiter, work); + list_del_init(&wait->work.task_list);
- list_del(pos); - waiter->callback(fence, waiter); - } - wake_up(&fence->wq); - } + wait->callback(wait->work.private, wait); + return 1; }
int sync_fence_wait_async(struct sync_fence *fence, struct sync_fence_waiter *waiter) { + int err = atomic_read(&fence->status); unsigned long flags; - int err = 0;
- spin_lock_irqsave(&fence->waiter_list_lock, flags); + if (err < 0) + return err;
- if (fence->status) { - err = fence->status; - goto out; - } + if (!err) + return 1;
- list_add_tail(&waiter->waiter_list, &fence->waiter_list_head); -out: - spin_unlock_irqrestore(&fence->waiter_list_lock, flags); + init_waitqueue_func_entry(&waiter->work, sync_fence_wake_up_wq); + waiter->work.private = fence;
- return err; + spin_lock_irqsave(&fence->wq.lock, flags); + err = atomic_read(&fence->status); + if (err > 0) + __add_wait_queue_tail(&fence->wq, &waiter->work); + spin_unlock_irqrestore(&fence->wq.lock, flags); + + if (err < 0) + return err; + + return !err; } EXPORT_SYMBOL(sync_fence_wait_async);
int sync_fence_cancel_async(struct sync_fence *fence, struct sync_fence_waiter *waiter) { - struct list_head *pos; - struct list_head *n; unsigned long flags; - int ret = -ENOENT; + int ret = 0;
- spin_lock_irqsave(&fence->waiter_list_lock, flags); - /* - * Make sure waiter is still in waiter_list because it is possible for - * the waiter to be removed from the list while the callback is still - * pending. - */ - list_for_each_safe(pos, n, &fence->waiter_list_head) { - struct sync_fence_waiter *list_waiter = - container_of(pos, struct sync_fence_waiter, - waiter_list); - if (list_waiter == waiter) { - list_del(pos); - ret = 0; - break; - } - } - spin_unlock_irqrestore(&fence->waiter_list_lock, flags); + spin_lock_irqsave(&fence->wq.lock, flags); + if (!list_empty(&waiter->work.task_list)) + list_del_init(&waiter->work.task_list); + else + ret = -ENOENT; + spin_unlock_irqrestore(&fence->wq.lock, flags); return ret; } EXPORT_SYMBOL(sync_fence_cancel_async);
-static bool sync_fence_check(struct sync_fence *fence) -{ - /* - * Make sure that reads to fence->status are ordered with the - * wait queue event triggering - */ - smp_rmb(); - return fence->status != 0; -} - int sync_fence_wait(struct sync_fence *fence, long timeout) { - int err = 0; - struct sync_pt *pt; - - trace_sync_wait(fence, 1); - list_for_each_entry(pt, &fence->pt_list_head, pt_list) - trace_sync_pt(pt); + long ret; + int i;
- if (timeout > 0) { + if (timeout < 0) + timeout = MAX_SCHEDULE_TIMEOUT; + else timeout = msecs_to_jiffies(timeout); - err = wait_event_interruptible_timeout(fence->wq, - sync_fence_check(fence), - timeout); - } else if (timeout < 0) { - err = wait_event_interruptible(fence->wq, - sync_fence_check(fence)); - } - trace_sync_wait(fence, 0);
- if (err < 0) - return err; - - if (fence->status < 0) { - pr_info("fence error %d on [%p]\n", fence->status, fence); - sync_dump(); - return fence->status; - } + trace_sync_wait(fence, 1); + for (i = 0; i < fence->num_fences; ++i) + trace_sync_pt(fence->cbs[i].sync_pt); + ret = wait_event_interruptible_timeout(fence->wq, + atomic_read(&fence->status) <= 0, + timeout); + trace_sync_wait(fence, 0);
- if (fence->status == 0) { - if (timeout > 0) { + if (ret < 0) + return ret; + else if (ret == 0) { + if (timeout) { pr_info("fence timeout on [%p] after %dms\n", fence, jiffies_to_msecs(timeout)); sync_dump(); @@ -632,15 +398,136 @@ int sync_fence_wait(struct sync_fence *fence, long timeout) return -ETIME; }
- return 0; + ret = atomic_read(&fence->status); + if (ret) { + pr_info("fence error %ld on [%p]\n", ret, fence); + sync_dump(); + } + return ret; } EXPORT_SYMBOL(sync_fence_wait);
+static const char *android_fence_get_driver_name(struct fence *fence) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + return parent->ops->driver_name; +} + +static const char *android_fence_get_timeline_name(struct fence *fence) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + return parent->name; +} + +static void android_fence_release(struct fence *fence) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + unsigned long flags; + + spin_lock_irqsave(fence->lock, flags); + list_del(&pt->child_list); + if (WARN_ON_ONCE(!list_empty(&pt->active_list))) + list_del(&pt->active_list); + spin_unlock_irqrestore(fence->lock, flags); + + if (parent->ops->free_pt) + parent->ops->free_pt(pt); + + sync_timeline_put(parent); + fence_free(&pt->base); +} + +static bool android_fence_signaled(struct fence *fence) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + int ret; + + ret = parent->ops->has_signaled(pt); + if (ret < 0) + fence->status = ret; + return ret; +} + +static bool android_fence_enable_signaling(struct fence *fence) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + if (android_fence_signaled(fence)) + return false; + + list_add_tail(&pt->active_list, &parent->active_list_head); + return true; +} + +static int android_fence_fill_driver_data(struct fence *fence, + void *data, int size) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + if (!parent->ops->fill_driver_data) + return 0; + return parent->ops->fill_driver_data(pt, data, size); +} + +static void android_fence_value_str(struct fence *fence, + char *str, int size) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + if (!parent->ops->pt_value_str) { + if (size) + *str = 0; + return; + } + parent->ops->pt_value_str(pt, str, size); +} + +static void android_fence_timeline_value_str(struct fence *fence, + char *str, int size) +{ + struct sync_pt *pt = container_of(fence, struct sync_pt, base); + struct sync_timeline *parent = sync_pt_parent(pt); + + if (!parent->ops->timeline_value_str) { + if (size) + *str = 0; + return; + } + parent->ops->timeline_value_str(parent, str, size); +} + +static const struct fence_ops android_fence_ops = { + .get_driver_name = android_fence_get_driver_name, + .get_timeline_name = android_fence_get_timeline_name, + .enable_signaling = android_fence_enable_signaling, + .signaled = android_fence_signaled, + .wait = fence_default_wait, + .release = android_fence_release, + .fill_driver_data = android_fence_fill_driver_data, + .fence_value_str = android_fence_value_str, + .timeline_value_str = android_fence_timeline_value_str, +}; + static void sync_fence_free(struct kref *kref) { struct sync_fence *fence = container_of(kref, struct sync_fence, kref); + int i, status = atomic_read(&fence->status);
- sync_fence_free_pts(fence); + for (i = 0; i < fence->num_fences; ++i) { + if (status) + fence_remove_callback(fence->cbs[i].sync_pt, + &fence->cbs[i].cb); + fence_put(fence->cbs[i].sync_pt); + }
kfree(fence); } @@ -648,44 +535,25 @@ static void sync_fence_free(struct kref *kref) static int sync_fence_release(struct inode *inode, struct file *file) { struct sync_fence *fence = file->private_data; - unsigned long flags; - - /* - * We need to remove all ways to access this fence before droping - * our ref. - * - * start with its membership in the global fence list - */ - spin_lock_irqsave(&sync_fence_list_lock, flags); - list_del(&fence->sync_fence_list); - spin_unlock_irqrestore(&sync_fence_list_lock, flags);
- /* - * remove its pts from their parents so that sync_timeline_signal() - * can't reference the fence. - */ - sync_fence_detach_pts(fence); + sync_fence_debug_remove(fence);
kref_put(&fence->kref, sync_fence_free); - return 0; }
static unsigned int sync_fence_poll(struct file *file, poll_table *wait) { struct sync_fence *fence = file->private_data; + int status;
poll_wait(file, &fence->wq, wait);
- /* - * Make sure that reads to fence->status are ordered with the - * wait queue event triggering - */ - smp_rmb(); + status = atomic_read(&fence->status);
- if (fence->status == 1) + if (!status) return POLLIN; - else if (fence->status < 0) + else if (status < 0) return POLLERR; else return 0; @@ -750,7 +618,7 @@ err_put_fd: return err; }
-static int sync_fill_pt_info(struct sync_pt *pt, void *data, int size) +static int sync_fill_pt_info(struct fence *fence, void *data, int size) { struct sync_pt_info *info = data; int ret; @@ -760,20 +628,24 @@ static int sync_fill_pt_info(struct sync_pt *pt, void *data, int size)
info->len = sizeof(struct sync_pt_info);
- if (pt->parent->ops->fill_driver_data) { - ret = pt->parent->ops->fill_driver_data(pt, info->driver_data, - size - sizeof(*info)); + if (fence->ops->fill_driver_data) { + ret = fence->ops->fill_driver_data(fence, info->driver_data, + size - sizeof(*info)); if (ret < 0) return ret;
info->len += ret; }
- strlcpy(info->obj_name, pt->parent->name, sizeof(info->obj_name)); - strlcpy(info->driver_name, pt->parent->ops->driver_name, + strlcpy(info->obj_name, fence->ops->get_timeline_name(fence), + sizeof(info->obj_name)); + strlcpy(info->driver_name, fence->ops->get_driver_name(fence), sizeof(info->driver_name)); - info->status = pt->status; - info->timestamp_ns = ktime_to_ns(pt->timestamp); + if (fence_is_signaled(fence)) + info->status = fence->status >= 0 ? 1 : fence->status; + else + info->status = 0; + info->timestamp_ns = ktime_to_ns(fence->timestamp);
return info->len; } @@ -782,10 +654,9 @@ static long sync_fence_ioctl_fence_info(struct sync_fence *fence, unsigned long arg) { struct sync_fence_info_data *data; - struct list_head *pos; __u32 size; __u32 len = 0; - int ret; + int ret, i;
if (copy_from_user(&size, (void __user *)arg, sizeof(size))) return -EFAULT; @@ -801,12 +672,14 @@ static long sync_fence_ioctl_fence_info(struct sync_fence *fence, return -ENOMEM;
strlcpy(data->name, fence->name, sizeof(data->name)); - data->status = fence->status; + data->status = atomic_read(&fence->status); + if (data->status >= 0) + data->status = !data->status; + len = sizeof(struct sync_fence_info_data);
- list_for_each(pos, &fence->pt_list_head) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, pt_list); + for (i = 0; i < fence->num_fences; ++i) { + struct fence *pt = fence->cbs[i].sync_pt;
ret = sync_fill_pt_info(pt, (u8 *)data + len, size - len);
@@ -833,7 +706,6 @@ static long sync_fence_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { struct sync_fence *fence = file->private_data; - switch (cmd) { case SYNC_IOC_WAIT: return sync_fence_ioctl_wait(fence, arg); @@ -849,181 +721,10 @@ static long sync_fence_ioctl(struct file *file, unsigned int cmd, } }
-#ifdef CONFIG_DEBUG_FS -static const char *sync_status_str(int status) -{ - if (status > 0) - return "signaled"; - else if (status == 0) - return "active"; - else - return "error"; -} - -static void sync_print_pt(struct seq_file *s, struct sync_pt *pt, bool fence) -{ - int status = pt->status; - - seq_printf(s, " %s%spt %s", - fence ? pt->parent->name : "", - fence ? "_" : "", - sync_status_str(status)); - if (pt->status) { - struct timeval tv = ktime_to_timeval(pt->timestamp); - - seq_printf(s, "@%ld.%06ld", tv.tv_sec, tv.tv_usec); - } - - if (pt->parent->ops->timeline_value_str && - pt->parent->ops->pt_value_str) { - char value[64]; - - pt->parent->ops->pt_value_str(pt, value, sizeof(value)); - seq_printf(s, ": %s", value); - if (fence) { - pt->parent->ops->timeline_value_str(pt->parent, value, - sizeof(value)); - seq_printf(s, " / %s", value); - } - } else if (pt->parent->ops->print_pt) { - seq_puts(s, ": "); - pt->parent->ops->print_pt(s, pt); - } - - seq_puts(s, "\n"); -} - -static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj) -{ - struct list_head *pos; - unsigned long flags; - - seq_printf(s, "%s %s", obj->name, obj->ops->driver_name); - - if (obj->ops->timeline_value_str) { - char value[64]; - - obj->ops->timeline_value_str(obj, value, sizeof(value)); - seq_printf(s, ": %s", value); - } else if (obj->ops->print_obj) { - seq_puts(s, ": "); - obj->ops->print_obj(s, obj); - } - - seq_puts(s, "\n"); - - spin_lock_irqsave(&obj->child_list_lock, flags); - list_for_each(pos, &obj->child_list_head) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, child_list); - sync_print_pt(s, pt, false); - } - spin_unlock_irqrestore(&obj->child_list_lock, flags); -} - -static void sync_print_fence(struct seq_file *s, struct sync_fence *fence) -{ - struct list_head *pos; - unsigned long flags; - - seq_printf(s, "[%p] %s: %s\n", fence, fence->name, - sync_status_str(fence->status)); - - list_for_each(pos, &fence->pt_list_head) { - struct sync_pt *pt = - container_of(pos, struct sync_pt, pt_list); - sync_print_pt(s, pt, true); - } - - spin_lock_irqsave(&fence->waiter_list_lock, flags); - list_for_each(pos, &fence->waiter_list_head) { - struct sync_fence_waiter *waiter = - container_of(pos, struct sync_fence_waiter, - waiter_list); - - seq_printf(s, "waiter %pF\n", waiter->callback); - } - spin_unlock_irqrestore(&fence->waiter_list_lock, flags); -} - -static int sync_debugfs_show(struct seq_file *s, void *unused) -{ - unsigned long flags; - struct list_head *pos; - - seq_puts(s, "objs:\n--------------\n"); - - spin_lock_irqsave(&sync_timeline_list_lock, flags); - list_for_each(pos, &sync_timeline_list_head) { - struct sync_timeline *obj = - container_of(pos, struct sync_timeline, - sync_timeline_list); - - sync_print_obj(s, obj); - seq_puts(s, "\n"); - } - spin_unlock_irqrestore(&sync_timeline_list_lock, flags); - - seq_puts(s, "fences:\n--------------\n"); - - spin_lock_irqsave(&sync_fence_list_lock, flags); - list_for_each(pos, &sync_fence_list_head) { - struct sync_fence *fence = - container_of(pos, struct sync_fence, sync_fence_list); - - sync_print_fence(s, fence); - seq_puts(s, "\n"); - } - spin_unlock_irqrestore(&sync_fence_list_lock, flags); - return 0; -} - -static int sync_debugfs_open(struct inode *inode, struct file *file) -{ - return single_open(file, sync_debugfs_show, inode->i_private); -} - -static const struct file_operations sync_debugfs_fops = { - .open = sync_debugfs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, +static 
const struct file_operations sync_fence_fops = { + .release = sync_fence_release, + .poll = sync_fence_poll, + .unlocked_ioctl = sync_fence_ioctl, + .compat_ioctl = sync_fence_ioctl, };
-static __init int sync_debugfs_init(void) -{ - debugfs_create_file("sync", S_IRUGO, NULL, NULL, &sync_debugfs_fops); - return 0; -} -late_initcall(sync_debugfs_init); - -#define DUMP_CHUNK 256 -static char sync_dump_buf[64 * 1024]; -static void sync_dump(void) -{ - struct seq_file s = { - .buf = sync_dump_buf, - .size = sizeof(sync_dump_buf) - 1, - }; - int i; - - sync_debugfs_show(&s, NULL); - - for (i = 0; i < s.count; i += DUMP_CHUNK) { - if ((s.count - i) > DUMP_CHUNK) { - char c = s.buf[i + DUMP_CHUNK]; - - s.buf[i + DUMP_CHUNK] = 0; - pr_cont("%s", s.buf + i); - s.buf[i + DUMP_CHUNK] = c; - } else { - s.buf[s.count] = 0; - pr_cont("%s", s.buf + i); - } - } -} -#else -static void sync_dump(void) -{ -} -#endif diff --git a/drivers/staging/android/sync.h b/drivers/staging/android/sync.h index eaf57cccf626..66b0f431f63e 100644 --- a/drivers/staging/android/sync.h +++ b/drivers/staging/android/sync.h @@ -19,6 +19,7 @@ #include <linux/list.h> #include <linux/spinlock.h> #include <linux/wait.h> +#include <linux/fence.h>
#include "uapi/sync.h"
@@ -40,8 +41,6 @@ struct sync_fence; * -1 if a will signal before b * @free_pt: called before sync_pt is freed * @release_obj: called before sync_timeline is freed - * @print_obj: deprecated - * @print_pt: deprecated * @fill_driver_data: write implementation specific driver data to data. * should return an error if there is not enough room * as specified by size. This information is returned @@ -67,13 +66,6 @@ struct sync_timeline_ops { /* optional */ void (*release_obj)(struct sync_timeline *sync_timeline);
- /* deprecated */ - void (*print_obj)(struct seq_file *s, - struct sync_timeline *sync_timeline); - - /* deprecated */ - void (*print_pt)(struct seq_file *s, struct sync_pt *sync_pt); - /* optional */ int (*fill_driver_data)(struct sync_pt *syncpt, void *data, int size);
@@ -104,19 +96,21 @@ struct sync_timeline {
/* protected by child_list_lock */ bool destroyed; + int context, value;
struct list_head child_list_head; spinlock_t child_list_lock;
struct list_head active_list_head; - spinlock_t active_list_lock;
+#ifdef CONFIG_DEBUG_FS struct list_head sync_timeline_list; +#endif };
/** * struct sync_pt - sync point - * @parent: sync_timeline to which this sync_pt belongs + * @fence: base fence class * @child_list: membership in sync_timeline.child_list_head * @active_list: membership in sync_timeline.active_list_head * @signaled_list: membership in temporary signaled_list on stack @@ -127,19 +121,22 @@ struct sync_timeline { * signaled or error. */ struct sync_pt { - struct sync_timeline *parent; - struct list_head child_list; + struct fence base;
+ struct list_head child_list; struct list_head active_list; - struct list_head signaled_list; - - struct sync_fence *fence; - struct list_head pt_list; +};
- /* protected by parent->active_list_lock */ - int status; +static inline struct sync_timeline *sync_pt_parent(struct sync_pt *pt) +{ + return container_of(pt->base.lock, struct sync_timeline, + child_list_lock); +}
- ktime_t timestamp; +struct sync_fence_cb { + struct fence_cb cb; + struct fence *sync_pt; + struct sync_fence *fence; };
/** @@ -149,9 +146,7 @@ struct sync_pt { * @name: name of sync_fence. Useful for debugging * @pt_list_head: list of sync_pts in the fence. immutable once fence * is created - * @waiter_list_head: list of asynchronous waiters on this fence - * @waiter_list_lock: lock protecting @waiter_list_head and @status - * @status: 1: signaled, 0:active, <0: error + * @status: 0: signaled, >0:active, <0: error * * @wq: wait queue for fence signaling * @sync_fence_list: membership in global fence list @@ -160,17 +155,15 @@ struct sync_fence { struct file *file; struct kref kref; char name[32]; - - /* this list is immutable once the fence is created */ - struct list_head pt_list_head; - - struct list_head waiter_list_head; - spinlock_t waiter_list_lock; /* also protects status */ - int status; +#ifdef CONFIG_DEBUG_FS + struct list_head sync_fence_list; +#endif + int num_fences;
wait_queue_head_t wq; + atomic_t status;
- struct list_head sync_fence_list; + struct sync_fence_cb cbs[]; };
struct sync_fence_waiter; @@ -184,14 +177,14 @@ typedef void (*sync_callback_t)(struct sync_fence *fence, * @callback_data: pointer to pass to @callback */ struct sync_fence_waiter { - struct list_head waiter_list; - - sync_callback_t callback; + wait_queue_t work; + sync_callback_t callback; };
static inline void sync_fence_waiter_init(struct sync_fence_waiter *waiter, sync_callback_t callback) { + INIT_LIST_HEAD(&waiter->work.task_list); waiter->callback = callback; }
@@ -341,4 +334,22 @@ int sync_fence_cancel_async(struct sync_fence *fence, */ int sync_fence_wait(struct sync_fence *fence, long timeout);
+#ifdef CONFIG_DEBUG_FS + +extern void sync_timeline_debug_add(struct sync_timeline *obj); +extern void sync_timeline_debug_remove(struct sync_timeline *obj); +extern void sync_fence_debug_add(struct sync_fence *fence); +extern void sync_fence_debug_remove(struct sync_fence *fence); +extern void sync_dump(void); + +#else +# define sync_timeline_debug_add(obj) +# define sync_timeline_debug_remove(obj) +# define sync_fence_debug_add(fence) +# define sync_fence_debug_remove(fence) +# define sync_dump() +#endif +int sync_fence_wake_up_wq(wait_queue_t *curr, unsigned mode, + int wake_flags, void *key); + #endif /* _LINUX_SYNC_H */ diff --git a/drivers/staging/android/sync_debug.c b/drivers/staging/android/sync_debug.c new file mode 100644 index 000000000000..a9f7fd506da3 --- /dev/null +++ b/drivers/staging/android/sync_debug.c @@ -0,0 +1,247 @@ +/* + * drivers/base/sync.c + * + * Copyright (C) 2012 Google, Inc. + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ + +#include <linux/debugfs.h> +#include <linux/export.h> +#include <linux/file.h> +#include <linux/fs.h> +#include <linux/kernel.h> +#include <linux/poll.h> +#include <linux/sched.h> +#include <linux/seq_file.h> +#include <linux/slab.h> +#include <linux/uaccess.h> +#include <linux/anon_inodes.h> +#include "sync.h" + +#ifdef CONFIG_DEBUG_FS + +static LIST_HEAD(sync_timeline_list_head); +static DEFINE_SPINLOCK(sync_timeline_list_lock); +static LIST_HEAD(sync_fence_list_head); +static DEFINE_SPINLOCK(sync_fence_list_lock); + +void sync_timeline_debug_add(struct sync_timeline *obj) +{ + unsigned long flags; + + spin_lock_irqsave(&sync_timeline_list_lock, flags); + list_add_tail(&obj->sync_timeline_list, &sync_timeline_list_head); + spin_unlock_irqrestore(&sync_timeline_list_lock, flags); +} + +void sync_timeline_debug_remove(struct sync_timeline *obj) +{ + unsigned long flags; + + spin_lock_irqsave(&sync_timeline_list_lock, flags); + list_del(&obj->sync_timeline_list); + spin_unlock_irqrestore(&sync_timeline_list_lock, flags); +} + +void sync_fence_debug_add(struct sync_fence *fence) +{ + unsigned long flags; + + spin_lock_irqsave(&sync_fence_list_lock, flags); + list_add_tail(&fence->sync_fence_list, &sync_fence_list_head); + spin_unlock_irqrestore(&sync_fence_list_lock, flags); +} + +void sync_fence_debug_remove(struct sync_fence *fence) +{ + unsigned long flags; + + spin_lock_irqsave(&sync_fence_list_lock, flags); + list_del(&fence->sync_fence_list); + spin_unlock_irqrestore(&sync_fence_list_lock, flags); +} + +static const char *sync_status_str(int status) +{ + if (status == 0) + return "signaled"; + else if (status > 0) + return "active"; + else + return "error"; +} + +static void sync_print_pt(struct seq_file *s, struct sync_pt *pt, bool fence) +{ + int status = 1; + struct sync_timeline *parent = sync_pt_parent(pt); + + if (fence_is_signaled_locked(&pt->base)) + status = pt->base.status; + + seq_printf(s, " %s%spt %s", + fence ? parent->name : "", + fence ? 
"_" : "", + sync_status_str(status)); + + if (status <= 0) { + struct timeval tv = ktime_to_timeval(pt->base.timestamp); + seq_printf(s, "@%ld.%06ld", tv.tv_sec, tv.tv_usec); + } + + if (parent->ops->timeline_value_str && + parent->ops->pt_value_str) { + char value[64]; + parent->ops->pt_value_str(pt, value, sizeof(value)); + seq_printf(s, ": %s", value); + if (fence) { + parent->ops->timeline_value_str(parent, value, + sizeof(value)); + seq_printf(s, " / %s", value); + } + } + + seq_puts(s, "\n"); +} + +static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj) +{ + struct list_head *pos; + unsigned long flags; + + seq_printf(s, "%s %s", obj->name, obj->ops->driver_name); + + if (obj->ops->timeline_value_str) { + char value[64]; + obj->ops->timeline_value_str(obj, value, sizeof(value)); + seq_printf(s, ": %s", value); + } + + seq_puts(s, "\n"); + + spin_lock_irqsave(&obj->child_list_lock, flags); + list_for_each(pos, &obj->child_list_head) { + struct sync_pt *pt = + container_of(pos, struct sync_pt, child_list); + sync_print_pt(s, pt, false); + } + spin_unlock_irqrestore(&obj->child_list_lock, flags); +} + +static void sync_print_fence(struct seq_file *s, struct sync_fence *fence) +{ + wait_queue_t *pos; + unsigned long flags; + int i; + + seq_printf(s, "[%p] %s: %s\n", fence, fence->name, + sync_status_str(atomic_read(&fence->status))); + + for (i = 0; i < fence->num_fences; ++i) { + struct sync_pt *pt = + container_of(fence->cbs[i].sync_pt, + struct sync_pt, base); + + sync_print_pt(s, pt, true); + } + + spin_lock_irqsave(&fence->wq.lock, flags); + list_for_each_entry(pos, &fence->wq.task_list, task_list) { + struct sync_fence_waiter *waiter; + + if (pos->func != &sync_fence_wake_up_wq) + continue; + + waiter = container_of(pos, struct sync_fence_waiter, work); + + seq_printf(s, "waiter %pF\n", waiter->callback); + } + spin_unlock_irqrestore(&fence->wq.lock, flags); +} + +static int sync_debugfs_show(struct seq_file *s, void *unused) +{ + unsigned long flags; + struct list_head *pos; + + seq_puts(s, "objs:\n--------------\n"); + + spin_lock_irqsave(&sync_timeline_list_lock, flags); + list_for_each(pos, &sync_timeline_list_head) { + struct sync_timeline *obj = + container_of(pos, struct sync_timeline, + sync_timeline_list); + + sync_print_obj(s, obj); + seq_puts(s, "\n"); + } + spin_unlock_irqrestore(&sync_timeline_list_lock, flags); + + seq_puts(s, "fences:\n--------------\n"); + + spin_lock_irqsave(&sync_fence_list_lock, flags); + list_for_each(pos, &sync_fence_list_head) { + struct sync_fence *fence = + container_of(pos, struct sync_fence, sync_fence_list); + + sync_print_fence(s, fence); + seq_puts(s, "\n"); + } + spin_unlock_irqrestore(&sync_fence_list_lock, flags); + return 0; +} + +static int sync_debugfs_open(struct inode *inode, struct file *file) +{ + return single_open(file, sync_debugfs_show, inode->i_private); +} + +static const struct file_operations sync_debugfs_fops = { + .open = sync_debugfs_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + +static __init int sync_debugfs_init(void) +{ + debugfs_create_file("sync", S_IRUGO, NULL, NULL, &sync_debugfs_fops); + return 0; +} +late_initcall(sync_debugfs_init); + +#define DUMP_CHUNK 256 +static char sync_dump_buf[64 * 1024]; +void sync_dump(void) +{ + struct seq_file s = { + .buf = sync_dump_buf, + .size = sizeof(sync_dump_buf) - 1, + }; + int i; + + sync_debugfs_show(&s, NULL); + + for (i = 0; i < s.count; i += DUMP_CHUNK) { + if ((s.count - i) > DUMP_CHUNK) { + char c = 
s.buf[i + DUMP_CHUNK]; + s.buf[i + DUMP_CHUNK] = 0; + pr_cont("%s", s.buf + i); + s.buf[i + DUMP_CHUNK] = c; + } else { + s.buf[s.count] = 0; + pr_cont("%s", s.buf + i); + } + } +} + +#endif diff --git a/drivers/staging/android/trace/sync.h b/drivers/staging/android/trace/sync.h index 95462359ba57..77edb977a7bf 100644 --- a/drivers/staging/android/trace/sync.h +++ b/drivers/staging/android/trace/sync.h @@ -45,7 +45,7 @@ TRACE_EVENT(sync_wait,
TP_fast_assign( __assign_str(name, fence->name); - __entry->status = fence->status; + __entry->status = atomic_read(&fence->status); __entry->begin = begin; ),
@@ -54,19 +54,19 @@ TRACE_EVENT(sync_wait, );
TRACE_EVENT(sync_pt, - TP_PROTO(struct sync_pt *pt), + TP_PROTO(struct fence *pt),
TP_ARGS(pt),
TP_STRUCT__entry( - __string(timeline, pt->parent->name) + __string(timeline, pt->ops->get_timeline_name(pt)) __array(char, value, 32) ),
TP_fast_assign( - __assign_str(timeline, pt->parent->name); - if (pt->parent->ops->pt_value_str) { - pt->parent->ops->pt_value_str(pt, __entry->value, + __assign_str(timeline, pt->ops->get_timeline_name(pt)); + if (pt->ops->fence_value_str) { + pt->ops->fence_value_str(pt, __entry->value, sizeof(__entry->value)); } else { __entry->value[0] = '\0';
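For illustration, here is a minimal sketch of the converted sync API from a driver's point of view. my_timeline and my_timeline_create_pt() are hypothetical stand-ins for a driver's sync_timeline and its sync_pt constructor; they are not part of this series:

struct sync_pt *pt = my_timeline_create_pt(my_timeline);
struct sync_fence *fence;
int err;

if (!pt)
	return -ENOMEM;

fence = sync_fence_create("my-fence", pt);	/* fence takes ownership of pt */
if (!fence) {
	sync_pt_free(pt);
	return -ENOMEM;
}

/*
 * With this patch fence->status is an atomic count of unsignaled
 * fences: 0 means signaled, >0 active, <0 error.
 */
err = sync_fence_wait(fence, 1000);	/* timeout in ms, <0 means error */
sync_fence_put(fence);
return err;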
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
---
 include/linux/reservation.h | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index 813dae960ebd..f3f57460a205 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -6,7 +6,7 @@
  * Copyright (C) 2012 Texas Instruments
  *
  * Authors:
- * Rob Clark <rob.clark@linaro.org>
+ * Rob Clark <robdclark@gmail.com>
  * Maarten Lankhorst <maarten.lankhorst@canonical.com>
  * Thomas Hellstrom <thellstrom-at-vmware-dot-com>
  *
@@ -40,22 +40,40 @@
 #define _LINUX_RESERVATION_H

 #include <linux/ww_mutex.h>
+#include <linux/fence.h>
+#include <linux/slab.h>

 extern struct ww_class reservation_ww_class;

 struct reservation_object {
 	struct ww_mutex lock;
+
+	struct fence *fence_excl;
+	struct fence **fence_shared;
+	u32 fence_shared_count, fence_shared_max;
 };

 static inline void
 reservation_object_init(struct reservation_object *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);
+
+	obj->fence_shared_count = obj->fence_shared_max = 0;
+	obj->fence_shared = NULL;
+	obj->fence_excl = NULL;
 }

 static inline void
 reservation_object_fini(struct reservation_object *obj)
 {
+	int i;
+
+	if (obj->fence_excl)
+		fence_put(obj->fence_excl);
+	for (i = 0; i < obj->fence_shared_count; ++i)
+		fence_put(obj->fence_shared[i]);
+	kfree(obj->fence_shared);
+
 	ww_mutex_destroy(&obj->lock);
 }
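As a usage sketch (not part of the patch): a driver would typically embed the reservation_object in its buffer object and pair init with fini. my_buffer and its helpers below are hypothetical:

struct my_buffer {
	struct reservation_object resv;
	/* driver-private backing storage goes here */
};

static struct my_buffer *my_buffer_create(void)
{
	struct my_buffer *buf = kzalloc(sizeof(*buf), GFP_KERNEL);

	if (!buf)
		return NULL;
	reservation_object_init(&buf->resv);
	return buf;
}

static void my_buffer_destroy(struct my_buffer *buf)
{
	/* drops the references on the exclusive and shared fences, if any */
	reservation_object_fini(&buf->resv);
	kfree(buf);
}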
Thanks to Fengguang Wu for spotting a missing static cast.
v2:
- Kill unused variable need_shared.
v3:
- Clarify the BUG() in dma_buf_release some more. (Rob Clark)
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/dma-buf/dma-buf.c | 108 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/dma-buf.h   |  12 +++++
 2 files changed, 120 insertions(+)
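A userspace sketch of the intended poll() semantics, before the diff. This assumes a process that already holds a dma-buf fd; wait_dmabuf_idle() is a hypothetical helper. POLLOUT waits for both the shared and the exclusive fences, POLLIN only for the exclusive fence:

#include <poll.h>

/* returns >0 when the buffer is idle, 0 on timeout, <0 on error */
static int wait_dmabuf_idle(int dmabuf_fd, int timeout_ms)
{
	struct pollfd pfd = {
		.fd = dmabuf_fd,
		.events = POLLOUT,	/* wait until it is safe to write */
	};

	return poll(&pfd, 1, timeout_ms);
}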
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index cd40ca22911f..25e8c4165936 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -30,6 +30,7 @@ #include <linux/export.h> #include <linux/debugfs.h> #include <linux/seq_file.h> +#include <linux/poll.h> #include <linux/reservation.h>
static inline int is_dma_buf_file(struct file *); @@ -52,6 +53,16 @@ static int dma_buf_release(struct inode *inode, struct file *file)
BUG_ON(dmabuf->vmapping_counter);
+ /* + * Any fences that a dma-buf poll can wait on should be signaled + * before releasing dma-buf. This is the responsibility of each + * driver that uses the reservation objects. + * + * If you hit this BUG() it means someone dropped their ref to the + * dma-buf while still having pending operation to the buffer. + */ + BUG_ON(dmabuf->cb_shared.active || dmabuf->cb_excl.active); + dmabuf->ops->release(dmabuf);
mutex_lock(&db_list.lock); @@ -108,10 +119,103 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence) return base + offset; }
+static void dma_buf_poll_cb(struct fence *fence, struct fence_cb *cb) +{ + struct dma_buf_poll_cb_t *dcb = (struct dma_buf_poll_cb_t *)cb; + unsigned long flags; + + spin_lock_irqsave(&dcb->poll->lock, flags); + wake_up_locked_poll(dcb->poll, dcb->active); + dcb->active = 0; + spin_unlock_irqrestore(&dcb->poll->lock, flags); +} + +static unsigned int dma_buf_poll(struct file *file, poll_table *poll) +{ + struct dma_buf *dmabuf; + struct reservation_object *resv; + unsigned long events; + + dmabuf = file->private_data; + if (!dmabuf || !dmabuf->resv) + return POLLERR; + + resv = dmabuf->resv; + + poll_wait(file, &dmabuf->poll, poll); + + events = poll_requested_events(poll) & (POLLIN | POLLOUT); + if (!events) + return 0; + + ww_mutex_lock(&resv->lock, NULL); + + if (resv->fence_excl && (!(events & POLLOUT) || + resv->fence_shared_count == 0)) { + struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_excl; + unsigned long pevents = POLLIN; + + if (resv->fence_shared_count == 0) + pevents |= POLLOUT; + + spin_lock_irq(&dmabuf->poll.lock); + if (dcb->active) { + dcb->active |= pevents; + events &= ~pevents; + } else + dcb->active = pevents; + spin_unlock_irq(&dmabuf->poll.lock); + + if (events & pevents) { + if (!fence_add_callback(resv->fence_excl, + &dcb->cb, dma_buf_poll_cb)) + events &= ~pevents; + else + /* + * No callback queued, wake up any additional + * waiters. + */ + dma_buf_poll_cb(NULL, &dcb->cb); + } + } + + if ((events & POLLOUT) && resv->fence_shared_count > 0) { + struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_shared; + int i; + + /* Only queue a new callback if no event has fired yet */ + spin_lock_irq(&dmabuf->poll.lock); + if (dcb->active) + events &= ~POLLOUT; + else + dcb->active = POLLOUT; + spin_unlock_irq(&dmabuf->poll.lock); + + if (!(events & POLLOUT)) + goto out; + + for (i = 0; i < resv->fence_shared_count; ++i) + if (!fence_add_callback(resv->fence_shared[i], + &dcb->cb, dma_buf_poll_cb)) { + events &= ~POLLOUT; + break; + } + + /* No callback queued, wake up any additional waiters. */ + if (i == resv->fence_shared_count) + dma_buf_poll_cb(NULL, &dcb->cb); + } + +out: + ww_mutex_unlock(&resv->lock); + return events; +} + static const struct file_operations dma_buf_fops = { .release = dma_buf_release, .mmap = dma_buf_mmap_internal, .llseek = dma_buf_llseek, + .poll = dma_buf_poll, };
/* @@ -171,6 +275,10 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops, dmabuf->ops = ops; dmabuf->size = size; dmabuf->exp_name = exp_name; + init_waitqueue_head(&dmabuf->poll); + dmabuf->cb_excl.poll = dmabuf->cb_shared.poll = &dmabuf->poll; + dmabuf->cb_excl.active = dmabuf->cb_shared.active = 0; + if (!resv) { resv = (struct reservation_object *)&dmabuf[1]; reservation_object_init(resv); diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index fd7def2e0ae2..694e1fe1c4b4 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -30,6 +30,8 @@ #include <linux/list.h> #include <linux/dma-mapping.h> #include <linux/fs.h> +#include <linux/fence.h> +#include <linux/wait.h>
struct device; struct dma_buf; @@ -130,6 +132,16 @@ struct dma_buf { struct list_head list_node; void *priv; struct reservation_object *resv; + + /* poll support */ + wait_queue_head_t poll; + + struct dma_buf_poll_cb_t { + struct fence_cb cb; + wait_queue_head_t *poll; + + unsigned long active; + } cb_excl, cb_shared; };
/**
Move the list of shared fences to a struct, and return it in reservation_object_get_list(). Add reservation_object_get_excl() to get the exclusive fence.
Add reservation_object_reserve_shared(), which reserves space in the reservation_object for one more shared fence.
reservation_object_add_shared_fence() and reservation_object_add_excl_fence() are used to install a new fence in the reservation_object, completing a reservation.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Changes since v1:
- Add reservation_object_get_excl, reorder code a bit.
---
 Documentation/DocBook/device-drivers.tmpl |   1 
 drivers/dma-buf/dma-buf.c                 |  35 ++++---
 drivers/dma-buf/reservation.c             | 156 +++++++++++++++++++++++
 include/linux/reservation.h               |  56 +++++++--
 4 files changed, 229 insertions(+), 19 deletions(-)
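A sketch of the intended calling sequence from a driver, before the diff. This assumes the caller already holds resv->lock via ww_mutex_lock(); my_driver_attach_fence() is hypothetical:

static int my_driver_attach_fence(struct reservation_object *resv,
				  struct fence *fence, bool exclusive)
{
	int ret;

	if (exclusive) {
		reservation_object_add_excl_fence(resv, fence);
		return 0;
	}

	/* make sure a shared slot is available before publishing */
	ret = reservation_object_reserve_shared(resv);
	if (ret)
		return ret;

	reservation_object_add_shared_fence(resv, fence);
	return 0;
}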
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl index ed0ef00cd7bc..dd3f278faa8a 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -133,6 +133,7 @@ X!Edrivers/base/interface.c !Edrivers/dma-buf/seqno-fence.c !Iinclude/linux/fence.h !Iinclude/linux/seqno-fence.h +!Edrivers/dma-buf/reservation.c !Iinclude/linux/reservation.h !Edrivers/base/dma-coherent.c !Edrivers/base/dma-mapping.c diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 25e8c4165936..cb8379dfeed5 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -134,7 +134,10 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll) { struct dma_buf *dmabuf; struct reservation_object *resv; + struct reservation_object_list *fobj; + struct fence *fence_excl; unsigned long events; + unsigned shared_count;
dmabuf = file->private_data; if (!dmabuf || !dmabuf->resv) @@ -150,12 +153,18 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll)
ww_mutex_lock(&resv->lock, NULL);
- if (resv->fence_excl && (!(events & POLLOUT) || - resv->fence_shared_count == 0)) { + fobj = resv->fence; + if (!fobj) + goto out; + + shared_count = fobj->shared_count; + fence_excl = resv->fence_excl; + + if (fence_excl && (!(events & POLLOUT) || shared_count == 0)) { struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_excl; unsigned long pevents = POLLIN;
- if (resv->fence_shared_count == 0) + if (shared_count == 0) pevents |= POLLOUT;
spin_lock_irq(&dmabuf->poll.lock); @@ -167,19 +176,20 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll) spin_unlock_irq(&dmabuf->poll.lock);
if (events & pevents) { - if (!fence_add_callback(resv->fence_excl, - &dcb->cb, dma_buf_poll_cb)) + if (!fence_add_callback(fence_excl, &dcb->cb, + dma_buf_poll_cb)) { events &= ~pevents; - else + } else { /* * No callback queued, wake up any additional * waiters. */ dma_buf_poll_cb(NULL, &dcb->cb); + } } }
- if ((events & POLLOUT) && resv->fence_shared_count > 0) { + if ((events & POLLOUT) && shared_count > 0) { struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_shared; int i;
@@ -194,15 +204,18 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll) if (!(events & POLLOUT)) goto out;
- for (i = 0; i < resv->fence_shared_count; ++i) - if (!fence_add_callback(resv->fence_shared[i], - &dcb->cb, dma_buf_poll_cb)) { + for (i = 0; i < shared_count; ++i) { + struct fence *fence = fobj->shared[i]; + + if (!fence_add_callback(fence, &dcb->cb, + dma_buf_poll_cb)) { events &= ~POLLOUT; break; } + }
/* No callback queued, wake up any additional waiters. */ - if (i == resv->fence_shared_count) + if (i == shared_count) dma_buf_poll_cb(NULL, &dcb->cb); }
diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c index a73fbf3b8e56..e6166723a9ae 100644 --- a/drivers/dma-buf/reservation.c +++ b/drivers/dma-buf/reservation.c @@ -1,5 +1,5 @@ /* - * Copyright (C) 2012-2013 Canonical Ltd + * Copyright (C) 2012-2014 Canonical Ltd (Maarten Lankhorst) * * Based on bo.c which bears the following copyright notice, * but is dual licensed: @@ -37,3 +37,157 @@
DEFINE_WW_CLASS(reservation_ww_class); EXPORT_SYMBOL(reservation_ww_class); + +/* + * Reserve space to add a shared fence to a reservation_object, + * must be called with obj->lock held. + */ +int reservation_object_reserve_shared(struct reservation_object *obj) +{ + struct reservation_object_list *fobj, *old; + u32 max; + + old = reservation_object_get_list(obj); + + if (old && old->shared_max) { + if (old->shared_count < old->shared_max) { + /* perform an in-place update */ + kfree(obj->staged); + obj->staged = NULL; + return 0; + } else + max = old->shared_max * 2; + } else + max = 4; + + /* + * resize obj->staged or allocate if it doesn't exist, + * noop if already correct size + */ + fobj = krealloc(obj->staged, offsetof(typeof(*fobj), shared[max]), + GFP_KERNEL); + if (!fobj) + return -ENOMEM; + + obj->staged = fobj; + fobj->shared_max = max; + return 0; +} +EXPORT_SYMBOL(reservation_object_reserve_shared); + +static void +reservation_object_add_shared_inplace(struct reservation_object *obj, + struct reservation_object_list *fobj, + struct fence *fence) +{ + u32 i; + + for (i = 0; i < fobj->shared_count; ++i) { + if (fobj->shared[i]->context == fence->context) { + struct fence *old_fence = fobj->shared[i]; + + fence_get(fence); + + fobj->shared[i] = fence; + + fence_put(old_fence); + return; + } + } + + fence_get(fence); + fobj->shared[fobj->shared_count] = fence; + /* + * make the new fence visible before incrementing + * fobj->shared_count + */ + smp_wmb(); + fobj->shared_count++; +} + +static void +reservation_object_add_shared_replace(struct reservation_object *obj, + struct reservation_object_list *old, + struct reservation_object_list *fobj, + struct fence *fence) +{ + unsigned i; + + fence_get(fence); + + if (!old) { + fobj->shared[0] = fence; + fobj->shared_count = 1; + goto done; + } + + /* + * no need to bump fence refcounts, rcu_read access + * requires the use of kref_get_unless_zero, and the + * references from the old struct are carried over to + * the new. + */ + fobj->shared_count = old->shared_count; + + for (i = 0; i < old->shared_count; ++i) { + if (fence && old->shared[i]->context == fence->context) { + fence_put(old->shared[i]); + fobj->shared[i] = fence; + fence = NULL; + } else + fobj->shared[i] = old->shared[i]; + } + if (fence) + fobj->shared[fobj->shared_count++] = fence; + +done: + obj->fence = fobj; + kfree(old); +} + +/* + * Add a fence to a shared slot, obj->lock must be held, and + * reservation_object_reserve_shared_fence has been called. 
+ */ +void reservation_object_add_shared_fence(struct reservation_object *obj, + struct fence *fence) +{ + struct reservation_object_list *old, *fobj = obj->staged; + + old = reservation_object_get_list(obj); + obj->staged = NULL; + + if (!fobj) { + BUG_ON(old->shared_count == old->shared_max); + reservation_object_add_shared_inplace(obj, old, fence); + } else + reservation_object_add_shared_replace(obj, old, fobj, fence); +} +EXPORT_SYMBOL(reservation_object_add_shared_fence); + +void reservation_object_add_excl_fence(struct reservation_object *obj, + struct fence *fence) +{ + struct fence *old_fence = obj->fence_excl; + struct reservation_object_list *old; + u32 i = 0; + + old = reservation_object_get_list(obj); + if (old) { + i = old->shared_count; + old->shared_count = 0; + } + + if (fence) + fence_get(fence); + + obj->fence_excl = fence; + + /* inplace update, no shared fences */ + while (i--) + fence_put(old->shared[i]); + + if (old_fence) + fence_put(old_fence); +} +EXPORT_SYMBOL(reservation_object_add_excl_fence); diff --git a/include/linux/reservation.h b/include/linux/reservation.h index f3f57460a205..2affe67dea6e 100644 --- a/include/linux/reservation.h +++ b/include/linux/reservation.h @@ -45,36 +45,78 @@
extern struct ww_class reservation_ww_class;
+struct reservation_object_list { + u32 shared_count, shared_max; + struct fence *shared[]; +}; + struct reservation_object { struct ww_mutex lock;
struct fence *fence_excl; - struct fence **fence_shared; - u32 fence_shared_count, fence_shared_max; + struct reservation_object_list *fence; + struct reservation_object_list *staged; };
+#define reservation_object_assert_held(obj) \ + lockdep_assert_held(&(obj)->lock.base) + static inline void reservation_object_init(struct reservation_object *obj) { ww_mutex_init(&obj->lock, &reservation_ww_class);
- obj->fence_shared_count = obj->fence_shared_max = 0; - obj->fence_shared = NULL; obj->fence_excl = NULL; + obj->fence = NULL; + obj->staged = NULL; }
static inline void reservation_object_fini(struct reservation_object *obj) { int i; + struct reservation_object_list *fobj;
+ /* + * This object should be dead and all references must have + * been released to it. + */ if (obj->fence_excl) fence_put(obj->fence_excl); - for (i = 0; i < obj->fence_shared_count; ++i) - fence_put(obj->fence_shared[i]); - kfree(obj->fence_shared); + + fobj = obj->fence; + if (fobj) { + for (i = 0; i < fobj->shared_count; ++i) + fence_put(fobj->shared[i]); + + kfree(fobj); + } + kfree(obj->staged);
ww_mutex_destroy(&obj->lock); }
+static inline struct reservation_object_list * +reservation_object_get_list(struct reservation_object *obj) +{ + reservation_object_assert_held(obj); + + return obj->fence; +} + +static inline struct fence * +reservation_object_get_excl(struct reservation_object *obj) +{ + reservation_object_assert_held(obj); + + return obj->fence_excl; +} + +int reservation_object_reserve_shared(struct reservation_object *obj); +void reservation_object_add_shared_fence(struct reservation_object *obj, + struct fence *fence); + +void reservation_object_add_excl_fence(struct reservation_object *obj, + struct fence *fence); + #endif /* _LINUX_RESERVATION_H */
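With these helpers, inspecting the fences while holding the lock looks roughly like this. A sketch only; resv is assumed to be in scope, and after the rcu patch below the shared slots become __rcu and need rcu_dereference_protected():

struct reservation_object_list *list;
struct fence *excl;
unsigned i;

ww_mutex_lock(&resv->lock, NULL);

excl = reservation_object_get_excl(resv);
if (excl)
	pr_info("excl fence: context %u, seqno %u\n",
		excl->context, excl->seqno);

list = reservation_object_get_list(resv);
for (i = 0; list && i < list->shared_count; ++i)
	pr_info("shared[%u]: context %u\n",
		i, list->shared[i]->context);

ww_mutex_unlock(&resv->lock);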
This adds some extra functions for accessing the fences of a reservation_object under RCU.
reservation_object_get_fences_rcu() will obtain the list of shared and exclusive fences without taking the ww_mutex.

reservation_object_wait_timeout_rcu() will wait on all fences of the reservation_object without taking the ww_mutex.

reservation_object_test_signaled_rcu() will test if all fences of the reservation_object are signaled without taking the ww_mutex.
reservation_object_get_excl() and reservation_object_get_list() require the reservation object to be held; updates additionally require write_seqcount_begin/end. If only the exclusive fence is needed, rcu_dereference() followed by fence_get_rcu() can be used. If the shared fences are needed as well, it's recommended to use the supplied functions.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/dma-buf/dma-buf.c     |  47 ++++--
 drivers/dma-buf/fence.c       |   2 
 drivers/dma-buf/reservation.c | 336 ++++++++++++++++++++++++++++++++++++++---
 include/linux/fence.h         |  17 ++
 include/linux/reservation.h   |  52 ++++--
 5 files changed, 400 insertions(+), 54 deletions(-)
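For example, a driver that wants to wait for a buffer to go idle without taking the ww_mutex could do something like this sketch (resv is assumed to be in scope):

long ret;

/*
 * wait_all = true also waits for the shared (read) fences,
 * intr = true makes the wait interruptible
 */
ret = reservation_object_wait_timeout_rcu(resv, true, true,
					  msecs_to_jiffies(100));
if (ret < 0)
	return ret;	/* interrupted or fence error */
if (ret == 0)
	return -EBUSY;	/* timed out, buffer still busy */
/* ret > 0: all fences signaled */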
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index cb8379dfeed5..f3014c448e1e 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -137,7 +137,7 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll) struct reservation_object_list *fobj; struct fence *fence_excl; unsigned long events; - unsigned shared_count; + unsigned shared_count, seq;
dmabuf = file->private_data; if (!dmabuf || !dmabuf->resv) @@ -151,14 +151,20 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll) if (!events) return 0;
- ww_mutex_lock(&resv->lock, NULL); +retry: + seq = read_seqcount_begin(&resv->seq); + rcu_read_lock();
- fobj = resv->fence; - if (!fobj) - goto out; - - shared_count = fobj->shared_count; - fence_excl = resv->fence_excl; + fobj = rcu_dereference(resv->fence); + if (fobj) + shared_count = fobj->shared_count; + else + shared_count = 0; + fence_excl = rcu_dereference(resv->fence_excl); + if (read_seqcount_retry(&resv->seq, seq)) { + rcu_read_unlock(); + goto retry; + }
if (fence_excl && (!(events & POLLOUT) || shared_count == 0)) { struct dma_buf_poll_cb_t *dcb = &dmabuf->cb_excl; @@ -176,14 +182,20 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll) spin_unlock_irq(&dmabuf->poll.lock);
if (events & pevents) { - if (!fence_add_callback(fence_excl, &dcb->cb, + if (!fence_get_rcu(fence_excl)) { + /* force a recheck */ + events &= ~pevents; + dma_buf_poll_cb(NULL, &dcb->cb); + } else if (!fence_add_callback(fence_excl, &dcb->cb, dma_buf_poll_cb)) { events &= ~pevents; + fence_put(fence_excl); } else { /* * No callback queued, wake up any additional * waiters. */ + fence_put(fence_excl); dma_buf_poll_cb(NULL, &dcb->cb); } } @@ -205,13 +217,26 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll) goto out;
for (i = 0; i < shared_count; ++i) { - struct fence *fence = fobj->shared[i]; + struct fence *fence = rcu_dereference(fobj->shared[i]);
+ if (!fence_get_rcu(fence)) { + /* + * fence refcount dropped to zero, this means + * that fobj has been freed + * + * call dma_buf_poll_cb and force a recheck! + */ + events &= ~POLLOUT; + dma_buf_poll_cb(NULL, &dcb->cb); + break; + } if (!fence_add_callback(fence, &dcb->cb, dma_buf_poll_cb)) { + fence_put(fence); events &= ~POLLOUT; break; } + fence_put(fence); }
/* No callback queued, wake up any additional waiters. */ @@ -220,7 +245,7 @@ static unsigned int dma_buf_poll(struct file *file, poll_table *poll) }
out: - ww_mutex_unlock(&resv->lock); + rcu_read_unlock(); return events; }
diff --git a/drivers/dma-buf/fence.c b/drivers/dma-buf/fence.c index 948bf00d955e..4222cb2aa96a 100644 --- a/drivers/dma-buf/fence.c +++ b/drivers/dma-buf/fence.c @@ -184,7 +184,7 @@ EXPORT_SYMBOL(fence_release);
void fence_free(struct fence *fence) { - kfree(fence); + kfree_rcu(fence, rcu); } EXPORT_SYMBOL(fence_free);
diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c index e6166723a9ae..3c97c8fa8d02 100644 --- a/drivers/dma-buf/reservation.c +++ b/drivers/dma-buf/reservation.c @@ -38,6 +38,11 @@ DEFINE_WW_CLASS(reservation_ww_class); EXPORT_SYMBOL(reservation_ww_class);
+struct lock_class_key reservation_seqcount_class; +EXPORT_SYMBOL(reservation_seqcount_class); + +const char reservation_seqcount_string[] = "reservation_seqcount"; +EXPORT_SYMBOL(reservation_seqcount_string); /* * Reserve space to add a shared fence to a reservation_object, * must be called with obj->lock held. @@ -82,27 +87,37 @@ reservation_object_add_shared_inplace(struct reservation_object *obj, { u32 i;
+ fence_get(fence); + + preempt_disable(); + write_seqcount_begin(&obj->seq); + for (i = 0; i < fobj->shared_count; ++i) { - if (fobj->shared[i]->context == fence->context) { - struct fence *old_fence = fobj->shared[i]; + struct fence *old_fence;
- fence_get(fence); + old_fence = rcu_dereference_protected(fobj->shared[i], + reservation_object_held(obj));
- fobj->shared[i] = fence; + if (old_fence->context == fence->context) { + /* memory barrier is added by write_seqcount_begin */ + RCU_INIT_POINTER(fobj->shared[i], fence); + write_seqcount_end(&obj->seq); + preempt_enable();
fence_put(old_fence); return; } }
- fence_get(fence); - fobj->shared[fobj->shared_count] = fence; /* - * make the new fence visible before incrementing - * fobj->shared_count + * memory barrier is added by write_seqcount_begin, + * fobj->shared_count is protected by this lock too */ - smp_wmb(); + RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence); fobj->shared_count++; + + write_seqcount_end(&obj->seq); + preempt_enable(); }
static void @@ -112,11 +127,12 @@ reservation_object_add_shared_replace(struct reservation_object *obj, struct fence *fence) { unsigned i; + struct fence *old_fence = NULL;
fence_get(fence);
if (!old) { - fobj->shared[0] = fence; + RCU_INIT_POINTER(fobj->shared[0], fence); fobj->shared_count = 1; goto done; } @@ -130,19 +146,38 @@ reservation_object_add_shared_replace(struct reservation_object *obj, fobj->shared_count = old->shared_count;
for (i = 0; i < old->shared_count; ++i) { - if (fence && old->shared[i]->context == fence->context) { - fence_put(old->shared[i]); - fobj->shared[i] = fence; - fence = NULL; + struct fence *check; + + check = rcu_dereference_protected(old->shared[i], + reservation_object_held(obj)); + + if (!old_fence && check->context == fence->context) { + old_fence = check; + RCU_INIT_POINTER(fobj->shared[i], fence); } else - fobj->shared[i] = old->shared[i]; + RCU_INIT_POINTER(fobj->shared[i], check); + } + if (!old_fence) { + RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence); + fobj->shared_count++; } - if (fence) - fobj->shared[fobj->shared_count++] = fence;
done: - obj->fence = fobj; - kfree(old); + preempt_disable(); + write_seqcount_begin(&obj->seq); + /* + * RCU_INIT_POINTER can be used here, + * seqcount provides the necessary barriers + */ + RCU_INIT_POINTER(obj->fence, fobj); + write_seqcount_end(&obj->seq); + preempt_enable(); + + if (old) + kfree_rcu(old, rcu); + + if (old_fence) + fence_put(old_fence); }
/* @@ -158,7 +193,7 @@ void reservation_object_add_shared_fence(struct reservation_object *obj, obj->staged = NULL;
if (!fobj) { - BUG_ON(old->shared_count == old->shared_max); + BUG_ON(old->shared_count >= old->shared_max); reservation_object_add_shared_inplace(obj, old, fence); } else reservation_object_add_shared_replace(obj, old, fobj, fence); @@ -168,26 +203,275 @@ EXPORT_SYMBOL(reservation_object_add_shared_fence); void reservation_object_add_excl_fence(struct reservation_object *obj, struct fence *fence) { - struct fence *old_fence = obj->fence_excl; + struct fence *old_fence = reservation_object_get_excl(obj); struct reservation_object_list *old; u32 i = 0;
old = reservation_object_get_list(obj); - if (old) { + if (old) i = old->shared_count; - old->shared_count = 0; - }
if (fence) fence_get(fence);
- obj->fence_excl = fence; + preempt_disable(); + write_seqcount_begin(&obj->seq); + /* write_seqcount_begin provides the necessary memory barrier */ + RCU_INIT_POINTER(obj->fence_excl, fence); + if (old) + old->shared_count = 0; + write_seqcount_end(&obj->seq); + preempt_enable();
/* inplace update, no shared fences */ while (i--) - fence_put(old->shared[i]); + fence_put(rcu_dereference_protected(old->shared[i], + reservation_object_held(obj)));
if (old_fence) fence_put(old_fence); } EXPORT_SYMBOL(reservation_object_add_excl_fence); + +int reservation_object_get_fences_rcu(struct reservation_object *obj, + struct fence **pfence_excl, + unsigned *pshared_count, + struct fence ***pshared) +{ + unsigned shared_count = 0; + unsigned retry = 1; + struct fence **shared = NULL, *fence_excl = NULL; + int ret = 0; + + while (retry) { + struct reservation_object_list *fobj; + unsigned seq; + + seq = read_seqcount_begin(&obj->seq); + + rcu_read_lock(); + + fobj = rcu_dereference(obj->fence); + if (fobj) { + struct fence **nshared; + size_t sz = sizeof(*shared) * fobj->shared_max; + + nshared = krealloc(shared, sz, + GFP_NOWAIT | __GFP_NOWARN); + if (!nshared) { + rcu_read_unlock(); + nshared = krealloc(shared, sz, GFP_KERNEL); + if (nshared) { + shared = nshared; + continue; + } + + ret = -ENOMEM; + shared_count = 0; + break; + } + shared = nshared; + memcpy(shared, fobj->shared, sz); + shared_count = fobj->shared_count; + } else + shared_count = 0; + fence_excl = rcu_dereference(obj->fence_excl); + + retry = read_seqcount_retry(&obj->seq, seq); + if (retry) + goto unlock; + + if (!fence_excl || fence_get_rcu(fence_excl)) { + unsigned i; + + for (i = 0; i < shared_count; ++i) { + if (fence_get_rcu(shared[i])) + continue; + + /* uh oh, refcount failed, abort and retry */ + while (i--) + fence_put(shared[i]); + + if (fence_excl) { + fence_put(fence_excl); + fence_excl = NULL; + } + + retry = 1; + break; + } + } else + retry = 1; + +unlock: + rcu_read_unlock(); + } + *pshared_count = shared_count; + if (shared_count) + *pshared = shared; + else { + *pshared = NULL; + kfree(shared); + } + *pfence_excl = fence_excl; + + return ret; +} +EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu); + +long reservation_object_wait_timeout_rcu(struct reservation_object *obj, + bool wait_all, bool intr, + unsigned long timeout) +{ + struct fence *fence; + unsigned seq, shared_count, i = 0; + long ret = timeout; + +retry: + fence = NULL; + shared_count = 0; + seq = read_seqcount_begin(&obj->seq); + rcu_read_lock(); + + if (wait_all) { + struct reservation_object_list *fobj = rcu_dereference(obj->fence); + + if (fobj) + shared_count = fobj->shared_count; + + if (read_seqcount_retry(&obj->seq, seq)) + goto unlock_retry; + + for (i = 0; i < shared_count; ++i) { + struct fence *lfence = rcu_dereference(fobj->shared[i]); + + if (test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) + continue; + + if (!fence_get_rcu(lfence)) + goto unlock_retry; + + if (fence_is_signaled(lfence)) { + fence_put(lfence); + continue; + } + + fence = lfence; + break; + } + } + + if (!shared_count) { + struct fence *fence_excl = rcu_dereference(obj->fence_excl); + + if (read_seqcount_retry(&obj->seq, seq)) + goto unlock_retry; + + if (fence_excl && + !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence_excl->flags)) { + if (!fence_get_rcu(fence_excl)) + goto unlock_retry; + + if (fence_is_signaled(fence_excl)) + fence_put(fence_excl); + else + fence = fence_excl; + } + } + + rcu_read_unlock(); + if (fence) { + ret = fence_wait_timeout(fence, intr, ret); + fence_put(fence); + if (ret > 0 && wait_all && (i + 1 < shared_count)) + goto retry; + } + return ret; + +unlock_retry: + rcu_read_unlock(); + goto retry; +} +EXPORT_SYMBOL_GPL(reservation_object_wait_timeout_rcu); + + +static inline int +reservation_object_test_signaled_single(struct fence *passed_fence) +{ + struct fence *fence, *lfence = passed_fence; + int ret = 1; + + if (!test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) { + int ret; + + 
+		fence = fence_get_rcu(lfence);
+		if (!fence)
+			return -1;
+
+		ret = !!fence_is_signaled(fence);
+		fence_put(fence);
+	}
+	return ret;
+}
+
+bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
+					  bool test_all)
+{
+	unsigned seq, shared_count;
+	int ret = true;
+
+retry:
+	shared_count = 0;
+	seq = read_seqcount_begin(&obj->seq);
+	rcu_read_lock();
+
+	if (test_all) {
+		unsigned i;
+
+		struct reservation_object_list *fobj = rcu_dereference(obj->fence);
+
+		if (fobj)
+			shared_count = fobj->shared_count;
+
+		if (read_seqcount_retry(&obj->seq, seq))
+			goto unlock_retry;
+
+		for (i = 0; i < shared_count; ++i) {
+			struct fence *fence = rcu_dereference(fobj->shared[i]);
+
+			ret = reservation_object_test_signaled_single(fence);
+			if (ret < 0)
+				goto unlock_retry;
+			else if (!ret)
+				break;
+		}
+
+		/*
+		 * There could be a read_seqcount_retry here, but nothing cares
+		 * about whether it's the old or newer fence pointers that are
+		 * signaled. That race could still have happened after checking
+		 * read_seqcount_retry. If you care, use ww_mutex_lock.
+		 */
+	}
+
+	if (!shared_count) {
+		struct fence *fence_excl = rcu_dereference(obj->fence_excl);
+
+		if (read_seqcount_retry(&obj->seq, seq))
+			goto unlock_retry;
+
+		if (fence_excl) {
+			ret = reservation_object_test_signaled_single(fence_excl);
+			if (ret < 0)
+				goto unlock_retry;
+		}
+	}
+
+	rcu_read_unlock();
+	return ret;
+
+unlock_retry:
+	rcu_read_unlock();
+	goto retry;
+}
+EXPORT_SYMBOL_GPL(reservation_object_test_signaled_rcu);
diff --git a/include/linux/fence.h b/include/linux/fence.h
index b935cc650123..d174585b874b 100644
--- a/include/linux/fence.h
+++ b/include/linux/fence.h
@@ -28,6 +28,7 @@
 #include <linux/kref.h>
 #include <linux/sched.h>
 #include <linux/printk.h>
+#include <linux/rcupdate.h>

 struct fence;
 struct fence_ops;
@@ -37,6 +38,7 @@ struct fence_cb;
  * struct fence - software synchronization primitive
  * @refcount: refcount for this fence
  * @ops: fence_ops associated with this fence
+ * @rcu: used for releasing fence with kfree_rcu
  * @cb_list: list of all callbacks to call
  * @lock: spin_lock_irqsave used for locking
  * @context: execution context this fence belongs to, returned by
@@ -70,6 +72,7 @@ struct fence_cb;
 struct fence {
 	struct kref refcount;
 	const struct fence_ops *ops;
+	struct rcu_head rcu;
 	struct list_head cb_list;
 	spinlock_t *lock;
 	unsigned context, seqno;
@@ -192,6 +195,20 @@ static inline struct fence *fence_get(struct fence *fence)
 }

 /**
+ * fence_get_rcu - get a fence from a reservation_object_list with rcu read lock
+ * @fence: [in] fence to increase refcount of
+ *
+ * Function returns NULL if no refcount could be obtained, or the fence.
+ */
+static inline struct fence *fence_get_rcu(struct fence *fence)
+{
+	if (kref_get_unless_zero(&fence->refcount))
+		return fence;
+	else
+		return NULL;
+}
+
+/**
  * fence_put - decreases refcount of the fence
  * @fence: [in] fence to reduce refcount of
  */
diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index 2affe67dea6e..5a0b64cf68b4 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -42,22 +42,29 @@
 #include <linux/ww_mutex.h>
 #include <linux/fence.h>
 #include <linux/slab.h>
+#include <linux/seqlock.h>
+#include <linux/rcupdate.h>

 extern struct ww_class reservation_ww_class;
+extern struct lock_class_key reservation_seqcount_class;
+extern const char reservation_seqcount_string[];

 struct reservation_object_list {
+	struct rcu_head rcu;
 	u32 shared_count, shared_max;
-	struct fence *shared[];
+	struct fence __rcu *shared[];
 };

 struct reservation_object {
 	struct ww_mutex lock;
+	seqcount_t seq;

-	struct fence *fence_excl;
-	struct reservation_object_list *fence;
+	struct fence __rcu *fence_excl;
+	struct reservation_object_list __rcu *fence;
 	struct reservation_object_list *staged;
 };

+#define reservation_object_held(obj) lockdep_is_held(&(obj)->lock.base)
 #define reservation_object_assert_held(obj) \
 	lockdep_assert_held(&(obj)->lock.base)

@@ -66,8 +73,9 @@ reservation_object_init(struct reservation_object *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);

-	obj->fence_excl = NULL;
-	obj->fence = NULL;
+	__seqcount_init(&obj->seq, reservation_seqcount_string, &reservation_seqcount_class);
+	RCU_INIT_POINTER(obj->fence, NULL);
+	RCU_INIT_POINTER(obj->fence_excl, NULL);
 	obj->staged = NULL;
 }

@@ -76,18 +84,20 @@ reservation_object_fini(struct reservation_object *obj)
 {
 	int i;
 	struct reservation_object_list *fobj;
+	struct fence *excl;

 	/*
 	 * This object should be dead and all references must have
-	 * been released to it.
+	 * been released to it, so no need to be protected with rcu.
 	 */
-	if (obj->fence_excl)
-		fence_put(obj->fence_excl);
+	excl = rcu_dereference_protected(obj->fence_excl, 1);
+	if (excl)
+		fence_put(excl);

-	fobj = obj->fence;
+	fobj = rcu_dereference_protected(obj->fence, 1);
 	if (fobj) {
 		for (i = 0; i < fobj->shared_count; ++i)
-			fence_put(fobj->shared[i]);
+			fence_put(rcu_dereference_protected(fobj->shared[i], 1));

 		kfree(fobj);
 	}

@@ -99,17 +109,15 @@ reservation_object_fini(struct reservation_object *obj)
 static inline struct reservation_object_list *
 reservation_object_get_list(struct reservation_object *obj)
 {
-	reservation_object_assert_held(obj);
-
-	return obj->fence;
+	return rcu_dereference_protected(obj->fence,
+					 reservation_object_held(obj));
 }

 static inline struct fence *
 reservation_object_get_excl(struct reservation_object *obj)
 {
-	reservation_object_assert_held(obj);
-
-	return obj->fence_excl;
+	return rcu_dereference_protected(obj->fence_excl,
+					 reservation_object_held(obj));
 }

 int reservation_object_reserve_shared(struct reservation_object *obj);
@@ -119,4 +127,16 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
 					 struct fence *fence);
 void reservation_object_add_excl_fence(struct reservation_object *obj,
 				       struct fence *fence);

+int reservation_object_get_fences_rcu(struct reservation_object *obj,
+				      struct fence **pfence_excl,
+				      unsigned *pshared_count,
+				      struct fence ***pshared);
+
+long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
+					 bool wait_all, bool intr,
+					 unsigned long timeout);
+
+bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
+					  bool test_all);
+
 #endif /* _LINUX_RESERVATION_H */
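For readers skimming the series, here is a minimal sketch of how a driver could use the new RCU entry points to synchronize against a reservation object without taking the ww_mutex. This example is not part of the patch; the helper name and the 3-second timeout are made up for illustration.

#include <linux/reservation.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

/* Hypothetical helper: wait until all fences on @resv have signaled. */
static int example_wait_for_fences(struct reservation_object *resv)
{
	long ret;

	/* Lock-free fast path: everything already signaled? */
	if (reservation_object_test_signaled_rcu(resv, true))
		return 0;

	/*
	 * Wait interruptibly on the exclusive and all shared fences
	 * (wait_all = true), giving up after 3 seconds. The helper
	 * retries internally if the fence pointers change under it.
	 */
	ret = reservation_object_wait_timeout_rcu(resv, true, true,
						  msecs_to_jiffies(3000));
	if (ret < 0)
		return ret;		/* e.g. -ERESTARTSYS on a signal */
	if (ret == 0)
		return -ETIMEDOUT;	/* fences still pending */

	return 0;			/* positive ret: jiffies remaining */
}

A caller that needs to keep references to the fences rather than wait on them would use reservation_object_get_fences_rcu() instead, dropping each returned fence with fence_put() when done.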
On Tue, Jul 01, 2014 at 12:57:02PM +0200, Maarten Lankhorst wrote:
> Android can expose fences to userspace. It's possible to make the new
> fence mechanism expose the same fences to userspace by changing
> sync_fence_create to take a struct fence instead of a struct sync_pt. No
> other change is needed, because only the fence parts of struct sync_pt
> are used. But because the userspace fences are a separate problem and I
> haven't really looked at it yet I feel it should stay in staging, for now.
Ok, that's reasonable.
At first glance, this all looks "sane" to me, any objection from anyone if I merge this through my driver-core tree for 3.17?
thanks,
greg k-h
On 02-07-14 07:37, Greg KH wrote:
> At first glance, this all looks "sane" to me, any objection from anyone
> if I merge this through my driver-core tree for 3.17?
Sounds good to me, let me know when you pull it in, so I can rebase my drm conversion on top of it. :-)
~Maarten
On Wed, Jul 02, 2014 at 11:12:32AM +0200, Maarten Lankhorst wrote:
> Sounds good to me, let me know when you pull it in, so I can rebase my
> drm conversion on top of it. :-)
All are now queued up in the driver-core-next branch of my driver-core tree; you should have gotten the emails about it.
thanks,
greg k-h
On Wed, Jul 2, 2014 at 7:37 AM, Greg KH gregkh@linuxfoundation.org wrote:
> At first glance, this all looks "sane" to me, any objection from anyone
> if I merge this through my driver-core tree for 3.17?
Ack from my side fwiw.
Just for the record (again) about android syncpts: I think using the android syncpt stuff as the official userspace interface for userspace that wants to do explicit syncing is the sane approach - after all, the only big platform using explicit fencing is Android, since opencl (the other party really interested in explicit fencing) is kinda not there yet on linux.
But before we de-stage android syncpts (and so bake in the userspace ABI forever) I really want to see a full upstream gpu driver implementation using fences and running on both X+DRI userspace (implicit syncing) and android (explicit syncing), including good test coverage for corner cases to make sure we've addressed all warts and hidden dragons. i915 looks like the most likely candidate to get there first (we have the code for both use-cases out-of-tree after all), so I'll keep on pestering people. No promises though ;-)
Cheers, Daniel
On Mon, Jul 07, 2014 at 03:23:17PM +0200, Daniel Vetter wrote:
> Ack from my side fwiw.
Thanks, I'll queue it up later today.
greg k-h
On Mon, Jul 07, 2014 at 10:30:52AM -0700, Greg KH wrote:
> Thanks, I'll queue it up later today.
btw should we add you as a (co)maintainer for driver/core/dma-buf since you seem to want to keep closer tabs on what the insane gfx folks are up to in there? -Daniel
On Tue, Jul 08, 2014 at 03:44:27PM +0200, Daniel Vetter wrote:
> btw should we add you as a (co)maintainer for driver/core/dma-buf since
> you seem to want to keep closer tabs on what the insane gfx folks are up
> to in there?
Sure, why not, what's one more maintainership...
Oh, does that mean you want me to be the one collecting the patches and forwarding them on to Linus? If so, that's fine, I can easily do that as well, since my infrastructure is set up for it.
thanks,
greg k-h
On 8 July 2014 20:09, Greg KH gregkh@linuxfoundation.org wrote:
> Sure, why not, what's one more maintainership...
>
> Oh, does that mean you want me to be the one collecting the patches and
> forwarding them on to Linus? If so, that's fine, I can easily do that as
> well, since my infrastructure is set up for it.
If you're ok with it, I could continue to do the collecting / forwarding business - I guess Daniel meant it more in the sense of 'not missing patches that need review'!
Up to you!
Thanks and best regards, ~Sumit
On Tue, Jul 08, 2014 at 08:22:11PM +0530, Sumit Semwal wrote:
> If you're ok with it, I could continue to do the collecting / forwarding
> business - I guess Daniel meant it more in the sense of 'not missing
> patches that need review'!
Hey, I'm more than willing to have other people do the real work of collecting / forwarding :)
I'll just review stuff as it floats by; that's not enough of a role to put me in "MAINTAINERS" at all.
thanks,
greg k-h
Hi Greg,
On 2 July 2014 11:07, Greg KH gregkh@linuxfoundation.org wrote:
> At first glance, this all looks "sane" to me, any objection from anyone
> if I merge this through my driver-core tree for 3.17?
Fwiw, Ack from me as well!
Best regards, ~Sumit.