On Tue, Feb 26, 2019 at 11:20 PM Hyun Kwon <hyun.kwon@xilinx.com> wrote:
Hi Daniel,
Thanks for the comment.
On Tue, 2019-02-26 at 04:06:13 -0800, Daniel Vetter wrote:
On Tue, Feb 26, 2019 at 12:53 PM Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
On Sat, Feb 23, 2019 at 12:28:17PM -0800, Hyun Kwon wrote:
Add the dmabuf map / unmap interfaces. This allows the user driver to import an external dmabuf and use it from user space.
Signed-off-by: Hyun Kwon <hyun.kwon@xilinx.com>
---
 drivers/uio/Makefile         |   2 +-
 drivers/uio/uio.c            |  43 +++++++++
 drivers/uio/uio_dmabuf.c     | 210 +++++++++++++++++++++++++++++++++++++++++++
 drivers/uio/uio_dmabuf.h     |  26 ++++++
 include/uapi/linux/uio/uio.h |  33 +++++++
 5 files changed, 313 insertions(+), 1 deletion(-)
 create mode 100644 drivers/uio/uio_dmabuf.c
 create mode 100644 drivers/uio/uio_dmabuf.h
 create mode 100644 include/uapi/linux/uio/uio.h
diff --git a/drivers/uio/Makefile b/drivers/uio/Makefile
index c285dd2..5da16c7 100644
--- a/drivers/uio/Makefile
+++ b/drivers/uio/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_UIO) += uio.o
+obj-$(CONFIG_UIO) += uio.o uio_dmabuf.o
 obj-$(CONFIG_UIO_CIF) += uio_cif.o
 obj-$(CONFIG_UIO_PDRV_GENIRQ) += uio_pdrv_genirq.o
 obj-$(CONFIG_UIO_DMEM_GENIRQ) += uio_dmem_genirq.o
diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
index 1313422..6841f98 100644
--- a/drivers/uio/uio.c
+++ b/drivers/uio/uio.c
@@ -24,6 +24,12 @@
 #include <linux/kobject.h>
 #include <linux/cdev.h>
 #include <linux/uio_driver.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <uapi/linux/uio/uio.h>
+#include "uio_dmabuf.h"
#define UIO_MAX_DEVICES (1U << MINORBITS)
@@ -454,6 +460,8 @@ static irqreturn_t uio_interrupt(int irq, void *dev_id)
 struct uio_listener {
 	struct uio_device *dev;
 	s32 event_count;
+	struct list_head dbufs;
+	struct mutex dbufs_lock; /* protect @dbufs */
 };
 static int uio_open(struct inode *inode, struct file *filep)

@@ -500,6 +508,9 @@ static int uio_open(struct inode *inode, struct file *filep)
 	if (ret)
 		goto err_infoopen;

+	INIT_LIST_HEAD(&listener->dbufs);
+	mutex_init(&listener->dbufs_lock);
+
 	return 0;

 err_infoopen:

@@ -529,6 +540,10 @@ static int uio_release(struct inode *inode, struct file *filep)
 	struct uio_listener *listener = filep->private_data;
 	struct uio_device *idev = listener->dev;
+	ret = uio_dmabuf_cleanup(idev, &listener->dbufs, &listener->dbufs_lock);
+	if (ret)
+		dev_err(&idev->dev, "failed to clean up the dma bufs\n");
+
 	mutex_lock(&idev->info_lock);
 	if (idev->info && idev->info->release)
 		ret = idev->info->release(idev->info, inode);
@@ -652,6 +667,33 @@ static ssize_t uio_write(struct file *filep, const char __user *buf,
 	return retval ? retval : sizeof(s32);
 }
+static long uio_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
We have resisted adding a uio ioctl for a long time, can't you do this through sysfs somehow?
A meta-comment about your ioctl structure:
+#define UIO_DMABUF_DIR_BIDIR	1
+#define UIO_DMABUF_DIR_TO_DEV	2
+#define UIO_DMABUF_DIR_FROM_DEV	3
+#define UIO_DMABUF_DIR_NONE	4
enumerated type?
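A sketch of what that suggestion could look like. The names and values below come straight from the quoted hunk; expressing them as an enum is only the reviewer's idea, not what the posted patch contains, and the dma_data_direction mapping in the comments is an assumption:

```c
/* Sketch only: UIO_DMABUF_DIR_* names/values are from the quoted patch;
 * the enum form is just the suggestion above.  The mapping to the
 * kernel's DMA directions in the comments is assumed, not confirmed. */
enum uio_dmabuf_dir {
	UIO_DMABUF_DIR_BIDIR    = 1,	/* presumably DMA_BIDIRECTIONAL */
	UIO_DMABUF_DIR_TO_DEV   = 2,	/* presumably DMA_TO_DEVICE */
	UIO_DMABUF_DIR_FROM_DEV = 3,	/* presumably DMA_FROM_DEVICE */
	UIO_DMABUF_DIR_NONE     = 4,	/* presumably DMA_NONE */
};
```

One caveat with enums in a uapi header: C does not pin down the storage size of an enum, which is why uapi structs usually keep the field itself a fixed-width __u32 and use the enum (or #defines) only for the values.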
+struct uio_dmabuf_args {
+	__s32	dbuf_fd;
+	__u64	dma_addr;
+	__u64	size;
+	__u32	dir;
+};
Why the odd alignment? Are you sure this is the best packing for such a structure?
Why is dbuf_fd __s32? dir can be __u8, right?
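The packing point can be checked directly: on common 64-bit ABIs (x86-64, AArch64), the member order as posted leaves a 4-byte hole after dbuf_fd (the first __u64 must be 8-byte aligned) plus 4 bytes of tail padding, for 32 bytes total; putting the 64-bit members first shrinks the struct to a hole-free 24 bytes. A standalone sketch, using stdint types in place of the kernel's __s32/__u64:

```c
#include <stdint.h>
#include <stddef.h>

/* Member order as posted: 4-byte hole after dbuf_fd, 4 bytes tail padding,
 * sizeof == 32 on typical LP64 ABIs. */
struct uio_dmabuf_args_as_posted {
	int32_t  dbuf_fd;	/* offset 0, then a 4-byte hole */
	uint64_t dma_addr;	/* offset 8 */
	uint64_t size;		/* offset 16 */
	uint32_t dir;		/* offset 24, then 4 bytes tail padding */
};

/* 64-bit members first: no holes, sizeof == 24, and dir could even
 * shrink to a __u8 as suggested. */
struct uio_dmabuf_args_repacked {
	uint64_t dma_addr;	/* offset 0 */
	uint64_t size;		/* offset 8 */
	int32_t  dbuf_fd;	/* offset 16 */
	uint32_t dir;		/* offset 20 */
};
```

(As for __s32: file descriptors are ints in userspace, and a signed type lets -1 denote "no fd", so __s32 is at least defensible.)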
I don't know that dma layer very well, it would be good to get some review from others to see if this really is even a viable thing to do. The fd handling seems a bit "odd" here, but maybe I just do not understand it.
Frankly looks like a ploy to sidestep review by graphics folks. We'd ask for the userspace first :-)
Please refer to pull request [1].
For anyone interested in more details: libmetal is the abstraction layer that provides platform-independent APIs. The backend implementation can be selected per platform, e.g. RTOS, Linux, or standalone (Xilinx). For Linux, it supports UIO / VFIO as of now. The actual user-space drivers sit on top of libmetal; such drivers can be found in [2]. This is why I try to avoid any device-specific code in the Linux kernel.
Also, exporting dma_addr to userspace is considered a very bad idea.
I agree, hence the RFC to pick some brains. :-) Would it make sense if this call doesn't export the physical address, but instead takes only the dmabuf fd and the register offsets to be programmed?
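One hypothetical shape for that proposal, purely illustrative (every name below is made up for the sketch, none of it is from the posted patch): userspace passes only the dmabuf fd, the DMA direction, and the offset of the address register inside the UIO region; the kernel maps the buffer and programs the register with the resulting dma_addr itself, so no bus/physical address ever crosses the uapi boundary.

```c
#include <stdint.h>

/* Hypothetical args for a "map and program" ioctl.  The kernel would
 * import dbuf_fd, dma-map it, and write the mapped address into the
 * device register at reg_offset on userspace's behalf, instead of
 * handing dma_addr back.  Ordered so there are no padding holes. */
struct uio_dmabuf_map_args {
	int32_t  dbuf_fd;	/* dmabuf to import */
	uint32_t dir;		/* UIO_DMABUF_DIR_* */
	uint64_t reg_offset;	/* address register offset in the UIO mapping */
	uint64_t size;		/* expected buffer size, for validation */
};
```

The trade-off is that the kernel now needs to know which offsets are legitimate address registers, which pulls at least a little device knowledge back into the kernel side.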
If you want to do this properly, you need a minimal in-kernel memory manager, and those tend to be based on top of drm_gem.c and merged through the gpu tree. The last place where we accidentally leaked a dma addr for gpu buffers was in the fbdev code, and we plugged that one with
Could you please help me understand how having an in-kernel memory manager helps? Isn't it just moving the same dmabuf-import / paddr-export functionality into a different module: a kernel memory manager instead of UIO? In fact, Xilinx does have such a memory manager based on drm_gem downstream. But this time we took the approach of implementing this through the generic dmabuf allocator, ION, and enabling the import capability in the UIO infrastructure instead.
There's a group of people working on upstreaming a Xilinx drm driver already. Which driver are we talking about? Can you please provide a link to that Xilinx drm driver?
Thanks, Daniel
Thanks, -hyun
[1] https://github.com/OpenAMP/libmetal/pull/82/commits/951e2762bd487c98919ad12f... [2] https://github.com/Xilinx/embeddedsw/tree/master/XilinxProcessorIPLib/driver...
commit 4be9bd10e22dfc7fc101c5cf5969ef2d3a042d8a (tag: drm-misc-next-fixes-2018-10-03)
Author: Neil Armstrong <narmstrong@baylibre.com>
Date:   Fri Sep 28 14:05:55 2018 +0200

    drm/fb_helper: Allow leaking fbdev smem_start
Together with CUSE, the above patch should be enough to implement a drm driver entirely in userspace, at least.
Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
linaro-mm-sig@lists.linaro.org