On Thu, Feb 28, 2019 at 04:18:57PM -0800, Hyun Kwon wrote:
Hi Daniel,
On Thu, 2019-02-28 at 02:01:46 -0800, Daniel Vetter wrote:
On Wed, Feb 27, 2019 at 04:36:06PM -0800, Hyun Kwon wrote:
Hi Daniel,
On Wed, 2019-02-27 at 06:13:45 -0800, Daniel Vetter wrote:
On Tue, Feb 26, 2019 at 11:20 PM Hyun Kwon hyun.kwon@xilinx.com wrote:
Hi Daniel,
Thanks for the comment.
On Tue, 2019-02-26 at 04:06:13 -0800, Daniel Vetter wrote:
On Tue, Feb 26, 2019 at 12:53 PM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Sat, Feb 23, 2019 at 12:28:17PM -0800, Hyun Kwon wrote:
Add the dmabuf map / unmap interfaces. This allows the user driver to be able to import the external dmabuf and use it from user space.

Signed-off-by: Hyun Kwon hyun.kwon@xilinx.com
---
 drivers/uio/Makefile         |   2 +-
 drivers/uio/uio.c            |  43 +++++++++
 drivers/uio/uio_dmabuf.c     | 210 +++++++++++++++++++++++++++++++++++++++++++
 drivers/uio/uio_dmabuf.h     |  26 ++++++
 include/uapi/linux/uio/uio.h |  33 +++++++
 5 files changed, 313 insertions(+), 1 deletion(-)
 create mode 100644 drivers/uio/uio_dmabuf.c
 create mode 100644 drivers/uio/uio_dmabuf.h
 create mode 100644 include/uapi/linux/uio/uio.h

diff --git a/drivers/uio/Makefile b/drivers/uio/Makefile
index c285dd2..5da16c7 100644
--- a/drivers/uio/Makefile
+++ b/drivers/uio/Makefile
@@ -1,5 +1,5 @@
[snip]
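(The patch body itself is snipped above. Purely as an illustration of the interface under discussion, the uapi added in include/uapi/linux/uio/uio.h presumably looks something along these lines; the struct layout, ioctl numbers and names below are guesses for illustration, not copied from the posted patch.)

/* Hypothetical sketch of the UIO dmabuf map/unmap uapi under discussion.
 * Names, numbers and layout are assumptions, not taken from the patch.
 */
#include <linux/ioctl.h>
#include <linux/types.h>

struct uio_dmabuf_args {
	__s32 dbuf_fd;   /* dmabuf fd handed in from userspace */
	__u64 dma_addr;  /* bus address returned to userspace -- the contentious part */
	__u64 size;      /* size of the mapping */
	__u8  dir;       /* DMA data direction */
};

#define UIO_IOC_MAP_DMABUF	_IOWR('u', 0, struct uio_dmabuf_args)
#define UIO_IOC_UNMAP_DMABUF	_IOWR('u', 1, struct uio_dmabuf_args)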
Frankly, this looks like a ploy to sidestep review by the graphics folks. We'd ask for the userspace first :-)
Please refer to pull request [1].
For anyone interested in more details, libmetal is the abstraction layer that provides platform-independent APIs. The backend implementation can be selected per platform, e.g. RTOS, Linux, standalone (Xilinx), etc. For Linux, it supports UIO / vfio as of now. The actual user space drivers sit on top of libmetal; such drivers can be found in [2]. This is why I try to avoid any device-specific code in the Linux kernel.
Also, exporting dma_addr to userspace is considered a very bad idea.
I agree, hence the RFC to pick some brains. :-) Would it make sense if this call doesn't export the physical address, but instead takes only the dmabuf fd and the register offsets to be programmed?
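(To make that alternative concrete, here is a minimal sketch, with all names hypothetical, of an ioctl that never returns the bus address; the kernel maps the dmabuf and programs it into a device register itself.)

/* Hypothetical alternative uapi: userspace passes the dmabuf fd and a
 * register offset; the kernel keeps the dma_addr to itself and writes it
 * into the device register.  Illustration only, not the posted patch.
 */
struct uio_dmabuf_prog_args {
	__s32 dbuf_fd;     /* dmabuf to import and map for the device */
	__u32 reg_offset;  /* offset of the address register to program */
};

#define UIO_IOC_PROG_DMABUF	_IOW('u', 2, struct uio_dmabuf_prog_args)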
If you want to do this properly, you need a minimal in-kernel memory manager, and those tend to be based on top of drm_gem.c and merged through the gpu tree. The last place where we accidentally leaked a dma addr for gpu buffers was in the fbdev code, and we plugged that one with the commit quoted further down.
Could you please help me understand how having an in-kernel memory manager helps? Isn't it just moving the same dmabuf import / paddr export functionality into a different module: a kernel memory manager vs. uio? In fact, Xilinx does have such a memory manager, based on drm gem, downstream. But this time we took the approach of implementing this through a generic dmabuf allocator, ION, and enabling the import capability in the UIO infrastructure instead.
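(For reference, whichever module ends up hosting it, the in-kernel import side boils down to the standard dma-buf attach/map sequence. A minimal sketch, with error paths trimmed and teardown omitted, assuming the importing driver owns a struct device:)

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Minimal dmabuf import sketch: fd -> dma_buf -> attachment -> sg_table.
 * A real driver must keep dbuf/attach/sgt around for later teardown with
 * dma_buf_unmap_attachment() / dma_buf_detach() / dma_buf_put().
 */
static int import_dmabuf(struct device *dev, int fd, dma_addr_t *dma_addr)
{
	struct dma_buf *dbuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dbuf = dma_buf_get(fd);
	if (IS_ERR(dbuf))
		return PTR_ERR(dbuf);

	attach = dma_buf_attach(dbuf, dev);
	if (IS_ERR(attach)) {
		dma_buf_put(dbuf);
		return PTR_ERR(attach);
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dbuf, attach);
		dma_buf_put(dbuf);
		return PTR_ERR(sgt);
	}

	/* The debated step: whether this address may ever reach userspace. */
	*dma_addr = sg_dma_address(sgt->sgl);
	return 0;
}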
There's a group of people working on upstreaming a xilinx drm driver already. Which driver are we talking about? Can you pls provide a link to that xilinx drm driver?
The one I was pushing [1] is implemented purely for display, and not intended for anything other than that as of now. What I'm referring to above is part of the Xilinx FPGA (acceleration) runtime [2]. As far as I know, it's planned to be upstreamed, but that work hasn't started yet. The Xilinx runtime software has its own in-kernel memory manager based on drm_cma_gem with its own ioctls [3].
Thanks, -hyun
[1] https://patchwork.kernel.org/patch/10513001/
[2] https://github.com/Xilinx/XRT
[3] https://github.com/Xilinx/XRT/tree/master/src/runtime_src/driver/zynq/drm
I've done a very quick look only, and yes this is kinda what I'd expect. Doing a small drm gem driver for an fpga/accelerator that needs lots of memory is the right architecture, since at the low level of kernel interfaces a gpu really isn't anything other than an accelerator.
And from a very cursory look the gem driver you mentioned (I only scrolled through the ioctl handler quickly) looks reasonable.
Thanks for taking the time to look and share input. But I'd still like to understand why it's more reasonable for a similar ioctl to exist in drm than in uio. Is it because such a drm ioctl is vendor specific?
We do have quite a pile of shared infrastructure in drm beyond just the vendor-specific ioctl. So putting accelerator drivers there makes sense, whether the device being programmed is a GPU, some neural network accelerator, an FPGA or something else. The one issue is that we require open source userspace together with your driver, since the accelerator shim in the kernel alone is fairly useless (both for review and for doing anything with it).
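(As a small illustration of that shared infrastructure -- a generic sketch, not taken from any of the drivers discussed here -- a driver-private ioctl in drm is declared roughly like this, and gets argument copying, locking and permission handling from the drm core:)

#include <linux/errno.h>
#include <linux/types.h>
#include <drm/drm_device.h>
#include <drm/drm_file.h>
#include <drm/drm_ioctl.h>

/* Hypothetical driver-private ioctl for an imaginary "myaccel" driver. */
#define DRM_MYACCEL_SUBMIT	0x00

struct drm_myaccel_submit {
	__u32 handle;	/* GEM handle of the command buffer */
	__u32 flags;
};

#define DRM_IOCTL_MYACCEL_SUBMIT \
	DRM_IOWR(DRM_COMMAND_BASE + DRM_MYACCEL_SUBMIT, struct drm_myaccel_submit)

static int myaccel_submit_ioctl(struct drm_device *dev, void *data,
				struct drm_file *file)
{
	struct drm_myaccel_submit *args = data;

	if (args->flags)
		return -EINVAL;
	/* look up args->handle with drm_gem_object_lookup(), queue work, ... */
	return 0;
}

/* Hooked up via the ioctls array in struct drm_driver. */
static const struct drm_ioctl_desc myaccel_ioctls[] = {
	DRM_IOCTL_DEF_DRV(MYACCEL_SUBMIT, myaccel_submit_ioctl,
			  DRM_RENDER_ALLOW),
};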
But there are also some kernel maintainers who disagree and happily take drivers originally written for drm and then rewritten as non-drm drivers for upstream, to avoid the drm folks (or at least it very much looks like that, and it happens fairly regularly).
Cheers, Daniel
Thanks, -hyun
-Daniel
Thanks, Daniel
Thanks, -hyun
[1] https://github.com/OpenAMP/libmetal/pull/82/commits/951e2762bd487c98919ad12f...
[2] https://github.com/Xilinx/embeddedsw/tree/master/XilinxProcessorIPLib/driver...
commit 4be9bd10e22dfc7fc101c5cf5969ef2d3a042d8a (tag: drm-misc-next-fixes-2018-10-03)
Author: Neil Armstrong narmstrong@baylibre.com
Date:   Fri Sep 28 14:05:55 2018 +0200
drm/fb_helper: Allow leaking fbdev smem_start
Together with cuse, the above patch should be enough to implement a drm driver entirely in userspace, at least.
Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch