[Linaro-mm-sig] Minutes from V4L2 update call
kochhar.abhinav at gmail.com
Tue Apr 17 11:58:53 UTC 2012
What about the mapping for importing devices which have an IOMMU?
To update the mapping in the page tables accessed by the importing device's
IOMMU, do we need to create the mapping on the exporter side, or must the
importing device take the mapped sg returned by the exporter and create a
mapping for its own IOMMU?
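For reference, a minimal sketch of the importer-side flow as the dma-buf interfaces define it (kernel code, not buildable standalone; `fd` and `importer_dev` are placeholders and error handling is omitted):

```c
struct dma_buf *dbuf = dma_buf_get(fd);      /* fd received from userspace */
struct dma_buf_attachment *attach;
struct sg_table *sgt;

/* Attach with the importer's own struct device ... */
attach = dma_buf_attach(dbuf, importer_dev);

/* ... so the exporter can return a scatterlist whose DMA addresses are
 * already valid for importer_dev, i.e. mapped through its IOMMU if any. */
sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
```

Under the behaviour the spec describes, sg_dma_address(sgt->sgl) is then directly usable by importer_dev's DMA engine, with no extra dma_map_sg() call on the importer side.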
On Tue, Apr 17, 2012 at 8:40 PM, Tomasz Stanislawski <
t.stanislaws at samsung.com> wrote:
> Hello everyone,
> I would like to ask about the agreement on a behavior of
> a DMABUF exporter for dma_buf_map_attachment.
> According to the DMABUF spec the exporter should return a scatterlist
> mapped into the importer's DMA space. However, there were issues about
> this behaviour, so I made a short survey of the mapping strategy used by
> DMABUF patches for several subsystems:
> 1. V4L2 - support for dmabuf importing, hopefully consistent with the
> dmabuf spec.
> The patch "v4l: vb2-dma-contig: change map/unmap behaviour for importers"
> implements DMA mapping performed on the importer side. However, the patch
> can be dropped at no cost.
> 2. Exynos DRM - the latest version implements mapping on the exporter side
> 3. Omap/DRM - 'mapping' is done on the exporter side by setting a physical
> address as the DMA address in the scatterlist. The dma_map_sg should be
> used for this purpose.
> 4. nouveau/i915 by Dave Airlie - mapping for the client is done on the
> importer side.
> Does this mean that it is agreed that the exporter is responsible for the
> mapping into the client's space?
> Tomasz Stanislawski
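Tomasz's survey leans toward mapping on the exporter side. A hedged sketch of what that looks like in an exporter's map_dma_buf callback (kernel code; the function name and the omitted scatterlist-building step are placeholders, not from this thread):

```c
static struct sg_table *my_map_dma_buf(struct dma_buf_attachment *attach,
                                       enum dma_data_direction dir)
{
	struct sg_table *sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);

	if (!sgt)
		return ERR_PTR(-ENOMEM);

	/* ... build sgt from the exporter's backing pages
	 * (attach->dmabuf->priv), step omitted here ... */

	/* The key point: map with attach->dev -- the importer's device --
	 * not the exporter's, so the returned DMA addresses are valid in
	 * the importer's DMA space and go through its IOMMU if it has one. */
	if (dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir) <= 0) {
		kfree(sgt);
		return ERR_PTR(-EIO);
	}
	return sgt;
}
```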
> On 03/27/2012 11:39 AM, Laurent Pinchart wrote:
> > Hi Daniel,
> > On Thursday 22 March 2012 19:01:01 Daniel Vetter wrote:
> >> On Thu, Mar 22, 2012 at 11:54:55AM -0300, Christian Robottom Reis wrote:
> >>> Tomasz: proposed extension to DMA Mapping -- dma_get_pages
> >>>   - Currently difficult to change the camera address into a list of
> >>>     pages
> >>>   - DMA framework has the knowledge of this list and could do this
> >>>   - Depends on dma_get_pages, which needs to be merged first
> >>>   - Test application posted to dri-devel, with many dependencies
> >>>     needed to run
> >> I kinda missed to yell at this patch when it first showed up, so I'll do
> >> that here ;-)
> >> I think this is a gross layering violation and I don't like it at all.
> >> The entire point of the dma api is that device drivers only get to see
> >> DMA addresses and can forget about all the remapping/contig-alloc
> >> madness. dma-buf should just follow this with its map/unmap interfaces.
> >> Furthermore the exporter memory might simply not have any associated
> >> struct pages. The two examples I always bring up:
> >> - special purpose remapping units (like omap's TILER) which are managed
> >> by the exporter and can do crazy things like tiling or rotation
> >> transparently for all devices.
> >> - special carve-out memory which is unknown to linux memory management.
> >> drm/i915 is totally abusing this, mostly because windows is lame and
> >> doesn't have decent largepage allocation support. This is just plain
> >> system memory, but there's no struct page for it (because it's not
> >> part of the system map).
> > I agree with you that the DMA API is the proper layer to abstract the
> > memory and provide devices with a DMA address. DMA addresses are
> > specific to a device, while dma-buf needs to share buffers between
> > separate devices (otherwise it would be pretty pointless). As DMA
> > addresses are device-specific, they can't be used to describe a
> > cross-device buffer.
> > When allocating a buffer using the DMA API, memory is "allocated" behind
> > the scenes and mapped to the device address space ("allocated" covering
> > in this case anything from plain physical memory allocation to
> > reservation of a special-purpose memory range, like in the OMAP TILER
> > example). All the device gets to see is the DMA address and/or the DMA
> > scatter list. So far, so good.
> > Then, when we want to share the memory with a second device, we need a
> > way to map the memory to the second device's address space. There are
> > several options here (and this is related to the "[RFCv2 PATCH 7/9]
> > v4l: vb2-dma-contig: change map/unmap behaviour" mail thread).
> > - Let the importer driver map the memory to its own address space. This
> > makes sense from the importer device's point of view, as that's where
> > knowledge about the importer device is located (although you could argue
> > that knowledge about the importer device is located in its struct
> > device, which can be passed around - and I could agree with that). The
> > importer driver would thus need to receive a cookie identifying the
> > memory. As explained before, the exporter's DMA address isn't enough.
> > There are various options here as well (list of pages or page frame
> > numbers, exporter's DMA address + exporter's struct device, a new kind
> > of DMA API-related cookie, ... to just list a few). The importer driver
> > would then use that cookie to map the memory to the importer device's
> > address space (and this should most probably be implemented in the DMA
> > API, which would require extensions).
> > - Let the exporter driver map the memory to the importer device's
> > address space. This makes sense from the exporter device's point of
> > view, as that's where knowledge about the exported memory is located. In
> > this case we would most probably want to extend the DMA API to handle
> > the mapping operation, and we will need to pass the same kind of cookie
> > as in the first option to the DMA API.
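Both of Laurent's options call for DMA API extensions and a cross-device cookie. Purely as an illustration of the shape such an extension might take -- this function does not exist, and the name and signature are invented here for discussion:

```c
/* Hypothetical helper, not a real kernel API: take the exporter's
 * device plus its sg_table as the "cookie" and produce a new sg_table
 * mapped into the importer's DMA space. */
struct sg_table *dma_map_sg_cross(struct device *exporter,
                                  struct device *importer,
                                  struct sg_table *exp_sgt,
                                  enum dma_data_direction dir);
```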
> >> Now the core dma api isn't fully up to snuff for everything yet and
> >> there are things missing. But it's certainly not dma_get_pages; it's
> >> more like mmap support for coherent memory, or allocating coherent
> >> memory that doesn't have a static mapping in the kernel address space.
> >> I very much hope that the interfaces we develop for dma-buf (and the
> >> insights gained) could help as examples here, so that in the future
> >> there's not such a gaping difference for the driver between
> >> dma_coherent allocations of its own and imported buffer objects.
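Daniel's "mmap support for coherent memory" point can be illustrated with a short driver-side sketch: dma_mmap_coherent() maps a buffer obtained from dma_alloc_coherent() into a userspace VMA with the correct attributes. `struct my_dev` and its fields are placeholders:

```c
static int my_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_dev *mdev = file->private_data;

	/* vaddr and dma_addr were returned by dma_alloc_coherent() earlier */
	return dma_mmap_coherent(mdev->dev, vma, mdev->vaddr, mdev->dma_addr,
				 vma->vm_end - vma->vm_start);
}
```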
> Linaro-mm-sig mailing list
> Linaro-mm-sig at lists.linaro.org