Just as a heads-up, I've been in contact with Mauro, who is (a) still quite busy and (b) still blocked on hardware with an operational sensor. I'm working on unblocking him, at least on (b), and I'm really hoping we're all thinking of 3.5 as the window for this work to go in.
On Fri, Mar 30, 2012 at 02:35:52PM +0200, Tomasz Stanislawski wrote:
Hi Dave,
Thank you for the suggestion of using vmap/vunmap extensions for dmabuf. It was exactly what I needed, and it leads to a trivial (circa 60 lines) implementation of a DMABUF importer for vmalloc.
I prepared a PoC implementation and successfully tested it using s5p-tv as the exporter and VIVI as the importer. I will prepare and post the patches and a test application for the DRM-to-VIVI version next week. Exynos DRM uses dma_alloc_coherent for buffer allocation, which makes the implementation of vmap/vunmap trivial.
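On the exporter side this boils down to returning the kernel virtual address that dma_alloc_coherent already provides. A minimal sketch, assuming vmap/vunmap ops shaped as in Dave's branch; the structure and field names (exynos_buf, kvaddr) are illustrative, not the actual Exynos DRM code:

#include <linux/dma-buf.h>

/* illustrative exporter-private buffer state */
struct exynos_buf {
        void *kvaddr;                   /* from dma_alloc_coherent() */
        dma_addr_t dma_addr;
};

/* ->vmap: the buffer is already kernel-mapped, so just hand it out */
static void *exynos_dmabuf_vmap(struct dma_buf *dbuf)
{
        struct exynos_buf *buf = dbuf->priv;

        return buf->kvaddr;
}

/* ->vunmap: nothing to undo, the mapping lives as long as the buffer */
static void exynos_dmabuf_vunmap(struct dma_buf *dbuf, void *vaddr)
{
}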
Memory coherence fixes all caching problems :)
I decided to give up on setting up CPU access via the sglist provided by dma_buf_map_attachment. I agree with you that mappings in the vmalloc area are a scarce resource, so mapping a scatterlist into the vmalloc area is not a good idea.
The kmap interface proposed by Daniel Vetter is much more generic and robust, but it does not suit the designs of vb2-vmalloc and VIVI well.
Adding a requirement that consecutive pages be mapped to consecutive addresses would help little, because VIVI has to touch the whole buffer anyway; vb2-vmalloc would therefore have to kmap all pages during the map_dmabuf operation. Moreover, asking kmap to map consecutive pages to consecutive addresses could badly fragment the vmalloc area.
Calling a simple vmap callback would do exactly what is needed for vmalloc.
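With such a callback, the whole map/unmap pair on the importer side collapses to one call each. A rough sketch against the interface from Dave's branch; the buffer fields (dbuf, vaddr) are illustrative:

#include <linux/dma-buf.h>

/* illustrative importer-private state */
struct vb2_vmalloc_buf {
        struct dma_buf *dbuf;
        void *vaddr;
};

static void *vb2_vmalloc_map_dmabuf(void *mem_priv)
{
        struct vb2_vmalloc_buf *buf = mem_priv;

        /* one contiguous kernel mapping of the whole buffer */
        buf->vaddr = dma_buf_vmap(buf->dbuf);
        return buf->vaddr;
}

static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
{
        struct vb2_vmalloc_buf *buf = mem_priv;

        dma_buf_vunmap(buf->dbuf, buf->vaddr);
        buf->vaddr = NULL;
}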
Regards, Tomasz Stanislawski
On 03/27/2012 02:48 PM, Dave Airlie wrote:
On Tue, Mar 27, 2012 at 1:25 PM, Tomasz Stanislawski <t.stanislaws@samsung.com> wrote:
Hi Everyone,
I started preparing support for DMABUF in the VIVI allocator and encountered a problem that calls for an important design decision.
Option I
Use existing dma_buf_attach/dma_buf_map_attachment mechanism.
Support in the vb2-vmalloc allocator (and thus in VIVI) would be relatively easy to implement if one were allowed to call dma_buf_attach on a NULL device. AFAIK, the dmabuf code does not dereference this pointer. Permitting a NULL device pointer would allow a DMABUF to be accessed by an importer that is not associated with any device (like VIVI). After obtaining the sglist, the importer would map it into its kernel address space using kmap, vm_map_ram, or remap_pfn_range. Note that the memory is pinned after calling dma_buf_map_attachment, so it will not be freed in parallel.
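To make this concrete, here is an untested sketch of such an import path (the function name is hypothetical and error handling is minimal); it assumes dma_buf_attach tolerates a NULL device, which is exactly the change proposed above:

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *import_to_vmalloc(struct dma_buf *dbuf, unsigned long size)
{
        unsigned int n_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
        struct dma_buf_attachment *attach;
        struct sg_table *sgt;
        struct scatterlist *sg;
        struct page **pages;
        unsigned int i, j, p = 0;
        void *vaddr = NULL;

        attach = dma_buf_attach(dbuf, NULL);    /* importer has no device */
        if (IS_ERR(attach))
                return NULL;

        /* pins the memory until dma_buf_unmap_attachment() */
        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt))
                goto err_detach;

        pages = kmalloc(n_pages * sizeof(*pages), GFP_KERNEL);
        if (!pages)
                goto err_unmap;

        /* a single sg entry may cover more than one page */
        for_each_sg(sgt->sgl, sg, sgt->nents, i)
                for (j = 0; j < sg->length >> PAGE_SHIFT && p < n_pages; j++)
                        pages[p++] = nth_page(sg_page(sg), j);

        /* one contiguous kernel virtual mapping for the importer */
        vaddr = vm_map_ram(pages, n_pages, -1, PAGE_KERNEL);
        kfree(pages);
        if (vaddr)
                return vaddr;

err_unmap:
        dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
err_detach:
        dma_buf_detach(dbuf, attach);
        return NULL;
}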
Cache flushing would still be an unsolved problem, but the same situation occurs for non-NULL devices. It may be fixed by a future extension to the dmabuf API.
I prefer this approach because it is compatible with the 'importer-maps-memory-for-importer' strategy.
Option II
Support for kernel CPU access to a DMABUF was recently proposed by Daniel Vetter. It seems suitable for VIVI and the vb2-vmalloc allocator.
However, there are some issues:
- VIVI requires the whole kernel mapping to be contiguous in the VMALLOC area, accessible through a single pointer to the first byte of the buffer. The interface proposed by Daniel involves calling dma_buf_kmap, which takes a page number as its argument, but the spec does not guarantee that page n and page n+1 will be mapped to consecutive addresses (see the sketch after this list). I think this requirement should be added to the spec.
- AFAIK, using the dma_buf_kmap interface does not require any attach operation, so CPU access is a more-or-less parallel mechanism for memory access. Calling dma_buf_map_attachment seems equivalent to calling dma_buf_begin_cpu_access for the whole buffer, plus kmap for all pages if the exporter is responsible for mapping the memory at dma_buf_map_attachment time.
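For comparison, per-page access through the proposed interface would look roughly like this for an importer that has to touch the whole buffer (a sketch using the entry points from Daniel's RFC; the function name and the memset standing in for VIVI's pattern generation are illustrative):

#include <linux/dma-buf.h>
#include <linux/string.h>

static int touch_whole_buffer(struct dma_buf *dbuf, size_t size)
{
        unsigned long i, n_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
        void *vaddr;
        int ret;

        ret = dma_buf_begin_cpu_access(dbuf, 0, size, DMA_BIDIRECTIONAL);
        if (ret)
                return ret;

        for (i = 0; i < n_pages; i++) {
                vaddr = dma_buf_kmap(dbuf, i);
                /* work on one page at a time; nothing guarantees that
                 * the mapping of page i + 1 follows this one */
                memset(vaddr, 0, PAGE_SIZE);
                dma_buf_kunmap(dbuf, i, vaddr);
        }

        dma_buf_end_cpu_access(dbuf, 0, size, DMA_BIDIRECTIONAL);
        return 0;
}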
Is it really worth introducing a parallel mechanism?
Could you give me a hint as to which solution is better?
Option III - write a vmap interface, a la:
http://cgit.freedesktop.org/~airlied/linux/commit/?h=drm-dmabuf2&id=c481...
I'm using this for i915->udl mappings, though vmap is a limited resource and I'm sure on ARM it's even more limited.
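For readers without the branch handy, the extension is small. Roughly what it adds (a sketch; the final names may differ): two optional ops on struct dma_buf_ops, void *(*vmap)(struct dma_buf *) and void (*vunmap)(struct dma_buf *, void *vaddr), plus thin wrappers:

void *dma_buf_vmap(struct dma_buf *dmabuf)
{
        /* optional op: exporters that can't vmap simply leave it NULL */
        if (dmabuf->ops->vmap)
                return dmabuf->ops->vmap(dmabuf);
        return NULL;
}

void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
{
        if (dmabuf->ops->vunmap)
                dmabuf->ops->vunmap(dmabuf, vaddr);
}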
Dave.