[Linaro-mm-sig] Minutes from V4L2 update call
daeinki at gmail.com
Fri Mar 30 17:21:00 UTC 2012
I have updated DRM PRIME support for Exynos DRM; for this you can
refer to the link below. This branch is based on the
drm-prime-dmabuf-initial tree posted by Dave recently.
I have already tested this version internally (import and export
only) and it worked fine. Next week I will try to test it with the
V4L2-based MFC and FIMC drivers on the Exynos4412 SoC.
Please let me know if there are any problems.
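For reference, the import/export round trip exercised by such a test
looks roughly like the following sketch, using the libdrm PRIME
helpers (how the GEM handle is created is driver-specific and assumed
here):

#include <stdint.h>
#include <xf86drm.h>   /* drmPrimeHandleToFD, drmPrimeFDToHandle */

/* Export a GEM handle as a dma-buf fd, then import it back.
 * drm_fd is an open DRM device node; handle is an existing GEM
 * handle (creation is driver-specific, e.g. an Exynos GEM ioctl). */
static int prime_roundtrip(int drm_fd, uint32_t handle)
{
        int prime_fd;
        uint32_t imported;

        /* export: GEM handle -> dma-buf file descriptor */
        if (drmPrimeHandleToFD(drm_fd, handle, DRM_CLOEXEC, &prime_fd))
                return -1;

        /* import: dma-buf file descriptor -> GEM handle */
        if (drmPrimeFDToHandle(drm_fd, prime_fd, &imported))
                return -1;

        return 0;
}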
On Fri, Mar 30, 2012 at 5:35 AM, Tomasz Stanislawski <t.stanislaws at samsung.com> wrote:
> Hi Dave,
> Thank you for the suggestion of using vmap/vunmap extensions for dmabuf.
> It was exactly what I needed. It leads to a trivial (circa 60 lines)
> implementation of a DMABUF importer for vmalloc.
> I prepared a PoC implementation and successfully tested it using
> s5p-tv as the exporter and VIVI as the importer. I will prepare and
> post the patches and a test application for the DRM-vs-VIVI version next week.
> The Exynos DRM makes use of dma_alloc_coherent for buffer
> allocation, making the implementation of vmap/vunmap trivial.
> Memory coherence fixes all caching problems :)
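> A minimal sketch of what the exporter side reduces to under that
> assumption (struct and field names here are illustrative, not the
> actual Exynos ones):
>
> static void *exynos_dmabuf_vmap(struct dma_buf *dmabuf)
> {
>         struct my_buffer *buf = dmabuf->priv;
>
>         /* dma_alloc_coherent() already provided a kernel mapping */
>         return buf->kvaddr;
> }
>
> static void exynos_dmabuf_vunmap(struct dma_buf *dmabuf, void *vaddr)
> {
>         /* nothing to do: the coherent mapping lives as long as the buffer */
> }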
> I decided to give up setting up CPU access using the sglist provided
> by dma_buf_map_attachment. I agree with you that mappings in the
> vmalloc area are a scarce resource. Therefore mapping a scatterlist
> into the vmalloc area is not a good idea.
> The kmap interface presented by Vetter is much more generic and robust,
> but it does not suit vb2-vmalloc's and VIVI's designs well.
> Adding a requirement that consecutive pages be mapped to consecutive
> addresses would help little, because VIVI has to touch the whole
> buffer anyway. Therefore vb2-vmalloc would have to kmap all pages
> during the map_dmabuf operation. Moreover, requiring kmap to map
> consecutive pages to consecutive addresses may lead to severe
> fragmentation of the vmalloc area.
> Calling a simple vmap callback would do exactly what needs to be done
> for vmalloc.
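> For vb2-vmalloc, the importer side would then reduce to something
> like this sketch (assuming dma_buf_vmap/dma_buf_vunmap wrappers
> shaped like the proposed callbacks; names are illustrative):
>
> static int vb2_vmalloc_map_dmabuf(void *mem_priv)
> {
>         struct vb2_vmalloc_buf *buf = mem_priv;
>
>         /* one call yields a contiguous kernel mapping of the whole buffer */
>         buf->vaddr = dma_buf_vmap(buf->dbuf);
>
>         return buf->vaddr ? 0 : -EFAULT;
> }
>
> static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
> {
>         struct vb2_vmalloc_buf *buf = mem_priv;
>
>         dma_buf_vunmap(buf->dbuf, buf->vaddr);
>         buf->vaddr = NULL;
> }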
> Tomasz Stanislawski
> On 03/27/2012 02:48 PM, Dave Airlie wrote:
>> On Tue, Mar 27, 2012 at 1:25 PM, Tomasz Stanislawski
>> <t.stanislaws at samsung.com> wrote:
>>> Hi Everyone,
>>> I started preparing support for DMABUF in the VIVI allocator.
>>> I encountered a problem that may involve making an important
>>> design decision.
>>> Option I
>>> Use the existing dma_buf_attach/dma_buf_map_attachment mechanism.
>>> The allocator vb2-vmalloc (and thus VIVI) would be relatively easy
>>> to implement if one were allowed to call dma_buf_attach on a NULL
>>> device. AFAIK, the dmabuf code does not dereference this pointer.
>>> Permitting NULL as the device pointer would allow a DMABUF to
>>> be accessed by an importer not associated with any device (like VIVI).
>>> After obtaining the sglist, the importer would map it into its
>>> kernel address space using kmap, vm_map_ram, or remap_pfn_range
>>> (see the sketch below). Note that the memory would be pinned after
>>> calling dma_buf_map_attachment, so it will not be freed in parallel.
>>> Cache flushing would still be an unsolved problem, but the same
>>> situation occurs for non-NULL devices. It may be fixed by a future
>>> extension to the dmabuf API.
>>> I prefer this approach because it is compatible with the
>>> 'importer-maps-memory-for-importer' strategy.
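>>> For illustration, the importer-side mapping under Option I could
>>> look roughly like this (assuming each sg entry covers whole pages;
>>> error handling trimmed, names illustrative):
>>>
>>> static void *map_sgt_to_kernel(struct sg_table *sgt, unsigned int n_pages)
>>> {
>>>         struct page **pages;
>>>         struct scatterlist *sg;
>>>         unsigned int i, p = 0;
>>>         void *vaddr;
>>>
>>>         pages = kmalloc(n_pages * sizeof(*pages), GFP_KERNEL);
>>>         if (!pages)
>>>                 return NULL;
>>>
>>>         /* collect the pages behind the pinned sglist ... */
>>>         for_each_sg(sgt->sgl, sg, sgt->nents, i) {
>>>                 unsigned int j, n = sg->length >> PAGE_SHIFT;
>>>
>>>                 for (j = 0; j < n; j++)
>>>                         pages[p++] = nth_page(sg_page(sg), j);
>>>         }
>>>
>>>         /* ... and map them contiguously into the kernel */
>>>         vaddr = vm_map_ram(pages, n_pages, -1, PAGE_KERNEL);
>>>         kfree(pages);
>>>         return vaddr;
>>> }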
>>> Option II
>>> Recently, support for kernel CPU access to DMABUF was proposed by
>>> Daniel Vetter. It seems suitable for VIVI and the vb2-vmalloc
>>> allocator. However, there are some issues.
>>> 1. VIVI requires that the whole kernel mapping be contiguous in the
>>> vmalloc area, accessible through a single pointer to the first byte
>>> of the buffer. The interface proposed by Daniel involves calling
>>> dma_buf_kmap. This function takes a page number as its argument, but
>>> the spec does not guarantee that page n and page n+1 will be mapped
>>> to sequential addresses. I think this requirement should be added to
>>> the spec.
>>> 2. AFAIK, usage of the dma_buf_kmap interface does not require any
>>> attach operation, so CPU access is a more-or-less parallel mechanism
>>> for memory access. Calling dma_buf_map_attachment seems to be
>>> equivalent to calling dma_buf_begin_cpu_access for the whole buffer,
>>> plus kmap for all pages if the exporter is responsible for mapping
>>> memory at dma_buf_map_attachment. Is it really worth introducing a
>>> parallel mechanism?
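>>> To illustrate point 1: touching the whole buffer through the
>>> proposed interface would have to go page by page, roughly like this
>>> sketch (assuming the dma_buf_kmap/dma_buf_kunmap pair from Daniel's
>>> proposal):
>>>
>>> static void fill_buffer_page_by_page(struct dma_buf *dbuf, size_t size)
>>> {
>>>         unsigned long i, n_pages = size >> PAGE_SHIFT;
>>>
>>>         for (i = 0; i < n_pages; i++) {
>>>                 /* no guarantee page i and page i+1 map to adjacent addresses */
>>>                 void *vaddr = dma_buf_kmap(dbuf, i);
>>>
>>>                 memset(vaddr, 0, PAGE_SIZE);    /* generate one page of data */
>>>                 dma_buf_kunmap(dbuf, i, vaddr);
>>>         }
>>> }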
>>> Could you give me a hint as to which solution is better?
>> Option III - write a vmap interface, along the lines sketched below.
>> I'm using this for i915->udl mappings, though vmap is a limited
>> resource and I'm sure on ARM it's even more limited.
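>> Roughly, such an interface could take the following shape (a sketch
>> of the proposal, not the actual patch):
>>
>> /* new ops on struct dma_buf_ops:
>>  *        void *(*vmap)(struct dma_buf *);
>>  *        void (*vunmap)(struct dma_buf *, void *vaddr);
>>  */
>> void *dma_buf_vmap(struct dma_buf *dmabuf)
>> {
>>         /* exporter maps the whole buffer contiguously, if it can */
>>         if (dmabuf->ops->vmap)
>>                 return dmabuf->ops->vmap(dmabuf);
>>         return NULL;
>> }
>>
>> void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
>> {
>>         if (dmabuf->ops->vunmap)
>>                 dmabuf->ops->vunmap(dmabuf, vaddr);
>> }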