Hello,
The goal of these two patches is to add debug and trace capabilities to help
ongoing CMA development.
The first patch allows dumping the CMA bitmap status with a simple
"cat /sys/kernel/debug/cma".
The second adds trace events that can be used for performance analysis and/or
logging with the trace tools:
- to enable them: "echo 1 > /sys/kernel/debug/tracing/events/cma/enable"
- to read the log: "cat /sys/kernel/debug/tracing/trace"
Regards,
Benjamin
--
Benjamin Gaignard
Multimedia Working Group
Linaro.org <http://www.linaro.org/> | Open source software for ARM SoCs
Follow Linaro: Facebook <http://www.facebook.com/pages/Linaro> |
Twitter <http://twitter.com/#!/linaroorg> | Blog <http://www.linaro.org/linaro-blog/>
Hello everyone,
It looks like my last patch series were not clearly described in terms of
their kernel base. Selecting a '-next' kernel as a base was not the best
idea. I'm really sorry for the confusion. I've rebased these series again and
prepared 3 new branches. Feel free to download and give them a try.
Here are the kernel trees with latest version of the patches, ready to use:
Linux v3.1-rc10 with CMA v16 (and a few fixes):
git://git.infradead.org/users/kmpark/linux-2.6-samsung 3.1-rc10-cma-v16
Linux v3.1-rc10 with DMA mapping v3 (with DMA-IOMMU integration):
git://git.infradead.org/users/kmpark/linux-2.6-samsung 3.1-rc10-dma-v3
Linux v3.1-rc10 with both CMA v16 and DMA-mapping v3:
git://git.infradead.org/users/kmpark/linux-2.6-samsung 3.1-rc10-cma-v16-dma-v3
Best regards
--
Marek Szyprowski
Samsung Poland R&D Center
Welcome everyone again,
Once again I decided to post an updated version of the Contiguous Memory
Allocator patches.
This version mainly provides a bugfix for a very rare issue that might
have changed the migration type of CMA page blocks, dropping the CMA
features from the affected page block and causing memory allocations
to fail. The issue reported by Dave Hansen has also been fixed.
This version also introduces basic support for the x86 architecture,
which allows wide testing on KVM/QEMU emulators and common x86 boxes. I
hope this will result in wider testing, more comments and easier merging
to mainline.
I've also dropped the exemplary patch for the s5p-fimc platform device
private memory declaration and added one from real life: CMA device
private memory regions are defined for the s5p-mfc device to let it
allocate buffers from two memory banks (see the sketch below).
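For reference, a device-private CMA region of this shape is declared with
dma_declare_contiguous() from the dma-contiguous patches. The sketch below is
illustrative only: the device pointers, the 8 MiB size and the base/limit
values of 0 ("place it anywhere") are placeholders, not the values used in the
actual s5p-mfc patch, which pins each region to a specific memory bank:

  /* Board init sketch: reserve a private CMA region for each of the two
   * MFC device nodes (placeholder sizes; base/limit 0 means "anywhere",
   * while the real patch restricts each region to one memory bank). */
  static void __init example_mfc_reserve_mem(void)
  {
          if (dma_declare_contiguous(&s5p_device_mfc_l.dev, 8 * SZ_1M, 0, 0))
                  pr_err("mfc-l: CMA region reservation failed\n");

          if (dma_declare_contiguous(&s5p_device_mfc_r.dev, 8 * SZ_1M, 0, 0))
                  pr_err("mfc-r: CMA region reservation failed\n");
  }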
The ARM integration code has not changed since the last version; it
implements all the ideas that were discussed during the Linaro Sprint
meeting. Here are the details:
This version provides a complete integration of CMA with the DMA-mapping
subsystem on the ARM architecture. The issue caused by double mapping of
DMA pages and possible aliasing in the coherent memory mapping has
finally been resolved, both for the GFP_ATOMIC case (allocations come
from the coherent memory pool) and the non-GFP_ATOMIC case (allocations
come from CMA-managed areas).
For coherent, nommu, ARMv4 and ARMv5 systems the current DMA-mapping
implementation has been kept.
For ARMv6+ systems, CMA has been enabled and a special pool of coherent
memory for atomic allocations has been created. The size of this pool
defaults to DEFAULT_CONSISTENT_DMA_SIZE/8, but can be changed with the
coherent_pool kernel parameter (if really required).
All atomic allocations are served from this pool. I've made a small
simplification here: there is no separate pool for writecombine memory -
such requests are also served from the coherent pool. I don't think this
simplification is a problem - I found no driver that uses
dma_alloc_writecombine with the GFP_ATOMIC flag.
All non-atomic allocations are served from the CMA area. The kernel
mapping is updated to reflect the required memory attribute changes. This
is possible because, during early boot, all CMA areas are remapped with
4KiB pages in kernel low-memory.
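Roughly, the ARMv6+ allocation decision described above can be sketched as
follows. This is a simplified illustration, not the code from the patch;
__alloc_from_atomic_pool() and __remap_and_return() are made-up helper names,
only dma_alloc_from_contiguous() comes from the dma-contiguous patches:

  /* Simplified sketch of the dma_alloc path on ARMv6+ with CMA enabled */
  static void *__dma_alloc_sketch(struct device *dev, size_t size,
                                  dma_addr_t *handle, gfp_t gfp, pgprot_t prot)
  {
          struct page *page;

          if (!(gfp & __GFP_WAIT))
                  /* GFP_ATOMIC: serve from the small coherent pool */
                  return __alloc_from_atomic_pool(size, handle);

          /* non-atomic: take pages from a CMA area ... */
          page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
                                           get_order(size));
          if (!page)
                  return NULL;

          /* ... and update the 4KiB kernel low-memory mapping with the
           * required memory attributes before returning the buffer */
          return __remap_and_return(page, size, handle, prot);
  }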
This version has been tested on the Samsung S5PC110-based Goni machine
and the Exynos4 UniversalC210 board with various V4L2 multimedia drivers.
Coherent atomic allocations have been tested by manually enabling DMA
bounce for the s3c-sdhci device.
All patches are prepared for Linux Kernel next-20111005, which is based
on v3.1-rc8.
I hope that patches 1-7 can first be merged into the linux-mm kernel tree
to enable testing them in linux-next. Then, the ARM-related patches 8-9
can be scheduled for merging.
A few words for those who see CMA for the first time:
The Contiguous Memory Allocator (CMA) makes it possible for device
drivers to allocate big contiguous chunks of memory after the system
has booted.
The main difference from similar frameworks is that CMA allows the
memory region reserved for big chunk allocations to be transparently
reused as system memory, so no memory is wasted when no big chunk is
allocated. Once an allocation request is issued, the framework migrates
system pages to create the required big chunk of physically contiguous
memory.
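To give a feel for what this enables, here is a minimal, purely illustrative
sketch of a driver allocating a large contiguous buffer through the standard
DMA-mapping API, which the later patches back with CMA; the device pointer and
the 8 MiB size are made up for the example:

  /* Illustrative only: allocate a big physically contiguous DMA buffer */
  dma_addr_t dma_handle;
  void *vaddr;

  vaddr = dma_alloc_coherent(&pdev->dev, 8 * SZ_1M, &dma_handle, GFP_KERNEL);
  if (!vaddr)
          return -ENOMEM;

  /* ... program dma_handle into the hardware, access vaddr from the CPU ... */

  dma_free_coherent(&pdev->dev, 8 * SZ_1M, vaddr, dma_handle);

Without CMA such a request usually has to be satisfied from memory reserved at
boot time; with CMA the same region stays usable as movable system memory
until the allocation is actually made.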
For more information you can refer to nice LWN articles:
http://lwn.net/Articles/447405/ and http://lwn.net/Articles/450286/
as well as links to previous versions of the CMA framework.
The CMA framework was initially developed by Michal Nazarewicz at the
Samsung Poland R&D Center. Since version 9, I have taken over the
development, because Michal has left the company.
TODO (optional):
- implement support for contiguous memory areas placed in HIGHMEM zone
Best regards
Marek Szyprowski
Samsung Poland R&D Center
Links to previous versions of the patchset:
v15: <http://www.spinics.net/lists/linux-mm/msg23365.html>
v14: <http://www.spinics.net/lists/linux-media/msg36536.html>
v13: (internal, intentionally not released)
v12: <http://www.spinics.net/lists/linux-media/msg35674.html>
v11: <http://www.spinics.net/lists/linux-mm/msg21868.html>
v10: <http://www.spinics.net/lists/linux-mm/msg20761.html>
v9: <http://article.gmane.org/gmane.linux.kernel.mm/60787>
v8: <http://article.gmane.org/gmane.linux.kernel.mm/56855>
v7: <http://article.gmane.org/gmane.linux.kernel.mm/55626>
v6: <http://article.gmane.org/gmane.linux.kernel.mm/55626>
v5: (intentionally left out as CMA v5 was identical to CMA v4)
v4: <http://article.gmane.org/gmane.linux.kernel.mm/52010>
v3: <http://article.gmane.org/gmane.linux.kernel.mm/51573>
v2: <http://article.gmane.org/gmane.linux.kernel.mm/50986>
v1: <http://article.gmane.org/gmane.linux.kernel.mm/50669>
Changelog:
v16:
1. merged a fixup from Michal Nazarewicz to address comments from Dave
Hansen about checking if pfns belong to the same memory zone
2. merged a fix from Michal Nazarewicz for incorrect handling of pages
which belong to a page block that is in MIGRATE_ISOLATE state; in very
rare cases the migrate type of a page block might have been changed
from MIGRATE_CMA to MIGRATE_MOVABLE because of this bug
3. moved some common code to include/asm-generic
4. added support for x86 DMA-mapping framework for pci-dma hardware,
CMA can now be even more widely tested on KVM/QEMU and a lot of common
x86 boxes
5. rebased onto next-20111005 kernel tree, which includes changes in ARM
DMA-mapping subsystem (CONSISTENT_DMA_SIZE removal)
6. removed patch for CMA s5p-fimc device private regions (served only as
example) and provided the one that matches real life case - s5p-mfc
device
v15:
1. fixed calculation of the total memory after activating CMA area (was
broken from v12)
2. more code cleanup in drivers/base/dma-contiguous.c
3. added address limit for default CMA area
4. rewrote ARM DMA integration:
- removed "ARM: DMA: steal memory for DMA coherent mappings" patch
- kept current DMA mapping implementation for coherent, nommu and
ARMv4/ARMv5 systems
- enabled CMA for all ARMv6+ systems
- added separate, small pool for coherent atomic allocations, defaults
to CONSISTENT_DMA_SIZE/8, but can be changed with kernel parameter
coherent_pool=[size]
v14:
1. Merged with "ARM: DMA: steal memory for DMA coherent mappings"
patch, added support for GFP_ATOMIC allocations.
2. Added checks for NULL device pointer
v13: (internal, intentionally not released)
v12:
1. Fixed 2 nasty bugs in dma-contiguous allocator:
- alignment argument was not passed correctly
- range for dma_release_from_contiguous was not checked correctly
2. Added support for architecture-specific dma_contiguous_early_fixup()
function
3. CMA and DMA-mapping integration for ARM architecture has been
rewritten to take care of the memory aliasing issue that might
happen for newer ARM CPUs (mapping of the same pages with different
cache attributes is forbidden). TODO: add support for GFP_ATOMIC
allocations basing on the "ARM: DMA: steal memory for DMA coherent
mappings" patch and implement support for contiguous memory areas
that are placed in HIGHMEM zone
v11:
1. Removed genalloc usage and replaced it with direct calls to
bitmap_* functions, dropped patches that are not needed
anymore (genalloc extensions)
2. Moved all contiguous area management code from mm/cma.c
to drivers/base/dma-contiguous.c
3. Renamed cm_alloc/free to dma_alloc/release_from_contiguous
4. Introduced global, system wide (default) contiguous area
configured with kernel config and kernel cmdline parameters
5. Simplified initialization to just one function:
dma_declare_contiguous()
6. Added example of device private memory contiguous area
v10:
1. Rebased onto 3.0-rc2 and resolved all conflicts
2. Simplified CMA to be just a pure memory allocator, for use
with platform/bus-specific subsystems, like dma-mapping.
Removed all device-specific functions and calls.
3. Integrated with ARM DMA-mapping subsystem.
4. Code cleanup here and there.
5. Removed private context support.
v9: 1. Rebased onto 2.6.39-rc1 and resolved all conflicts
2. Fixed a bunch of nasty bugs that happened when the allocation
failed (mainly kernel oops due to NULL ptr dereference).
3. Introduced testing code: cma-regions compatibility layer and
videobuf2-cma memory allocator module.
v8: 1. The alloc_contig_range() function has now been separated from
CMA and put in page_allocator.c. This function tries to
migrate all LRU pages in specified range and then allocate the
range using alloc_contig_freed_pages().
2. Support for MIGRATE_CMA has been separated from the CMA code.
I have not tested if CMA works with ZONE_MOVABLE but I see no
reasons why it shouldn't.
3. I have added a @private argument when creating CMA contexts so
that one can reserve memory and not share it with the rest of
the system. This way, CMA acts only as an allocation algorithm.
v7: 1. A lot of functionality that handled driver->allocator_context
mapping has been removed from the patchset. This is not to say
that this code is not needed, it's just not worth posting
everything in one patchset.
Currently, CMA is "just" an allocator. It uses its own
migratetype (MIGRATE_CMA) for defining ranges of pageblocks
which behave just like ZONE_MOVABLE but, unlike the latter, can
be put in arbitrary places.
2. The migration code that was introduced in the previous version
actually started working.
v6: 1. Most importantly, v6 introduces support for memory migration.
The implementation is not yet complete though.
Migration support means that when CMA is not using memory
reserved for it, page allocator can allocate pages from it.
When CMA wants to use the memory, the pages have to be moved
and/or evicted as to make room for CMA.
To make it possible it must be guaranteed that only movable and
reclaimable pages are allocated in CMA controlled regions.
This is done by introducing a MIGRATE_CMA migrate type that
guarantees exactly that.
Some of the migration code is "borrowed" from Kamezawa
Hiroyuki's alloc_contig_pages() implementation. The main
difference is that, thanks to the MIGRATE_CMA migrate type, CMA
assumes that memory it controls is always movable or
reclaimable, so it makes allocation decisions regardless of
whether some pages are actually allocated, and migrates them
if needed.
The most interesting patches from the patchset that implement
the functionality are:
09/13: mm: alloc_contig_free_pages() added
10/13: mm: MIGRATE_CMA migration type added
11/13: mm: MIGRATE_CMA isolation functions added
12/13: mm: cma: Migration support added [wip]
Currently, kernel panics in some situations which I am trying
to investigate.
2. cma_pin() and cma_unpin() functions have been added (after
a conversation with Johan Mossberg). The idea is that whenever
the hardware is not using the memory (no transaction is in progress) the
chunk can be moved around. This would allow defragmentation to
be implemented if desired. No defragmentation algorithm is
provided at this time.
3. Sysfs support has been replaced with debugfs. I always felt
unsure about the sysfs interface and when Greg KH pointed it
out I finally got to rewrite it to debugfs.
v5: (intentionally left out as CMA v5 was identical to CMA v4)
v4: 1. The "asterisk" flag has been removed in favour of requiring
that platform will provide a "*=<regions>" rule in the map
attribute.
2. The terminology has been changed slightly, renaming "kind" to
"type" of memory. In the previous revisions, the documentation
indicated that device drivers define memory kinds and now,
v3: 1. The command line parameters have been removed (and moved to
a separate patch, the fourth one). As a consequence, the
cma_set_defaults() function has been changed -- it no longer
accepts a string with list of regions but an array of regions.
2. The "asterisk" attribute has been removed. Now, each region
has an "asterisk" flag which lets one specify whether this
region should be considered an "asterisk" region.
3. SysFS support has been moved to a separate patch (the third one
in the series) and now also includes list of regions.
v2: 1. The "cma_map" command line option has been removed. In exchange,
a SysFS entry has been created under kernel/mm/contiguous.
The intended way of specifying the attributes is
a cma_set_defaults() function called by platform initialisation
code. The "regions" attribute (the string specified by the "cma"
command line parameter) can be overwritten with a command line
parameter; the other attributes can be changed at run-time
using the SysFS entries.
2. The behaviour of the "map" attribute has been modified
slightly. Currently, if no rule matches a given device it is
assigned regions specified by the "asterisk" attribute. It is
by default built from the region names given in "regions"
attribute.
3. Devices can register private regions as well as regions that
can be shared but are not reserved using standard CMA
mechanisms. A private region has no name and can be accessed
only by devices that have the pointer to it.
4. The way allocators are registered has changed. Currently,
a cma_allocator_register() function is used for that purpose.
Moreover, allocators are attached to regions the first time
memory is registered from the region or when the allocator is
registered, which means that allocators can be dynamic modules
that are loaded after the kernel has booted (of course, it won't be
possible to allocate a chunk of memory from a region if its
allocator is not loaded).
5. Index of new functions:
+static inline dma_addr_t __must_check
+cma_alloc_from(const char *regions, size_t size,
+ dma_addr_t alignment)
+static inline int
+cma_info_about(struct cma_info *info, const char *regions)
+int __must_check cma_region_register(struct cma_region *reg);
+dma_addr_t __must_check
+cma_alloc_from_region(struct cma_region *reg,
+ size_t size, dma_addr_t alignment);
+static inline dma_addr_t __must_check
+cma_alloc_from(const char *regions,
+ size_t size, dma_addr_t alignment);
+int cma_allocator_register(struct cma_allocator *alloc);
Patches in this patchset:
mm: move some functions from memory_hotplug.c to page_isolation.c
mm: alloc_contig_freed_pages() added
Code "stolen" from Kamezawa. The first patch just moves code
around and the second provides a function that "allocates" already
freed memory.
mm: alloc_contig_range() added
This is what Kamezawa asked: a function that tries to migrate all
pages from a given range and then uses alloc_contig_freed_pages()
(defined by the previous commit) to allocate those pages.
mm: MIGRATE_CMA migration type added
mm: MIGRATE_CMA isolation functions added
Introduction of the new migratetype and support for it in CMA.
MIGRATE_CMA works similarly to ZONE_MOVABLE, except that almost any
memory range can be marked as one.
mm: cma: Contiguous Memory Allocator added
The core CMA code. It manages CMA contexts and performs memory
allocations.
X86: integrate CMA with DMA-mapping subsystem
ARM: integrate CMA with dma-mapping subsystem
The main clients of the CMA framework. CMA serves as an alloc_pages()
replacement.
ARM: Samsung: use CMA for 2 memory banks for s5p-mfc device
Use CMA device private memory regions instead of a custom solution
based on memblock_reserve() + dma_declare_coherent().
Patch summary:
KAMEZAWA Hiroyuki (2):
mm: move some functions from memory_hotplug.c to page_isolation.c
mm: alloc_contig_freed_pages() added
Marek Szyprowski (4):
drivers: add Contiguous Memory Allocator
ARM: integrate CMA with DMA-mapping subsystem
ARM: Samsung: use CMA for 2 memory banks for s5p-mfc device
X86: integrate CMA with DMA-mapping subsystem
Michal Nazarewicz (3):
mm: alloc_contig_range() added
mm: MIGRATE_CMA migration type added
mm: MIGRATE_CMA isolation functions added
arch/Kconfig | 3 +
arch/arm/Kconfig | 2 +
arch/arm/include/asm/dma-contiguous.h | 16 ++
arch/arm/include/asm/mach/map.h | 1 +
arch/arm/mm/dma-mapping.c | 362 +++++++++++++++++++++++++------
arch/arm/mm/init.c | 8 +
arch/arm/mm/mm.h | 3 +
arch/arm/mm/mmu.c | 29 ++-
arch/arm/plat-s5p/dev-mfc.c | 51 +----
arch/x86/Kconfig | 1 +
arch/x86/include/asm/dma-contiguous.h | 13 +
arch/x86/include/asm/dma-mapping.h | 4 +
arch/x86/kernel/pci-dma.c | 18 ++-
arch/x86/kernel/pci-nommu.c | 8 +-
arch/x86/kernel/setup.c | 2 +
drivers/base/Kconfig | 79 +++++++
drivers/base/Makefile | 1 +
drivers/base/dma-contiguous.c | 386 +++++++++++++++++++++++++++++++++
include/asm-generic/dma-contiguous.h | 27 +++
include/linux/device.h | 4 +
include/linux/dma-contiguous.h | 106 +++++++++
include/linux/mmzone.h | 57 +++++-
include/linux/page-isolation.h | 53 ++++-
mm/Kconfig | 8 +-
mm/compaction.c | 10 +
mm/memory_hotplug.c | 111 ----------
mm/page_alloc.c | 317 +++++++++++++++++++++++++--
mm/page_isolation.c | 131 +++++++++++-
28 files changed, 1522 insertions(+), 289 deletions(-)
create mode 100644 arch/arm/include/asm/dma-contiguous.h
create mode 100644 arch/x86/include/asm/dma-contiguous.h
create mode 100644 drivers/base/dma-contiguous.c
create mode 100644 include/asm-generic/dma-contiguous.h
create mode 100644 include/linux/dma-contiguous.h
--
1.7.1.569.g6f426
Hello,
This short patch series is a snapshot of my proof-of-concept integration
of the generic IOMMU interface with the DMA-mapping framework for the ARM
architecture and the Samsung IOMMU driver.
In this version I rebased the code onto the updated DMA-mapping
framework posted a few minutes ago. Management of the IO address space
has been moved from genalloc to a pure bitmap-based allocator. I've also
added support for mapping a scatterlist with dma_map_sg/dma_unmap_sg. The
DMA scatterlist interface turned out to be a tricky task. A scatterlist
may describe a set of disjoint buffers that cannot be easily merged
together if they don't start and end on a page boundary. In such a case
we need to allocate more than one buffer in the IO address space and map
the respective pages. This results in code that might be a bit hard to
understand on the first try (see the sketch below).
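The core of the splitting logic can be sketched like this. It is a simplified
illustration of the idea, not the actual patch code; __map_sg_chunk() is a
made-up helper standing in for "allocate an IOVA range and map these pages":

  /* Sketch: walk the scatterlist and start a new IO-space buffer whenever
   * two consecutive entries do not join on a page boundary. */
  static int sketch_map_sg(struct device *dev, struct scatterlist *sg,
                           int nents)
  {
          struct scatterlist *s, *start = sg;
          unsigned int size = sg->length;
          int i;

          for_each_sg(sg_next(sg), s, nents - 1, i) {
                  if (s->offset || (size & ~PAGE_MASK)) {
                          /* cannot be merged: map what we have so far */
                          if (__map_sg_chunk(dev, start, size) < 0)
                                  return 0;
                          start = s;
                          size = 0;
                  }
                  size += s->length;
          }
          return __map_sg_chunk(dev, start, size) < 0 ? 0 : nents;
  }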
Right now the code supports only 4KiB pages.
The patches have been tested on the Samsung Exynos4 platform and the FIMC
device. The Samsung IOMMU driver has been provided for reference. It is
still work-in-progress code, but because of my holidays I wanted to avoid
delaying it further.
Here is the link to the initial version of my ARM & DMA-mapping
integration patches: http://www.spinics.net/lists/linux-mm/msg19856.html
All the patches will be available on the following GIT tree:
git://git.infradead.org/users/kmpark/linux-2.6-samsung dma-mapping-v3
Git web interface:
http://git.infradead.org/users/kmpark/linux-2.6-samsung/shortlog/refs/heads…
Future:
1. Add all missing operations for IOMMU mappings (map_single/page/sync_*)
2. Move sync_* operations into separate function for better code sharing
between iommu and non-iommu dma-mapping code
3. Rebase onto CMA patches and solve the issue with double mapping and
page attributes
4. Add support for pages larger than 4KiB.
Please note that this is a very early version of the patches, definitely
NOT intended for merging. I just wanted to make sure that the direction
is right and to share the code with others who might want to cooperate on
dma-mapping improvements.
Best regards
--
Marek Szyprowski
Samsung Poland R&D Center
Patch summary:
Andrzej Pietrasiewicz (1):
ARM: Samsung: update/rewrite Samsung SYSMMU (IOMMU) driver
Marek Szyprowski (1):
ARM: initial proof-of-concept IOMMU mapper for DMA-mapping
arch/arm/Kconfig | 7 +
arch/arm/include/asm/device.h | 4 +
arch/arm/include/asm/dma-iommu.h | 29 +
arch/arm/mach-exynos4/Kconfig | 5 -
arch/arm/mach-exynos4/Makefile | 2 +-
arch/arm/mach-exynos4/clock.c | 47 +-
arch/arm/mach-exynos4/dev-sysmmu.c | 609 +++++++++++------
arch/arm/mach-exynos4/include/mach/irqs.h | 34 +-
arch/arm/mach-exynos4/include/mach/sysmmu.h | 46 --
arch/arm/mm/dma-mapping.c | 504 ++++++++++++++-
arch/arm/mm/vmregion.h | 2 +-
arch/arm/plat-s5p/Kconfig | 21 +-
arch/arm/plat-s5p/include/plat/sysmmu.h | 119 ++--
arch/arm/plat-s5p/sysmmu.c | 855 ++++++++++++++++++------
arch/arm/plat-samsung/include/plat/devs.h | 1 -
arch/arm/plat-samsung/include/plat/fimc-core.h | 25 +
16 files changed, 1724 insertions(+), 586 deletions(-)
create mode 100644 arch/arm/include/asm/dma-iommu.h
delete mode 100644 arch/arm/mach-exynos4/include/mach/sysmmu.h
--
1.7.1.569.g6f426
[Hopefully the last version to linaro-mm-sig before I send it out to
upstream linux lists]
This is the first step in defining a buffer sharing framework.
A new dma_buf buffer object is added, with hooks to allow for easy sharing of
this buffer object across devices.
The framework allows:
- a new buffer-object to be created with fixed size.
- different devices to 'attach' themselves to this buffer, to facilitate
backing storage negotiation, using dma_buf_attach() API.
- association of a file pointer with each user-buffer and associated
allocator-defined operations on that buffer. This operation is called the
'export' operation.
- this exported buffer-object to be shared with the other entity by asking for
its 'file-descriptor (fd)', and sharing the fd across.
- a received fd to get the buffer object back, where it can be accessed using
the associated exporter-defined operations.
- the exporter and user to share the scatterlist using get_scatterlist and
put_scatterlist operations.
At least one dma_buf_attach() call is required to be made prior to calling
the get_scatterlist() operation.
A couple of building blocks in get_scatterlist() are added to ease the
introduction of syncing across the exporter and users, and late allocation by
the exporter.
An mmap() file operation is provided for the associated 'fd', as a wrapper
over the optional allocator-defined mmap(), to be used by devices that might
need one.
Please read documentation added in this patch for more details.
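To make the flow concrete, here is a rough importer-side usage sketch built
only from the interfaces introduced in this patch. Error handling is trimmed,
and 'fd' (received from the exporting side via userspace) and 'my_dev' are
assumptions for the example:

  /* Importer sketch: turn a received fd into a device-mapped scatterlist */
  struct dma_buf *dmabuf;
  struct dma_buf_attachment *attach;
  struct scatterlist *sg;
  int nents;

  dmabuf = dma_buf_get(fd);                  /* take a reference from the fd */
  attach = dma_buf_attach(dmabuf, my_dev);   /* announce device constraints  */

  /* ask for access right before starting DMA */
  sg = dmabuf->ops->get_scatterlist(attach, DMA_BIDIRECTIONAL, &nents);

  /* ... program the device with the sg list and run the DMA ... */

  dmabuf->ops->put_scatterlist(attach, sg, nents);   /* signal end-of-dma */
  dma_buf_detach(dmabuf, attach);
  dma_buf_put(dmabuf);                       /* drop the reference */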
The idea was first mooted at the Linaro memory management mini-summit in
Budapest in May 2011, as part of multiple things needed for a 'unified memory
management framework'. It took a more concrete shape at Linaro memory-management
mini-summit in Cambridge, Aug 2011.
This is based on design suggestions from many people at both the mini-summits,
most notably from Arnd Bergmann <arnd(a)arndb.de>, Rob Clark <rob(a)ti.com> and
Daniel Vetter <daniel(a)ffwll.ch>.
The implementation is inspired from proof-of-concept patch-set from
Tomasz Stanislawski <t.stanislaws(a)samsung.com>, who demonstrated buffer sharing
between two v4l2 devices.
------
v3:
- added dma_buf to dma_attachment and updated {get,put}_scatterlist accordingly.
- added locking mechanism in struct dma_buf, and used around attach-detach APIs.
- dmabuf->ops->attach/detach made optional.
- removed dma_buf_attr_flags and replaced dma_buf_optype with dma_data_direction.
- made dma_buf->ops const in struct dma_buf.
- updated comments for get_scatterlist.
- added documentation.
v2:
- added attach() / detach() dma_buf_ops, and dma_buf_attach(),dma_buf_detach().
- added handling of list of attachment in the dma_buf central API itself.
- corrected copyright information.
v1: initial RFC.
Signed-off-by: Sumit Semwal <sumit.semwal(a)linaro.org>
Signed-off-by: Sumit Semwal <sumit.semwal(a)ti.com>
---
Documentation/dma-buf-sharing.txt | 210 ++++++++++++++++++++++++++++++++
drivers/base/Kconfig | 10 ++
drivers/base/Makefile | 1 +
drivers/base/dma-buf.c | 242 +++++++++++++++++++++++++++++++++++++
include/linux/dma-buf.h | 162 +++++++++++++++++++++++++
5 files changed, 625 insertions(+), 0 deletions(-)
create mode 100644 Documentation/dma-buf-sharing.txt
create mode 100644 drivers/base/dma-buf.c
create mode 100644 include/linux/dma-buf.h
diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
new file mode 100644
index 0000000..4da6644
--- /dev/null
+++ b/Documentation/dma-buf-sharing.txt
@@ -0,0 +1,210 @@
+ DMA Buffer Sharing API Guide
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Sumit Semwal
+ <sumit dot semwal at linaro dot org>
+ <sumit dot semwal at ti dot com>
+
+This document serves as a guide to device-driver writers on what is the dma-buf
+buffer sharing API, how to use it for exporting and using shared buffers.
+
+Any device driver which wishes to be a part of dma buffer sharing, can do so as
+either the 'exporter' of buffers, or the 'user' of buffers.
+
+Say a driver A wants to use buffers created by driver B, then we call B as the
+exporter, and A as buffer-user.
+
+The exporter
+- implements and manages operations[1] for the buffer
+- allows other users to share the buffer by using dma_buf sharing APIs,
+- manages the details of buffer allocation,
+- decides about the actual backing storage where this allocation happens,
+- takes care of any migration of scatterlist - for all (shared) users of this
+ buffer,
+- optionally, provides mmap capability for drivers that need it.
+
+The buffer-user
+- is one of (many) sharing users of the buffer.
+- doesn't need to worry about how the buffer is allocated, or where.
+- needs a mechanism to get access to the scatterlist that makes up this buffer
+ in memory, mapped into its own address space, so it can access the same area
+ of memory.
+
+
+The dma_buf buffer sharing API usage contains the following steps:
+
+1. Exporter announces that it wishes to export a buffer
+2. Userspace gets the file descriptor associated with the exported buffer, and
+ passes it around to potential buffer-users based on use case
+3. Each buffer-user 'connects' itself to the buffer
+4. When needed, buffer-user requests access to the buffer from exporter
+5. When finished with its use, the buffer-user notifies end-of-dma to exporter
+6. when buffer-user is done using this buffer completely, it 'disconnects'
+ itself from the buffer.
+
+
+1. Exporter's announcement of buffer export
+
+ The buffer exporter announces its wish to export a buffer. In this, it
+ connects its own private buffer data, provides implementation for operations
+ that can be performed on the exported dma_buf, and flags for the file
+ associated with this buffer.
+
+ Interface:
+ struct dma_buf *dma_buf_export(void *priv, struct dma_buf_ops *ops,
+ int flags)
+
+ If this succeeds, dma_buf_export allocates a dma_buf structure, and returns a
+ pointer to the same. It also associates an anon file with this buffer, so it
+ can be exported. On failure to allocate the dma_buf object, it returns NULL.
+
+2. Userspace gets a handle to pass around to potential buffer-users
+
+ Userspace entity requests for a file-descriptor (fd) which is a handle to the
+ anon file associated with the buffer. It can then share the fd with other
+ drivers and/or processes.
+
+ Interface:
+ int dma_buf_fd(struct dma_buf *dmabuf)
+
+ This API installs an fd for the anon file associated with this buffer;
+ returns either 'fd', or error.
+
+3. Each buffer-user 'connects' itself to the buffer
+
+ Each buffer-user now gets a reference to the buffer, using the fd passed to
+ it.
+
+ Interface:
+ struct dma_buf *dma_buf_get(int fd)
+
+ This API will return a reference to the dma_buf, and increment refcount for
+ it.
+
+ After this, the buffer-user needs to attach its device with the buffer, which
+ helps the exporter to know of device buffer constraints.
+
+ Interface:
+ struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
+ struct device *dev)
+
+ This API returns reference to an attachment structure, which is then used
+ for scatterlist operations. It will optionally call the 'attach' dma_buf
+ operation, if provided by the exporter.
+
+ The dma-buf sharing framework does the book-keeping bits related to keeping
+ the list of all attachments to a buffer.
+
+Till this stage, the buffer-exporter has the option to choose not to actually
+allocate the backing storage for this buffer, but wait for the first buffer-user
+to request use of buffer for allocation.
+
+
+4. When needed, buffer-user requests access to the buffer
+
+ Whenever a buffer-user wants to use the buffer for any dma, it asks for
+ access to the buffer using dma_buf->ops->get_scatterlist operation. Atleast
+ one attach to the buffer should have happened before get_scatterlist can be
+ called.
+
+ Interface: [member of struct dma_buf_ops]
+ struct scatterlist * (*get_scatterlist)(struct dma_buf_attachment *,
+ enum dma_data_direction,
+ int* nents);
+
+ It is one of the buffer operations that must be implemented by the exporter.
+ It should return the scatterlist for this buffer, mapped into caller's address
+ space.
+
+ If this is being called for the first time, the exporter can now choose to
+ scan through the list of attachments for this buffer, collate the requirements
+ of the attached devices, and choose an appropriate backing storage for the
+ buffer.
+
+ Based on enum dma_data_direction, it might be possible to have multiple users
+ accessing at the same time (for reading, maybe), or any other kind of sharing
+ that the exporter might wish to make available to buffer-users.
+
+
+5. When finished, the buffer-user notifies end-of-dma to exporter
+
+ Once the dma for the current buffer-user is over, it signals 'end-of-dma' to
+ the exporter using the dma_buf->ops->put_scatterlist() operation.
+
+ Interface:
+ void (*put_scatterlist)(struct dma_buf_attachment *, struct scatterlist *,
+ int nents);
+
+ put_scatterlist signifies the end-of-dma for the attachment provided.
+
+
+6. when buffer-user is done using this buffer, it 'disconnects' itself from the
+ buffer.
+
+ After the buffer-user has no more interest in using this buffer, it should
+ disconnect itself from the buffer:
+
+ - it first detaches itself from the buffer.
+
+ Interface:
+ void dma_buf_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *dmabuf_attach);
+
+ This API removes the attachment from the list in dmabuf, and optionally calls
+ dma_buf->ops->detach(), if provided by exporter, for any housekeeping bits.
+
+ - Then, the buffer-user returns the buffer reference to exporter.
+
+ Interface:
+ void dma_buf_put(struct dma_buf *dmabuf);
+
+ This API then reduces the refcount for this buffer.
+
+ If, as a result of this call, the refcount becomes 0, the 'release' file
+ operation related to this fd is called. It calls the dmabuf->ops->release()
+ operation in turn, and frees the memory allocated for dmabuf when exported.
+
+NOTES:
+- Importance of attach-detach and {get,put}_scatterlist operation pairs
+ The attach-detach calls allow the exporter to figure out backing-storage
+ constraints for the currently-interested devices. This allows preferential
+ allocation, and/or migration of pages across different types of storage
+ available, if possible.
+
+ Bracketing of dma access with {get,put}_scatterlist operations is essential
+ to allow just-in-time backing of storage, and migration mid-way through a
+ use-case.
+
+- Migration of backing storage if needed
+ After
+ - atleast one get_scatterlist has happened,
+ - and the backing storage has been allocated for this buffer,
+ If another new buffer-user intends to attach itself to this buffer, it might
+ be allowed, if possible for the exporter.
+
+ In case it is allowed by the exporter:
+ if the new buffer-user has stricter 'backing-storage constraints', and the
+ exporter can handle these constraints, the exporter can just stall on the
+ get_scatterlist till all outstanding access is completed (as signalled by
+ put_scatterlist).
+ Once all ongoing access is completed, the exporter could potentially move
+ the buffer to the stricter backing-storage, and then allow further
+ {get,put}_scatterlist operations from any buffer-user from the migrated
+ backing-storage.
+
+ If the exporter cannot fulfill the backing-storage constraints of the new
+ buffer-user device as requested, dma_buf_attach() would return an error to
+ denote non-compatibility of the new buffer-sharing request with the current
+ buffer.
+
+ If the exporter chooses not to allow an attach() operation once a
+ get_scatterlist has been called, it simply returns an error.
+
+- mmap file operation
+ An mmap() file operation is provided for the fd associated with the buffer.
+ If the exporter defines an mmap operation, the mmap() fop calls this to allow
+ mmap for devices that might need it; if not, it returns an error.
+
+References:
+[1] struct dma_buf_ops in include/linux/dma-buf.h
+[2] All interfaces mentioned above defined in include/linux/dma-buf.h
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 21cf46f..07d8095 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -174,4 +174,14 @@ config SYS_HYPERVISOR
source "drivers/base/regmap/Kconfig"
+config DMA_SHARED_BUFFER
+ bool "Buffer framework to be shared between drivers"
+ default n
+ depends on ANON_INODES
+ help
+ This option enables the framework for buffer-sharing between
+ multiple drivers. A buffer is associated with a file using driver
+ APIs extension; the file's descriptor can then be passed on to other
+ driver.
+
endmenu
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 99a375a..d0df046 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_DEVTMPFS) += devtmpfs.o
obj-y += power/
obj-$(CONFIG_HAS_DMA) += dma-mapping.o
obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
obj-$(CONFIG_ISA) += isa.o
obj-$(CONFIG_FW_LOADER) += firmware_class.o
obj-$(CONFIG_NUMA) += node.o
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
new file mode 100644
index 0000000..58c51a0
--- /dev/null
+++ b/drivers/base/dma-buf.c
@@ -0,0 +1,242 @@
+/*
+ * Framework for buffer objects that can be shared across devices/subsystems.
+ *
+ * Copyright(C) 2011 Linaro Limited. All rights reserved.
+ * Author: Sumit Semwal <sumit.semwal(a)ti.com>
+ *
+ * Many thanks to linaro-mm-sig list, and specially
+ * Arnd Bergmann <arnd(a)arndb.de>, Rob Clark <rob(a)ti.com> and
+ * Daniel Vetter <daniel(a)ffwll.ch> for their support in creation and
+ * refining of this idea.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/fs.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include <linux/anon_inodes.h>
+
+static inline int is_dma_buf_file(struct file *);
+
+static int dma_buf_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct dma_buf *dmabuf;
+
+ if (!is_dma_buf_file(file))
+ return -EINVAL;
+
+ dmabuf = file->private_data;
+
+ if (!dmabuf->ops->mmap)
+ return -EINVAL;
+
+ return dmabuf->ops->mmap(dmabuf, vma);
+}
+
+static int dma_buf_release(struct inode *inode, struct file *file)
+{
+ struct dma_buf *dmabuf;
+
+ if (!is_dma_buf_file(file))
+ return -EINVAL;
+
+ dmabuf = file->private_data;
+
+ dmabuf->ops->release(dmabuf);
+ kfree(dmabuf);
+ return 0;
+}
+
+static const struct file_operations dma_buf_fops = {
+ .mmap = dma_buf_mmap,
+ .release = dma_buf_release,
+};
+
+/*
+ * is_dma_buf_file - Check if struct file* is associated with dma_buf
+ */
+static inline int is_dma_buf_file(struct file *file)
+{
+ return file->f_op == &dma_buf_fops;
+}
+
+/**
+ * dma_buf_export - Creates a new dma_buf, and associates an anon file
+ * with this buffer,so it can be exported.
+ * Also connect the allocator specific data and ops to the buffer.
+ *
+ * @priv: [in] Attach private data of allocator to this buffer
+ * @ops: [in] Attach allocator-defined dma buf ops to the new buffer.
+ * @flags: [in] mode flags for the file.
+ *
+ * Returns, on success, a newly created dma_buf object, which wraps the
+ * supplied private data and operations for dma_buf_ops. On failure to
+ * allocate the dma_buf object, it can return NULL.
+ *
+ */
+struct dma_buf *dma_buf_export(void *priv, struct dma_buf_ops *ops,
+ int flags)
+{
+ struct dma_buf *dmabuf;
+ struct file *file;
+
+ BUG_ON(!priv || !ops);
+
+ dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL);
+ if (dmabuf == NULL)
+ return dmabuf;
+
+ dmabuf->priv = priv;
+ dmabuf->ops = ops;
+
+ file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags);
+
+ dmabuf->file = file;
+
+ mutex_init(&dmabuf->lock);
+ INIT_LIST_HEAD(&dmabuf->attachments);
+
+ return dmabuf;
+}
+EXPORT_SYMBOL(dma_buf_export);
+
+
+/**
+ * dma_buf_fd - returns a file descriptor for the given dma_buf
+ * @dmabuf: [in] pointer to dma_buf for which fd is required.
+ *
+ * On success, returns an associated 'fd'. Else, returns error.
+ */
+int dma_buf_fd(struct dma_buf *dmabuf)
+{
+ int error, fd;
+
+ if (!dmabuf->file)
+ return -EINVAL;
+
+ error = get_unused_fd_flags(0);
+ if (error < 0)
+ return error;
+ fd = error;
+
+ fd_install(fd, dmabuf->file);
+
+ return fd;
+}
+EXPORT_SYMBOL(dma_buf_fd);
+
+/**
+ * dma_buf_get - returns the dma_buf structure related to an fd
+ * @fd: [in] fd associated with the dma_buf to be returned
+ *
+ * On success, returns the dma_buf structure associated with an fd; uses
+ * file's refcounting done by fget to increase refcount. returns ERR_PTR
+ * otherwise.
+ */
+struct dma_buf *dma_buf_get(int fd)
+{
+ struct file *file;
+
+ file = fget(fd);
+
+ if (!file)
+ return ERR_PTR(-EBADF);
+
+ if (!is_dma_buf_file(file)) {
+ fput(file);
+ return ERR_PTR(-EINVAL);
+ }
+
+ return file->private_data;
+}
+EXPORT_SYMBOL(dma_buf_get);
+
+/**
+ * dma_buf_put - decreases refcount of the buffer
+ * @dmabuf: [in] buffer to reduce refcount of
+ *
+ * Uses file's refcounting done implicitly by fput()
+ */
+void dma_buf_put(struct dma_buf *dmabuf)
+{
+ BUG_ON(!dmabuf->file);
+
+ fput(dmabuf->file);
+
+ return;
+}
+EXPORT_SYMBOL(dma_buf_put);
+
+/**
+ * dma_buf_attach - Add the device to dma_buf's attachments list; optionally,
+ * calls attach() of dma_buf_ops to allow device-specific attach functionality
+ * @dmabuf: [in] buffer to attach device to.
+ * @dev: [in] device to be attached.
+ *
+ * Returns struct dma_buf_attachment * for this attachment; may return NULL.
+ *
+ */
+struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
+ struct device *dev)
+{
+ struct dma_buf_attachment *attach;
+ int ret;
+
+ BUG_ON(!dmabuf || !dev);
+
+ mutex_lock(&dmabuf->lock);
+
+ attach = kzalloc(sizeof(struct dma_buf_attachment), GFP_KERNEL);
+ if (attach == NULL)
+ goto err_alloc;
+
+ attach->dev = dev;
+ if (dmabuf->ops->attach) {
+ ret = dmabuf->ops->attach(dmabuf, dev, attach);
+ if (ret)
+ goto err_attach;
+ }
+ list_add(&attach->node, &dmabuf->attachments);
+
+err_alloc:
+ mutex_unlock(&dmabuf->lock);
+ return attach;
+err_attach:
+ kfree(attach);
+ mutex_unlock(&dmabuf->lock);
+ return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(dma_buf_attach);
+
+/**
+ * dma_buf_detach - Remove the given attachment from dmabuf's attachments list;
+ * optionally calls detach() of dma_buf_ops for device-specific detach
+ * @dmabuf: [in] buffer to detach from.
+ * @attach: [in] attachment to be detached; is free'd after this call.
+ *
+ */
+void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
+{
+ BUG_ON(!dmabuf || !attach);
+
+ mutex_lock(&dmabuf->lock);
+ list_del(&attach->node);
+ if (dmabuf->ops->detach)
+ dmabuf->ops->detach(dmabuf, attach);
+
+ kfree(attach);
+ mutex_unlock(&dmabuf->lock);
+ return;
+}
+EXPORT_SYMBOL(dma_buf_detach);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
new file mode 100644
index 0000000..5bdf16a
--- /dev/null
+++ b/include/linux/dma-buf.h
@@ -0,0 +1,162 @@
+/*
+ * Header file for dma buffer sharing framework.
+ *
+ * Copyright(C) 2011 Linaro Limited. All rights reserved.
+ * Author: Sumit Semwal <sumit.semwal(a)ti.com>
+ *
+ * Many thanks to linaro-mm-sig list, and specially
+ * Arnd Bergmann <arnd(a)arndb.de>, Rob Clark <rob(a)ti.com> and
+ * Daniel Vetter <daniel(a)ffwll.ch> for their support in creation and
+ * refining of this idea.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __DMA_BUF_H__
+#define __DMA_BUF_H__
+
+#include <linux/file.h>
+#include <linux/err.h>
+#include <linux/device.h>
+#include <linux/scatterlist.h>
+#include <linux/list.h>
+#include <linux/dma-mapping.h>
+
+struct dma_buf;
+
+/**
+ * struct dma_buf_attachment - holds device-buffer attachment data
+ * @dmabuf: buffer for this attachment.
+ * @dev: device attached to the buffer.
+ * @node: list_head to allow manipulation of list of dma_buf_attachment.
+ * @priv: exporter-specific attachment data.
+ */
+struct dma_buf_attachment {
+ struct dma_buf *dmabuf;
+ struct device *dev;
+ struct list_head node;
+ void *priv;
+};
+
+/**
+ * struct dma_buf_ops - operations possible on struct dma_buf
+ * @create: creates a struct dma_buf of a fixed size. Actual allocation
+ * does not happen here.
+ * @attach: allows different devices to 'attach' themselves to the given
+ * buffer. It might return -EBUSY to signal that backing storage
+ * is already allocated and incompatible with the requirements
+ * of requesting device. [optional]
+ * @detach: detach a given device from this buffer. [optional]
+ * @get_scatterlist: returns list of scatter pages allocated, increases
+ * usecount of the buffer. Requires atleast one attach to be
+ * called before. Returned sg list should already be mapped
+ * into _device_ address space.
+ * @put_scatterlist: decreases usecount of buffer, might deallocate scatter
+ * pages.
+ * @mmap: memory map this buffer - optional.
+ * @release: release this buffer; to be called after the last dma_buf_put.
+ * @sync_sg_for_cpu: sync the sg list for cpu.
+ * @sync_sg_for_device: synch the sg list for device.
+ */
+struct dma_buf_ops {
+ int (*attach)(struct dma_buf *, struct device *,
+ struct dma_buf_attachment *);
+
+ void (*detach)(struct dma_buf *, struct dma_buf_attachment *);
+
+ /* For {get,put}_scatterlist below, any specific buffer attributes
+ * required should get added to device_dma_parameters accessible
+ * via dev->dma_params.
+ */
+ struct scatterlist * (*get_scatterlist)(struct dma_buf_attachment *,
+ enum dma_data_direction,
+ int *nents);
+ void (*put_scatterlist)(struct dma_buf_attachment *,
+ struct scatterlist *,
+ int nents);
+ /* TODO: Add interruptible and interruptible_timeout versions */
+
+ /* allow mmap optionally for devices that need it */
+ int (*mmap)(struct dma_buf *, struct vm_area_struct *);
+ /* after final dma_buf_put() */
+ void (*release)(struct dma_buf *);
+
+ /* allow allocator to take care of cache ops */
+ void (*sync_sg_for_cpu) (struct dma_buf *, struct device *);
+ void (*sync_sg_for_device)(struct dma_buf *, struct device *);
+};
+
+/**
+ * struct dma_buf - shared buffer object
+ * @file: file pointer used for sharing buffers across, and for refcounting.
+ * @attachments: list of dma_buf_attachment that denotes all devices attached.
+ * @ops: dma_buf_ops associated with this buffer object
+ * @priv: user specific private data
+ */
+struct dma_buf {
+ size_t size;
+ struct file *file;
+ struct list_head attachments;
+ const struct dma_buf_ops *ops;
+ /* mutex to serialize list manipulation and other ops */
+ struct mutex lock;
+ void *priv;
+};
+
+#ifdef CONFIG_DMA_SHARED_BUFFER
+struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
+ struct device *dev);
+void dma_buf_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *dmabuf_attach);
+struct dma_buf *dma_buf_export(void *priv, struct dma_buf_ops *ops, int flags);
+int dma_buf_fd(struct dma_buf *dmabuf);
+struct dma_buf *dma_buf_get(int fd);
+void dma_buf_put(struct dma_buf *dmabuf);
+
+#else
+
+static inline struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
+ struct device *dev)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline void dma_buf_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *dmabuf_attach)
+{
+ return;
+}
+
+static inline struct dma_buf *dma_buf_export(void *priv,
+ struct dma_buf_ops *ops,
+ int flags)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline int dma_buf_fd(struct dma_buf *dmabuf)
+{
+ return -ENODEV;
+}
+
+static inline struct dma_buf *dma_buf_get(int fd)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline void dma_buf_put(struct dma_buf *dmabuf)
+{
+ return;
+}
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+
+#endif /* __DMA_BUF_H__ */
--
1.7.4.1
This is the first step in defining a buffer sharing framework.
A new dma_buf buffer object is added, with hooks to allow for easy sharing of
this buffer object across devices.
The framework allows:
- a new buffer-object to be created with fixed size.
- different devices to 'attach' themselves to this buffer, to facilitate
backing storage negotiation, using dma_buf_attach() API.
- association of a file pointer with each user-buffer and associated
allocator-defined operations on that buffer. This operation is called the
'export' operation.
- this exported buffer-object to be shared with the other entity by asking for
its 'file-descriptor (fd)', and sharing the fd across.
- a received fd to get the buffer object back, where it can be accessed using
the associated exporter-defined operations.
- the exporter and importer to share the scatterlist using get_scatterlist and
put_scatterlist operations.
At least one 'attach()' call is required to be made prior to calling the
buffer_map() callback.
A couple of building blocks in get_scatterlist() are added to ease the
introduction of syncing across the exporter and importers, and late allocation
by the exporter.
Optionally, an mmap() file operation is provided for the associated 'fd', as a
wrapper over the allocator-defined mmap() [optional], to be used by devices
that might need one.
The idea was first mooted at the Linaro memory management mini-summit in
Budapest in May 2011, as part of multiple things needed for a 'unified memory
management framework'. It took a more concrete shape at Linaro memory-management
mini-summit in Cambridge, Aug 2011.
This is based on design suggestions from many people at both the mini-summits,
most notably from Arnd Bergmann <arnd(a)arndb.de>, Rob Clark <rob(a)ti.com> and
Daniel Vetter <daniel(a)ffwll.ch>.
The implementation is inspired from proof-of-concept patch-set from
Tomasz Stanislawski <t.stanislaws(a)samsung.com>, who demonstrated buffer sharing
between two v4l2 devices.
------
v1: initial RFC.
v2:
- added attach() / detach() dma_buf_ops, and dma_buf_attach(),dma_buf_detach().
- added handling of list of attachment in the dma_buf central API itself.
- corrected copyright information.
Signed-off-by: Sumit Semwal <sumit.semwal(a)linaro.org>
Signed-off-by: Sumit Semwal <sumit.semwal(a)ti.com>
---
drivers/base/Kconfig | 10 ++
drivers/base/Makefile | 1 +
drivers/base/dma-buf.c | 224 +++++++++++++++++++++++++++++++++++++++++++++++
include/linux/dma-buf.h | 179 +++++++++++++++++++++++++++++++++++++
4 files changed, 414 insertions(+), 0 deletions(-)
create mode 100644 drivers/base/dma-buf.c
create mode 100644 include/linux/dma-buf.h
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 21cf46f..07d8095 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -174,4 +174,14 @@ config SYS_HYPERVISOR
source "drivers/base/regmap/Kconfig"
+config DMA_SHARED_BUFFER
+ bool "Buffer framework to be shared between drivers"
+ default n
+ depends on ANON_INODES
+ help
+ This option enables the framework for buffer-sharing between
+ multiple drivers. A buffer is associated with a file using driver
+ APIs extension; the file's descriptor can then be passed on to other
+ driver.
+
endmenu
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 99a375a..d0df046 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_DEVTMPFS) += devtmpfs.o
obj-y += power/
obj-$(CONFIG_HAS_DMA) += dma-mapping.o
obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
obj-$(CONFIG_ISA) += isa.o
obj-$(CONFIG_FW_LOADER) += firmware_class.o
obj-$(CONFIG_NUMA) += node.o
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
new file mode 100644
index 0000000..8936da7
--- /dev/null
+++ b/drivers/base/dma-buf.c
@@ -0,0 +1,224 @@
+/*
+ * Framework for buffer objects that can be shared across devices/subsystems.
+ *
+ * Copyright(C) 2011 Linaro Limited. All rights reserved.
+ * Author: Sumit Semwal <sumit.semwal(a)ti.com>
+ *
+ * Many thanks to linaro-mm-sig list, and specially
+ * Arnd Bergmann <arnd(a)arndb.de>, Rob Clark <rob(a)ti.com> and
+ * Daniel Vetter <daniel(a)ffwll.ch> for their support in creation and
+ * refining of this idea.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/fs.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include <linux/anon_inodes.h>
+
+static inline int is_dma_buf_file(struct file *);
+
+static int dma_buf_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct dma_buf *dmabuf;
+
+ if (!is_dma_buf_file(file))
+ return -EINVAL;
+
+ dmabuf = file->private_data;
+
+ if (!dmabuf->ops->mmap)
+ return -EINVAL;
+
+ return dmabuf->ops->mmap(dmabuf, vma);
+}
+
+static int dma_buf_release(struct inode *inode, struct file *file)
+{
+ struct dma_buf *dmabuf;
+
+ if (!is_dma_buf_file(file))
+ return -EINVAL;
+
+ dmabuf = file->private_data;
+
+ dmabuf->ops->release(dmabuf);
+ kfree(dmabuf);
+ return 0;
+}
+
+static const struct file_operations dma_buf_fops = {
+ .mmap = dma_buf_mmap,
+ .release = dma_buf_release,
+};
+
+/*
+ * is_dma_buf_file - Check if struct file* is associated with dma_buf
+ */
+static inline int is_dma_buf_file(struct file *file)
+{
+ return file->f_op == &dma_buf_fops;
+}
+
+/**
+ * dma_buf_export - Creates a new dma_buf, and associates an anon file
+ * with this buffer,so it can be exported.
+ * Also connect the allocator specific data and ops to the buffer.
+ *
+ * @priv: [in] Attach private data of allocator to this buffer
+ * @ops: [in] Attach allocator-defined dma buf ops to the new buffer.
+ * @flags: [in] mode flags for the file.
+ *
+ * Returns, on success, a newly created dma_buf object, which wraps the
+ * supplied private data and operations for dma_buf_ops. On failure to
+ * allocate the dma_buf object, it can return NULL.
+ *
+ */
+struct dma_buf *dma_buf_export(void *priv, struct dma_buf_ops *ops,
+ int flags)
+{
+ struct dma_buf *dmabuf;
+ struct file *file;
+
+ BUG_ON(!priv || !ops);
+
+ dmabuf = kzalloc(sizeof(struct dma_buf), GFP_KERNEL);
+ if (dmabuf == NULL)
+ return dmabuf;
+
+ dmabuf->priv = priv;
+ dmabuf->ops = ops;
+
+ file = anon_inode_getfile("dmabuf", &dma_buf_fops, dmabuf, flags);
+
+ dmabuf->file = file;
+
+ INIT_LIST_HEAD(&dmabuf->attachments);
+
+ return dmabuf;
+}
+EXPORT_SYMBOL(dma_buf_export);
+
+
+/**
+ * dma_buf_fd - returns a file descriptor for the given dma_buf
+ * @dmabuf: [in] pointer to dma_buf for which fd is required.
+ *
+ * On success, returns an associated 'fd'. Else, returns error.
+ */
+int dma_buf_fd(struct dma_buf *dmabuf)
+{
+ int error, fd;
+
+ if (!dmabuf->file)
+ return -EINVAL;
+
+ error = get_unused_fd_flags(0);
+ if (error < 0)
+ return error;
+ fd = error;
+
+ fd_install(fd, dmabuf->file);
+
+ return fd;
+}
+EXPORT_SYMBOL(dma_buf_fd);
+
+/**
+ * dma_buf_get - returns the dma_buf structure related to an fd
+ * @fd: [in] fd associated with the dma_buf to be returned
+ *
+ * On success, returns the dma_buf structure associated with an fd; uses
+ * file's refcounting done by fget to increase refcount. returns ERR_PTR
+ * otherwise.
+ */
+struct dma_buf *dma_buf_get(int fd)
+{
+ struct file *file;
+
+ file = fget(fd);
+
+ if (!file)
+ return ERR_PTR(-EBADF);
+
+ if (!is_dma_buf_file(file)) {
+ fput(file);
+ return ERR_PTR(-EINVAL);
+ }
+
+ return file->private_data;
+}
+EXPORT_SYMBOL(dma_buf_get);
+
+/**
+ * dma_buf_put - decreases refcount of the buffer
+ * @dmabuf: [in] buffer to reduce refcount of
+ *
+ * Uses file's refcounting done implicitly by fput()
+ */
+void dma_buf_put(struct dma_buf *dmabuf)
+{
+ BUG_ON(!dmabuf->file);
+
+ fput(dmabuf->file);
+
+ return;
+}
+EXPORT_SYMBOL(dma_buf_put);
+
+/**
+ * dma_buf_attach - Add the device to dma_buf's attachments list; calls the
+ * attach() of dma_buf_ops to allow device-specific attach functionality
+ * @dmabuf: [in] buffer to attach device to.
+ * @dev: [in] device to be attached.
+ *
+ * Returns struct dma_buf_attachment * for this attachment; may return NULL.
+ *
+ */
+struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
+ struct device *dev)
+{
+ struct dma_buf_attachment *attach;
+
+ BUG_ON(!dmabuf || !dev);
+
+ attach = kzalloc(sizeof(struct dma_buf_attachment), GFP_KERNEL);
+ if (attach == NULL)
+ return attach;
+
+ dmabuf->ops->attach(dmabuf, dev, attach);
+ list_add(&attach->node, &dmabuf->attachments);
+
+ return attach;
+}
+EXPORT_SYMBOL(dma_buf_attach);
+
+/**
+ * dma_buf_detach - Remove the given attachment from dmabuf's attachments list;
+ * calls detach() of dma_buf_ops for device-specific detach
+ * @dmabuf: [in] buffer to detach from.
+ * @attach: [in] attachment to be detached; it is freed by this call.
+ *
+ */
+void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
+{
+ BUG_ON(!dmabuf || !attach);
+
+ list_del(&attach->node);
+ dmabuf->ops->detach(dmabuf, attach);
+
+ kfree(attach);
+}
+EXPORT_SYMBOL(dma_buf_detach);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
new file mode 100644
index 0000000..2894b45
--- /dev/null
+++ b/include/linux/dma-buf.h
@@ -0,0 +1,179 @@
+/*
+ * Header file for dma buffer sharing framework.
+ *
+ * Copyright(C) 2011 Linaro Limited. All rights reserved.
+ * Author: Sumit Semwal <sumit.semwal(a)ti.com>
+ *
+ * Many thanks to the linaro-mm-sig list, and especially
+ * Arnd Bergmann <arnd(a)arndb.de>, Rob Clark <rob(a)ti.com> and
+ * Daniel Vetter <daniel(a)ffwll.ch> for their support in creation and
+ * refining of this idea.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __DMA_BUF_H__
+#define __DMA_BUF_H__
+
+#include <linux/file.h>
+#include <linux/err.h>
+#include <linux/device.h>
+#include <linux/scatterlist.h>
+#include <linux/list.h>
+
+struct dma_buf;
+
+/**
+ * enum dma_buf_optype - Used to denote the operation for which
+ * get_scatterlist() is called. This will help in implementing a wait(op)
+ * for sync'ing.
+ * @DMA_BUF_OP_READ: read operation will be done on this scatterlist
+ * @DMA_BUF_OP_WRITE: write operation will be done on this scatterlist
+ */
+enum dma_buf_optype {
+ DMA_BUF_OP_READ = (1 << 0),
+ DMA_BUF_OP_WRITE = (1 << 1),
+ DMA_BUF_OP_MAX,
+};
+
+/**
+ * enum dma_buf_attr_flags - Defines the attributes for this buffer. This
+ * can allow 'late backing' of the buffer at the first get_scatterlist() call.
+ * @DMA_BUF_ATTR_CONTIG: Contiguous buffer
+ * @DMA_BUF_ATTR_DISCONTIG: Discontiguous buffer
+ * @DMA_BUF_ATTR_CUSTOM: Platform-specific buffer; should evolve into a set of
+ * attributes that define buffers on a given platform
+ */
+enum dma_buf_attr_flags {
+ DMA_BUF_ATTR_CONTIG,
+ DMA_BUF_ATTR_DISCONTIG,
+ DMA_BUF_ATTR_CUSTOM,
+ DMA_BUF_ATTR_MAX,
+};
+
+/**
+ * struct dma_buf_attachment - holds device-buffer attachment data
+ * @node: list_head to allow manipulation of list of dma_buf_attachment.
+ * @dev: device attached to the buffer.
+ * @priv: exporter-specific attachment data.
+ */
+struct dma_buf_attachment {
+ struct list_head node;
+ struct device *dev;
+ void *priv;
+};
+
+/**
+ * struct dma_buf_ops - operations possible on struct dma_buf
+ * @attach: allows different devices to 'attach' themselves to the given
+ * buffer. It might return -EBUSY to signal that backing storage
+ * is already allocated and incompatible with the requirements
+ * of the requesting device.
+ * @detach: detach a given device from this buffer.
+ * @get_scatterlist: returns the list of scatter pages allocated and increases
+ * the usecount of the buffer. Requires at least one attach to have been
+ * called before.
+ * @put_scatterlist: decreases the usecount of the buffer and might deallocate
+ * the scatter pages.
+ * @mmap: memory map this buffer - optional.
+ * @release: release this buffer; to be called after the last dma_buf_put.
+ * @sync_sg_for_cpu: sync the sg list for the CPU.
+ * @sync_sg_for_device: sync the sg list for the device.
+ */
+struct dma_buf_ops {
+ void (*attach)(struct dma_buf *, struct device *,
+ struct dma_buf_attachment *);
+
+ void (*detach)(struct dma_buf *, struct dma_buf_attachment *);
+
+ struct scatterlist * (*get_scatterlist)(struct dma_buf *,
+ struct dma_buf_attachment *,
+ enum dma_buf_optype,
+ enum dma_buf_attr_flags);
+ void (*put_scatterlist)(struct dma_buf *, struct dma_buf_attachment *,
+ struct scatterlist *);
+
+ /* allow mmap optionally for devices that need it */
+ int (*mmap)(struct dma_buf *, struct vm_area_struct *);
+ /* after final dma_buf_put() */
+ void (*release)(struct dma_buf *);
+
+ /* allow allocator to take care of cache ops */
+ void (*sync_sg_for_cpu) (struct dma_buf *, struct device *);
+ void (*sync_sg_for_device)(struct dma_buf *, struct device *);
+};
+
+/**
+ * struct dma_buf - shared buffer object
+ * @size: size of the buffer.
+ * @file: file pointer used for sharing the buffer across devices, and for refcounting.
+ * @attachments: list of dma_buf_attachment that denotes all devices attached.
+ * @ops: dma_buf_ops associated with this buffer object
+ * @priv: user specific private data
+ */
+struct dma_buf {
+ size_t size;
+ struct file *file;
+ struct list_head attachments;
+ struct dma_buf_ops *ops;
+ void *priv;
+};
+
+#ifdef CONFIG_DMA_SHARED_BUFFER
+struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
+ struct device *dev);
+void dma_buf_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *dmabuf_attach);
+struct dma_buf *dma_buf_export(void *priv, struct dma_buf_ops *ops, int flags);
+int dma_buf_fd(struct dma_buf *dmabuf);
+struct dma_buf *dma_buf_get(int fd);
+void dma_buf_put(struct dma_buf *dmabuf);
+
+#else
+
+static inline struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
+ struct device *dev)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline void dma_buf_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *dmabuf_attach)
+{
+ return;
+}
+
+static inline struct dma_buf *dma_buf_export(void *priv,
+ struct dma_buf_ops *ops,
+ int flags)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline int dma_buf_fd(struct dma_buf *dmabuf)
+{
+ return -ENODEV;
+}
+
+static inline struct dma_buf *dma_buf_get(int fd)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline void dma_buf_put(struct dma_buf *dmabuf)
+{
+ return;
+}
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+
+#endif /* __DMA_BUF_H__ */
--
1.7.4.1
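For readers new to the proposed API, here is a minimal usage sketch of how an
exporting driver and an importing driver might use it. It is only an
illustration, not part of the patch: my_buffer, my_dmabuf_ops and
my_map_for_dma() are made-up names, and since this version defines no wrapper
helpers for mapping, the importer calls the exporter's get_scatterlist() and
put_scatterlist() ops directly.

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/fcntl.h>

/* Hypothetical exporter state and ops, defined elsewhere in the driver. */
struct my_buffer;
extern struct dma_buf_ops my_dmabuf_ops;
extern int my_map_for_dma(struct device *dev, struct scatterlist *sg);

/* Exporter side: wrap a driver buffer and hand a file descriptor to
 * userspace so it can be passed to other devices and processes. */
static int my_export_buffer(struct my_buffer *buf)
{
	struct dma_buf *dmabuf;
	int fd;

	/* No backing pages are needed yet; allocation may happen lazily. */
	dmabuf = dma_buf_export(buf, &my_dmabuf_ops, O_RDWR);
	if (!dmabuf)
		return -ENOMEM;

	fd = dma_buf_fd(dmabuf);
	if (fd < 0)
		dma_buf_put(dmabuf);	/* drop the reference on failure */
	return fd;
}

/* Importer side: resolve the fd, attach the device, request the
 * scatterlist, use the buffer, then tear everything down again. */
static int my_import_buffer(struct device *dev, int fd)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct scatterlist *sg;
	int ret;

	dmabuf = dma_buf_get(fd);		/* takes a file reference */
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	attach = dma_buf_attach(dmabuf, dev);	/* exporter learns about dev */
	if (!attach) {
		dma_buf_put(dmabuf);
		return -ENOMEM;
	}

	/* First scatterlist request; with 'late backing' the exporter may
	 * only now allocate storage that satisfies the attached devices. */
	sg = dmabuf->ops->get_scatterlist(dmabuf, attach, DMA_BUF_OP_READ,
					  DMA_BUF_ATTR_CONTIG);
	if (sg) {
		ret = my_map_for_dma(dev, sg);
		dmabuf->ops->put_scatterlist(dmabuf, attach, sg);
	} else {
		ret = -ENOMEM;
	}

	dma_buf_detach(dmabuf, attach);
	dma_buf_put(dmabuf);
	return ret;
}

The sketch assumes that dma_buf_fd() transfers its file reference to the
installed fd on success, and that a failed get_scatterlist() is reported by a
NULL return; both points are details the exporter and the core would have to
agree on.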
Hi all,
As a result of the outcomes of the last face-to-face sessions in
Cambourne in August, I've been putting together a set of topic
branches that reflect the current state of all the efforts of the work
that's been going on. In general, I will be preparing a set of topic
branches for each monthly component release that we do in Linaro, but
I am certainly open to additional "checkpoints" if folks feel that
there is a piece of work that is significant enough to push out
mid-cycle. As a rule, the overall Linaro releases happen on the last
Thursday of the month, with the component releases happening the
Thursday before that to allow for sane release management. This means
that the component releases for 2011.09 effectively went out this past
Thursday (2011-09-22). Here's what is included there:
http://git.linaro.org/gitweb?p=people/jessebarker/linaro-mm-sig/linux-2.6.g…
cma-v15: Marek Szyprowski's CMA v15 patchset based on v3.1-rc2
(the current rc at the time the patchset went out).
omapdrm: Rob Clark's DRM/KMS driver for OMAP4 based on v3.1-rc6. It
also includes CMA v15 (which the OMAP driver enables), along with a
couple of other for-next patchsets required to support the DRM
driver.
dma-buf-v2: Sumit Semwal's struct dma_buf v2 proposal based on v3.1-rc6
Please let me know if you encounter problems with any of this, or have
suggestions on how to improve the ease of access for this work (apart
from moving it off of git.linaro.org :-). The goal here is to offer a
single access point to all of this great work to generate better
feedback and ensure that the solutions are robust and upstreamable.
cheers,
Jesse