From: "joro(a)8bytes.org" <joro(a)8bytes.org>
Subject: Re: [PATCH 2/2] ARM: IOMMU: Tegra30: Add iommu_ops for SMMU driver
Date: Tue, 24 Jan 2012 15:25:21 +0100
Message-ID: <20120124142521.GE6269(a)8bytes.org>
> On Tue, Jan 24, 2012 at 03:46:01PM +0200, Felipe Balbi wrote:
> > On Tue, Jan 24, 2012 at 02:41:21PM +0100, Hiroshi Doyu wrote:
> > > Actually I really like the concept of this "domain" now, which hides
> > > the H/W hierarchy from users.
> > >
> > > But in the Tegra SMMU/GART case, there's a single IOMMU device in the
> > > system. Keeping an IOMMU device list in a domain and iterating over
> > > that list in each iommu_ops callback seems nice, but I'm afraid that
> > > this may be a bit too much when one already knows that there's only
> > > one IOMMU device in the system.
> > >
> > > If there's no actual problem with a 1-1 mapping between IOMMU H/Ws and
> > > domains, I think that it may not be so bad to keep the original code
> > > (1-1) for GART and SMMU. What do you think?
> >
> > I think it boils down to "extensibility". If you can truly/fully
> > guarantee that there will *always* be a single IOMMU on all upcoming
> > Tegras, then it's really overkill.
> >
> > But if there's even a remote possibility of the HW being changed and you
> > end up with more IOMMUs, things start to feel necessary for the sake of
> > making it easy to extend.
>
> Right. But I am fine with the logic as-is when there is only one SMMU in
> the system. But please also change the IOMMU driver so that it really
> only initializes a single SMMU. When boards pop up with more than one,
> we will notice that assumption in the code again and be reminded to
> change it.
Fixed.
I'll revisit 4MB pagesize support and the above multiple-IOMMU-device
support in a domain later.
Attached is the updated patch.
Hello Everyone,
After some discussion as an RFC, here is the patch set introducing the
DMA buffer sharing mechanism - the change history is in the changelog below.
Various subsystems - V4L2, GPU-accessors, DRI to name a few - have felt the
need to have a common mechanism to share memory buffers across different
devices - ARM, video hardware, GPU.
This need comes forth from a variety of use cases including cameras, image
processing, video recorders, sound processing, DMA engines, GPU and display
buffers, amongst others.
This patch attempts to define such a buffer sharing mechanism - it is the
result of discussions from a couple of memory-management mini-summits held by
Linaro to understand and address common needs around memory management. [1]
A new dma_buf buffer object is added, with operations and API to allow easy
sharing of this buffer object across devices.
The framework allows:
- a new buffer object to be created with fixed size, associated with a file
pointer and allocator-defined operations for this buffer object. This
operation is called the 'export' operation.
- different devices to 'attach' themselves to this buffer object, to facilitate
backing storage negotiation, using dma_buf_attach() API.
- this exported buffer object to be shared with the other entity by asking for
its 'file-descriptor (fd)', and sharing the fd across.
- a received fd to get the buffer object back, where it can be accessed using
the associated exporter-defined operations.
- the exporter and user to share the buffer object's scatterlist using
map_dma_buf and unmap_dma_buf operations.
Documentation present in the patch-set gives more details.
For the first version, dma-buf is marked as an EXPERIMENTAL driver; we can
remove this marking in later versions after additional usage and testing.
*IMPORTANT*: [see https://lkml.org/lkml/2011/12/20/211 for more details]
For this first version, a buffer shared using the dma_buf sharing API:
- *may* be exported to user space using "mmap" *ONLY* by exporter, outside of
this framework.
- may be used *ONLY* by importers that do not need CPU access to the buffer.
This is based on design suggestions from many people at the mini-summits,
most notably from Arnd Bergmann <arnd(a)arndb.de>, Rob Clark <rob(a)ti.com> and
Daniel Vetter <daniel(a)ffwll.ch>.
The implementation is inspired by a proof-of-concept patch-set from
Tomasz Stanislawski <t.stanislaws(a)samsung.com>, who demonstrated buffer sharing
between two v4l2 devices. [2]
Some sample implementations and WIP for dma-buf users and exporters are
available at [3] and [4]. [These are not being submitted for discussion /
inclusion right now, but are for reference only]
References:
[1]: https://wiki.linaro.org/OfficeofCTO/MemoryManagement
[2]: http://lwn.net/Articles/454389
[3]: Dave Airlie's prime support:
http://cgit.freedesktop.org/~airlied/linux/log/?h=drm-prime-dmabuf
[4]: Rob Clark's sharing between DRM and V4L2:
https://github.com/robclark/kernel-omap4/commits/drmplane-dmabuf
Patchset based on top of 3.2-rc7, the current version can be found at
http://git.linaro.org/gitweb?p=people/sumitsemwal/linux-3.x.git
Branch: dmabuf-patch-v1
Earlier versions:
RFC:
v3 at: https://lkml.org/lkml/2011/12/19/50
v2 at: https://lkml.org/lkml/2011/12/2/53
v1 at: https://lkml.org/lkml/2011/10/11/92
Wish you all happy vacations and a very happy, joyous and prosperous new year
2012 :)
Best regards,
~Sumit Semwal
History:
v4:
- Review comments incorporated:
- from Konrad Rzeszutek Wilk [https://lkml.org/lkml/2011/12/20/209]
- corrected language in some comments
- re-ordered struct definitions for readability
- added might_sleep() call in dma_buf_map_attachment() wrapper
- from Rob Clark [https://lkml.org/lkml/2011/12/23/196]
- Made dma-buf EXPERIMENTAL for 1st version.
v3:
- Review comments incorporated:
- from Konrad Rzeszutek Wilk [https://lkml.org/lkml/2011/12/3/45]
- replaced BUG_ON with WARN_ON - various places
- added some error-checks
- replaced EXPORT_SYMBOL with EXPORT_SYMBOL_GPL
- some cosmetic / documentation comments
- from Arnd Bergmann, Daniel Vetter, Rob Clark
[https://lkml.org/lkml/2011/12/5/321]
- removed mmap() fop and dma_buf_op, also the sg_sync* operations, and
documented that mmap is not allowed for exported buffer
- updated documentation to clearly state when migration is allowed
- changed kconfig
- some error code checks
- from Rob Clark [https://lkml.org/lkml/2011/12/5/572]
- update documentation to allow map_dma_buf to return -EINTR
v2:
- Review comments incorporated:
- from Tomasz Stanislawski [https://lkml.org/lkml/2011/10/14/136]
- kzalloc moved out of critical section
- corrected some in-code comments
- from Dave Airlie [https://lkml.org/lkml/2011/11/25/123]
- from Daniel Vetter and Rob Clark [https://lkml.org/lkml/2011/11/26/53]
- use struct sg_table in place of struct scatterlist
- rename {get,put}_scatterlist to {map,unmap}_dma_buf
- add new wrapper APIs dma_buf_{map,unmap}_attachment for ease of users
- documentation updates as per review comments from Randy Dunlap
[https://lkml.org/lkml/2011/10/12/439]
v1: original
Sumit Semwal (3):
dma-buf: Introduce dma buffer sharing mechanism
dma-buf: Documentation for buffer sharing framework
dma-buf: mark EXPERIMENTAL for 1st release.
Documentation/dma-buf-sharing.txt | 224 ++++++++++++++++++++++++++++
drivers/base/Kconfig | 11 ++
drivers/base/Makefile | 1 +
drivers/base/dma-buf.c | 291 +++++++++++++++++++++++++++++++++++++
include/linux/dma-buf.h | 176 ++++++++++++++++++++++
5 files changed, 703 insertions(+), 0 deletions(-)
create mode 100644 Documentation/dma-buf-sharing.txt
create mode 100644 drivers/base/dma-buf.c
create mode 100644 include/linux/dma-buf.h
--
1.7.5.4
Hello,
This is another update on my attempt at the DMA-mapping framework redesign
for the ARM architecture. It includes a few minor changes since the last
version. We have focused mainly on the IOMMU mapper, keeping the DMA-mapping
redesign patches almost unchanged.
All patches have now been rebased onto the v3.2-rc4 kernel + IOMMU/next
branch to include the latest changes from the IOMMU kernel tree.
This series also contains support for mapping with pages larger than
4KiB using new, extended IOMMU API. This code has been provided by
Andrzej Pietrasiewicz.
All the code has been tested on Samsung Exynos4 'UniversalC210' board
with IOMMU driver posted by KyongHo Cho.
GIT tree with all the patches (including some Samsung Exynos4 stuff):
http://git.infradead.org/users/kmpark/linux-samsung/shortlog/refs/heads/3.2…
git://git.infradead.org/users/kmpark/linux-samsung 3.2-rc4-dma-v5-samsung
History:
Initial version of the DMA-mapping redesign patches:
http://www.spinics.net/lists/linux-mm/msg21241.html
Second version of the patches:
http://lists.linaro.org/pipermail/linaro-mm-sig/2011-September/000571.html
http://lists.linaro.org/pipermail/linaro-mm-sig/2011-September/000577.html
Third version of the patches:
http://www.spinics.net/lists/linux-mm/msg25490.html
TODO:
- start the discussion about changing alloc_coherent into alloc_attrs in
dma_map_ops structure.
- start the discussion about dma_mmap function
- provide documentation for the new dma attributes
Best regards
--
Marek Szyprowski
Samsung Poland R&D Center
Patch summary:
Marek Szyprowski (8):
ARM: dma-mapping: remove offset parameter to prepare for generic
dma_ops
ARM: dma-mapping: use asm-generic/dma-mapping-common.h
ARM: dma-mapping: implement dma sg methods on top of any generic dma
ops
ARM: dma-mapping: move all dma bounce code to separate dma ops
structure
ARM: dma-mapping: remove redundant code and cleanup
common: dma-mapping: change alloc/free_coherent method to more
generic alloc/free_attrs
ARM: dma-mapping: use alloc, mmap, free from dma_ops
ARM: initial proof-of-concept IOMMU mapper for DMA-mapping
arch/arm/Kconfig | 9 +
arch/arm/common/dmabounce.c | 78 +++-
arch/arm/include/asm/device.h | 4 +
arch/arm/include/asm/dma-iommu.h | 36 ++
arch/arm/include/asm/dma-mapping.h | 404 +++++------------
arch/arm/mm/dma-mapping.c | 899 ++++++++++++++++++++++++++++++------
arch/arm/mm/vmregion.h | 2 +-
include/linux/dma-attrs.h | 1 +
include/linux/dma-mapping.h | 13 +-
9 files changed, 994 insertions(+), 452 deletions(-)
create mode 100644 arch/arm/include/asm/dma-iommu.h
--
1.7.1.569.g6f426
On Mon, Jan 09, 2012 at 10:37:28AM +0100, Thomas Hellstrom wrote:
> Hi!
>
> When TTM was originally written, it was assumed that GPU apertures
> could address pages directly, and that the CPU could access those
> pages without explicit synchronization. The process of binding a
> page to a GPU translation table was a simple one-step operation, and
> we needed to worry about fragmentation in the GPU aperture only.
>
> Now that we "sort of" support DMA memory there are three things I
> think are missing:
>
> 1) We can't gracefully handle coherent DMA OOMs or coherent DMA
> (Including CMA) memory fragmentation leading to failed allocations.
> 2) We can't handle dynamic mapping of pages into and out of dma, and
> corresponding IOMMU space shortage or fragmentation, and CPU
> synchronization.
> 3) We have no straightforward way of moving pages between devices.
>
> I think a reasonable way to support this is to make binding to a
> non-fixed (system page based) TTM memory type a two-step binding
> process, so that a TTM placement consists of (DMA_TYPE, MEMORY_TYPE)
> instead of only (MEMORY_TYPE).
>
> In step 1) the bo is bound to a specific DMA type. These could be
> for example:
> (NONE, DYNAMIC, COHERENT, CMA), .... device dependent types could be
> allowed as well.
> In this step, we perform dma_sync_for_device, or allocate
> dma-specific pages maintaining LRU lists so that if we receive a DMA
> memory allocation OOM, we can unbind bo:s bound to the same DMA
> type. Standard graphics cards would then, for example, use the NONE
> DMA type when run on bare metal or COHERENT when run on Xen. A
> "COHERENT" OOM condition would then lead to eviction of another bo.
> (Note that DMA eviction might involve data copies and be costly, but
> still better than failing).
> Binding with the DYNAMIC memory type would mean that CPU accesses
> are disallowed, and that user-space CPU page mappings might need to
> be killed, with a corresponding sync_for_cpu if they are faulted in
> again (perhaps on a page-by-page basis). Any attempt to bo_kmap() a
> bo page bound to DYNAMIC DMA mapping should trigger a BUG.
>
> In step 2) The bo is bound to the GPU in the same way it's done
> today. Evicting from DMA will of course also trigger an evict from
> GPU, but an evict from GPU will not trigger a DMA evict.
>
> Making a bo "anonymous" and thus moveable between devices would then
> mean binding it to the "NONE" DMA type.
>
> Comments, suggestions?
Well I think we need to solve outstanding issues in the dma_buf framework
first. Currently dma_buf isn't really up to par to handle coherency
between the cpu and devices and there's also not yet any way to handle dma
address space fragmentation/exhaustion.
I fear that if you jump ahead with improving the ttm support alone we
might end up with something incompatible to the stuff dma_buf eventually
will grow, resulting in a decent amount of wasted effort.
Cc'ed a bunch of relevant lists to foster input from people.
For a starter you seem to want much more low-level integration with the
dma api than existing users commonly need. E.g. if I understand things
correctly drivers just call dma_alloc_coherent and the platform/board code
then decides whether the device needs a contiguous allocation from cma or
whether something else is good, too (e.g. vmalloc for the cpu + iommu).
Another thing is that I think doing lru eviction in case of dma address
space exhaustion (or fragmentation) needs at least awareness of what's
going on in the upper layers. iommus are commonly shared between devices
and I presume that two ttm drivers sitting behind the same iommu and
fighting over its resources can lead to some hilarious outcomes.
Cheers, Daniel
--
Daniel Vetter
Mail: daniel(a)ffwll.ch
Mobile: +41 (0)79 365 57 48
Hi Linus,
After the merge of the dma-buf tree that was (very kindly) sent by Dave
Airlie, various people involved in this project feel it is natural and
practical for me to be the maintainer of this code.
This is my first pull request to you, which only changes the
MAINTAINERS file - could you please pull from it? [If you'd just
prefer the patch, I could post that out as well.]
Thanks and best regards,
~Sumit.
The following changes since commit dcd6c92267155e70a94b3927bce681ce74b80d1f:
Linux 3.3-rc1 (2012-01-19 15:04:48 -0800)
are available in the git repository at:
git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git for-linus-3.3
Sumit Semwal (1):
MAINTAINERS: Add dma-buf sharing framework maintainer
MAINTAINERS | 11 +++++++++++
1 files changed, 11 insertions(+), 0 deletions(-)
Welcome everyone, one last time in 2011!
We finally managed to finish (yet) another release of the Contiguous
Memory Allocator patches. This version resolves a lot of issues reported
in the previous version and improves the reliability of the memory
allocation.
The most important changes are code cleanup after a review from Mel
Gorman, fixes for the annoying bugs (like the HIGHMEM crash on ARM) and
the addition of an allocation retry procedure in case of temporary
migration failure.
This version should finally solve all the issues that are a result of
changing the migration code base from memory hotplug to memory
compaction.
The ARM integration code has not changed in the last two versions; it
provides an implementation of all the ideas that were discussed during
the Linaro Sprint meeting. Here are the details:
This version provides a solution for complete integration of CMA into the
DMA mapping subsystem on the ARM architecture. The issue caused by double
mapping of dma pages and possible aliasing in the coherent memory mapping
has finally been resolved, both for the GFP_ATOMIC case (allocations come
from the coherent memory pool) and the non-GFP_ATOMIC case (allocations
come from CMA managed areas).
For coherent, nommu, ARMv4 and ARMv5 systems the current DMA-mapping
implementation has been kept.
For ARMv6+ systems, CMA has been enabled and a special pool of coherent
memory for atomic allocations has been created. The size of this pool
defaults to DEFAULT_CONSISTENT_DMA_SIZE/8, but can be changed with the
coherent_pool kernel parameter (if really required).
All atomic allocations are served from this pool. I did a little
simplification here, because there is no separate pool for writecombine
memory - such requests are also served from the coherent pool. I don't
think that this simplification is a problem here - I found no driver
that uses dma_alloc_writecombine with the GFP_ATOMIC flag.
All non-atomic allocations are served from the CMA area. The kernel mapping
is updated to reflect the required memory attribute changes. This is possible
because during early boot, all CMA areas are remapped with 4KiB pages in
kernel low-memory.
This version has been tested on the Samsung S5PC110 based Goni machine and
the Exynos4 UniversalC210 board with various V4L2 multimedia drivers.
Coherent atomic allocations have been tested by manually enabling the dma
bounce for the s3c-sdhci device.
All patches are prepared for Linux Kernel v3.2-rc7.
A few words for those who see CMA for the first time:
The Contiguous Memory Allocator (CMA) makes it possible for device
drivers to allocate big contiguous chunks of memory after the system
has booted.
The main difference from similar frameworks is the fact that CMA
allows the memory region reserved for big chunk allocations to be
transparently reused as system memory, so no memory is wasted when no
big chunk is allocated. Once an alloc request is issued, the
framework will migrate system pages to create the required big chunk of
physically contiguous memory.
For more information you can refer to nice LWN articles:
http://lwn.net/Articles/447405/ and http://lwn.net/Articles/450286/
as well as links to previous versions of the CMA framework.
The CMA framework was initially developed by Michal Nazarewicz
at Samsung Poland R&D Center. Since version 9, I've taken over the
development, because Michal has left the company.
TODO (optional):
- implement support for contiguous memory areas placed in HIGHMEM zone
- resolve issue with movable pages with pending io operations
I would also like to wish everyone a Happy New Year! See you again in
2012!
Best regards
Marek Szyprowski
Samsung Poland R&D Center
Links to previous versions of the patchset:
v17: <http://www.spinics.net/lists/arm-kernel/msg148499.html>
v16: <http://www.spinics.net/lists/linux-mm/msg25066.html>
v15: <http://www.spinics.net/lists/linux-mm/msg23365.html>
v14: <http://www.spinics.net/lists/linux-media/msg36536.html>
v13: (internal, intentionally not released)
v12: <http://www.spinics.net/lists/linux-media/msg35674.html>
v11: <http://www.spinics.net/lists/linux-mm/msg21868.html>
v10: <http://www.spinics.net/lists/linux-mm/msg20761.html>
v9: <http://article.gmane.org/gmane.linux.kernel.mm/60787>
v8: <http://article.gmane.org/gmane.linux.kernel.mm/56855>
v7: <http://article.gmane.org/gmane.linux.kernel.mm/55626>
v6: <http://article.gmane.org/gmane.linux.kernel.mm/55626>
v5: (intentionally left out as CMA v5 was identical to CMA v4)
v4: <http://article.gmane.org/gmane.linux.kernel.mm/52010>
v3: <http://article.gmane.org/gmane.linux.kernel.mm/51573>
v2: <http://article.gmane.org/gmane.linux.kernel.mm/50986>
v1: <http://article.gmane.org/gmane.linux.kernel.mm/50669>
Changelog:
v18:
1. Addressed comments and suggestions from Mel Gorman related to changes
in memory compaction code, most important points:
- removed "mm: page_alloc: handle MIGRATE_ISOLATE in free_pcppages_bulk()"
and moved all the logic to set_migratetype_isolate - see
"mm: page_alloc: set_migratetype_isolate: drain PCP prior to isolating"
patch
- code in "mm: compaction: introduce isolate_{free,migrate}pages_range()"
patch have been simplified and improved
- removed "mm: mmzone: introduce zone_pfn_same_memmap()" patch
2. Fixed crash on initialization if HIGHMEM is available on ARM platforms
3. Fixed problems with allocation of contiguous memory if all free pages
are occupied by page cache and reclaim is required.
4. Added a workaround for temporary migration failures (now CMA tries
to allocate a different memory block in such a case), which heavily
increased the reliability of CMA.
5. Minor cleanup here and there.
6. Rebased onto v3.2-rc7 kernel tree.
v17:
1. Replaced the whole CMA core memory migration code with new code kindly
provided by Michal Nazarewicz. The new code is based on the memory
compaction framework, not memory hotplug as before. This
change was suggested by Mel Gorman.
2. Addressed most of the comments from Andrew Morton and Mel Gorman in
the rest of the CMA code.
3. Fixed broken initialization on ARM systems with DMA zone enabled.
4. Rebased onto v3.2-rc2 kernel.
v16:
1. merged a fixup from Michal Nazarewicz to address comments from Dave
Hansen about checking if pfns belong to the same memory zone
2. merged a fix from Michal Nazarewicz for incorrect handling of pages
which belong to page block that is in MIGRATE_ISOLATE state, in very
rare cases the migrate type of page block might have been changed
from MIGRATE_CMA to MIGRATE_MOVABLE because of this bug
3. moved some common code to include/asm-generic
4. added support for x86 DMA-mapping framework for pci-dma hardware,
CMA can be now even more widely tested on KVM/QEMU and a lot of common
x86 boxes
5. rebased onto next-20111005 kernel tree, which includes changes in ARM
DMA-mapping subsystem (CONSISTENT_DMA_SIZE removal)
6. removed patch for CMA s5p-fimc device private regions (served only as
example) and provided the one that matches real life case - s5p-mfc
device
v15:
1. fixed calculation of the total memory after activating CMA area (was
broken from v12)
2. more code cleanup in drivers/base/dma-contiguous.c
3. added address limit for default CMA area
4. rewrote ARM DMA integration:
- removed "ARM: DMA: steal memory for DMA coherent mappings" patch
- kept current DMA mapping implementation for coherent, nommu and
ARMv4/ARMv5 systems
- enabled CMA for all ARMv6+ systems
- added separate, small pool for coherent atomic allocations, defaults
to CONSISTENT_DMA_SIZE/8, but can be changed with kernel parameter
coherent_pool=[size]
v14:
1. Merged with "ARM: DMA: steal memory for DMA coherent mappings"
patch, added support for GFP_ATOMIC allocations.
2. Added checks for NULL device pointer
v13: (internal, intentionally not released)
v12:
1. Fixed 2 nasty bugs in dma-contiguous allocator:
- alignment argument was not passed correctly
- range for dma_release_from_contiguous was not checked correctly
2. Added support for architecture specific dma_contiguous_early_fixup()
function
3. CMA and DMA-mapping integration for the ARM architecture has been
rewritten to take care of the memory aliasing issue that might
happen on newer ARM CPUs (mapping the same pages with different
cache attributes is forbidden). TODO: add support for GFP_ATOMIC
allocations based on the "ARM: DMA: steal memory for DMA coherent
mappings" patch and implement support for contiguous memory areas
that are placed in the HIGHMEM zone
v11:
1. Removed genalloc usage and replaced it with direct calls to
bitmap_* functions, dropped patches that are not needed
anymore (genalloc extensions)
2. Moved all contiguous area management code from mm/cma.c
to drivers/base/dma-contiguous.c
3. Renamed cm_alloc/free to dma_alloc/release_from_contiguous
4. Introduced global, system wide (default) contiguous area
configured with kernel config and kernel cmdline parameters
5. Simplified initialization to just one function:
dma_declare_contiguous()
6. Added example of device private memory contiguous area
v10:
1. Rebased onto 3.0-rc2 and resolved all conflicts
2. Simplified CMA to be just a pure memory allocator, for use
with platform/bus specific subsystems, like dma-mapping.
Removed all device-specific functions and calls.
3. Integrated with ARM DMA-mapping subsystem.
4. Code cleanup here and there.
5. Removed private context support.
v9: 1. Rebased onto 2.6.39-rc1 and resolved all conflicts
2. Fixed a bunch of nasty bugs that happened when the allocation
failed (mainly kernel oops due to NULL ptr dereference).
3. Introduced testing code: cma-regions compatibility layer and
videobuf2-cma memory allocator module.
v8: 1. The alloc_contig_range() function has now been separated from
CMA and put in page_allocator.c. This function tries to
migrate all LRU pages in specified range and then allocate the
range using alloc_contig_freed_pages().
2. Support for MIGRATE_CMA has been separated from the CMA code.
I have not tested if CMA works with ZONE_MOVABLE but I see no
reasons why it shouldn't.
3. I have added a @private argument when creating CMA contexts so
that one can reserve memory and not share it with the rest of
the system. This way, CMA acts only as allocation algorithm.
v7: 1. A lot of functionality that handled driver->allocator_context
mapping has been removed from the patchset. This is not to say
that this code is not needed, it's just not worth posting
everything in one patchset.
Currently, CMA is "just" an allocator. It uses its own
migratetype (MIGRATE_CMA) for defining ranges of pageblocks
which behave just like ZONE_MOVABLE but, unlike the latter, can
be put in arbitrary places.
2. The migration code that was introduced in the previous version
actually started working.
v6: 1. Most importantly, v6 introduces support for memory migration.
The implementation is not yet complete though.
Migration support means that when CMA is not using memory
reserved for it, page allocator can allocate pages from it.
When CMA wants to use the memory, the pages have to be moved
and/or evicted as to make room for CMA.
To make it possible it must be guaranteed that only movable and
reclaimable pages are allocated in CMA controlled regions.
This is done by introducing a MIGRATE_CMA migrate type that
guarantees exactly that.
Some of the migration code is "borrowed" from Kamezawa
Hiroyuki's alloc_contig_pages() implementation. The main
difference is that thanks to the MIGRATE_CMA migrate type, CMA
assumes that memory controlled by CMA is always movable or
reclaimable, so that it makes allocation decisions regardless of
whether some pages are actually allocated, and migrates them
if needed.
The most interesting patches from the patchset that implement
the functionality are:
09/13: mm: alloc_contig_free_pages() added
10/13: mm: MIGRATE_CMA migration type added
11/13: mm: MIGRATE_CMA isolation functions added
12/13: mm: cma: Migration support added [wip]
Currently, kernel panics in some situations which I am trying
to investigate.
2. cma_pin() and cma_unpin() functions have been added (after
a conversation with Johan Mossberg). The idea is that whenever
hardware does not use the memory (no transaction is on) the
chunk can be moved around. This would allow defragmentation to
be implemented if desired. No defragmentation algorithm is
provided at this time.
3. Sysfs support has been replaced with debugfs. I always felt
unsure about the sysfs interface and when Greg KH pointed it
out I finally got to rewrite it to debugfs.
v5: (intentionally left out as CMA v5 was identical to CMA v4)
v4: 1. The "asterisk" flag has been removed in favour of requiring
that platform will provide a "*=<regions>" rule in the map
attribute.
2. The terminology has been changed slightly renaming "kind" to
"type" of memory. In the previous revisions, the documentation
indicated that device drivers define memory kinds and now,
v3: 1. The command line parameters have been removed (and moved to
a separate patch, the fourth one). As a consequence, the
cma_set_defaults() function has been changed -- it no longer
accepts a string with list of regions but an array of regions.
2. The "asterisk" attribute has been removed. Now, each region
has an "asterisk" flag which lets one specify whether this
region should be considered an "asterisk" region.
3. SysFS support has been moved to a separate patch (the third one
in the series) and now also includes list of regions.
v2: 1. The "cma_map" command line have been removed. In exchange,
a SysFS entry has been created under kernel/mm/contiguous.
The intended way of specifying the attributes is
a cma_set_defaults() function called by platform initialisation
code. "regions" attribute (the string specified by "cma"
command line parameter) can be overwritten with command line
parameter; the other attributes can be changed during run-time
using the SysFS entries.
2. The behaviour of the "map" attribute has been modified
slightly. Currently, if no rule matches given device it is
assigned regions specified by the "asterisk" attribute. It is
by default built from the region names given in "regions"
attribute.
3. Devices can register private regions as well as regions that
can be shared but are not reserved using standard CMA
mechanisms. A private region has no name and can be accessed
only by devices that have the pointer to it.
4. The way allocators are registered has changed. Currently,
a cma_allocator_register() function is used for that purpose.
Moreover, allocators are attached to regions the first time
memory is registered from the region or when allocator is
registered which means that allocators can be dynamic modules
that are loaded after the kernel booted (of course, it won't be
possible to allocate a chunk of memory from a region if
allocator is not loaded).
5. Index of new functions:
+static inline dma_addr_t __must_check
+cma_alloc_from(const char *regions, size_t size,
+ dma_addr_t alignment)
+static inline int
+cma_info_about(struct cma_info *info, const char *regions)
+int __must_check cma_region_register(struct cma_region *reg);
+dma_addr_t __must_check
+cma_alloc_from_region(struct cma_region *reg,
+ size_t size, dma_addr_t alignment);
+static inline dma_addr_t __must_check
+cma_alloc_from(const char *regions,
+ size_t size, dma_addr_t alignment);
+int cma_allocator_register(struct cma_allocator *alloc);
Patches in this patchset:
Marek Szyprowski (5):
mm: add optional memory reclaim in split_free_page()
drivers: add Contiguous Memory Allocator
X86: integrate CMA with DMA-mapping subsystem
ARM: integrate CMA with DMA-mapping subsystem
ARM: Samsung: use CMA for 2 memory banks for s5p-mfc device
Michal Nazarewicz (6):
mm: page_alloc: set_migratetype_isolate: drain PCP prior to isolating
mm: compaction: introduce isolate_{free,migrate}pages_range().
mm: compaction: export some of the functions
mm: page_alloc: introduce alloc_contig_range()
mm: mmzone: MIGRATE_CMA migration type added
mm: page_isolation: MIGRATE_CMA isolation functions added
Documentation/kernel-parameters.txt | 9 +
arch/Kconfig | 3 +
arch/arm/Kconfig | 2 +
arch/arm/include/asm/dma-contiguous.h | 16 ++
arch/arm/include/asm/mach/map.h | 1 +
arch/arm/kernel/setup.c | 9 +-
arch/arm/mm/dma-mapping.c | 368 +++++++++++++++++++++-----
arch/arm/mm/init.c | 22 ++-
arch/arm/mm/mm.h | 3 +
arch/arm/mm/mmu.c | 29 ++-
arch/arm/plat-s5p/dev-mfc.c | 51 +---
arch/x86/Kconfig | 1 +
arch/x86/include/asm/dma-contiguous.h | 13 +
arch/x86/include/asm/dma-mapping.h | 4 +
arch/x86/kernel/pci-dma.c | 18 ++-
arch/x86/kernel/pci-nommu.c | 8 +-
arch/x86/kernel/setup.c | 2 +
drivers/base/Kconfig | 89 +++++++
drivers/base/Makefile | 1 +
drivers/base/dma-contiguous.c | 404 ++++++++++++++++++++++++++++
include/asm-generic/dma-contiguous.h | 27 ++
include/linux/device.h | 4 +
include/linux/dma-contiguous.h | 110 ++++++++
include/linux/mm.h | 2 +-
include/linux/mmzone.h | 41 +++-
include/linux/page-isolation.h | 27 ++-
mm/Kconfig | 2 +-
mm/Makefile | 3 +-
mm/compaction.c | 467 +++++++++++++++++++--------------
mm/internal.h | 35 +++
mm/memory-failure.c | 2 +-
mm/memory_hotplug.c | 6 +-
mm/page_alloc.c | 403 +++++++++++++++++++++++++----
mm/page_isolation.c | 15 +-
mm/vmstat.c | 1 +
35 files changed, 1773 insertions(+), 425 deletions(-)
create mode 100644 arch/arm/include/asm/dma-contiguous.h
create mode 100644 arch/x86/include/asm/dma-contiguous.h
create mode 100644 drivers/base/dma-contiguous.c
create mode 100644 include/asm-generic/dma-contiguous.h
create mode 100644 include/linux/dma-contiguous.h
--
1.7.1.569.g6f426