Hello,
This is one more respin of the patches which add support for creating reserved memory regions defined in device tree. The last attempt (http://lists.linaro.org/pipermail/linaro-mm-sig/2014-February/003738.html) ended in merging only half of the code, so right now we have the complete documentation merged, but only the basic code, implementing half of what is written in that documentation. Although the merged patches allow memory to be reserved, there is no way of using it for devices and drivers.
This situation makes CMA rather useless, as the main architecture that used it (ARM) has been converted from board-file based system initialization to device tree. There is thus no longer any place for direct calls to dma_declare_contiguous(), and a new solution based on device tree is urgently needed.
This patch series fixes this issue. It provides two drivers for reserved memory, both already widely discussed and already present in the kernel: the first based on the DMA-coherent allocator, the second using the Contiguous Memory Allocator. The first one nicely implements the typical 'carved out' style of reserved memory, allocating contiguous buffers from memory used exclusively by the devices assigned to the given memory region. The second one allows reserved memory to be reused for movable kernel pages (like disk buffers or anonymous memory), which are migrated out when a device allocates a contiguous memory buffer. Both drivers provide memory buffers via the standard dma-mapping API.
The patches have been rebased on top of the latest CMA and mm changes merged into the akpm kernel tree.
To define a 64MiB CMA region, the following node is needed:

	multimedia_reserved: multimedia_mem_region {
		compatible = "shared-dma-pool";
		reusable;
		size = <0x4000000>;
		alignment = <0x400000>;
	};

Similarly, one can define a 64MiB region with DMA coherent memory:

	multimedia_reserved: multimedia_mem_region {
		compatible = "shared-dma-pool";
		no-map;
		size = <0x4000000>;
		alignment = <0x400000>;
	};
Then the defined region can be assigned to devices:
	scaler: scaler@12500000 {
		memory-region = <&multimedia_reserved>;
		/* ... */
	};

	codec: codec@12600000 {
		memory-region = <&multimedia_reserved>;
		/* ... */
	};
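For completeness, the fragments above can be combined; per the already-merged reserved-memory binding documentation, the region nodes live under a /reserved-memory node at the root. The sketch below is illustrative (the cell sizes and unit addresses are assumptions, not part of the patches):

```dts
/ {
	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		multimedia_reserved: multimedia_mem_region {
			compatible = "shared-dma-pool";
			reusable;
			size = <0x4000000>;
			alignment = <0x400000>;
		};
	};

	scaler: scaler@12500000 {
		memory-region = <&multimedia_reserved>;
		/* ... */
	};
};
```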
Best regards
Marek Szyprowski
Samsung R&D Institute Poland
Changes since v1: (http://www.spinics.net/lists/arm-kernel/msg343702.html)
- fixed possible memory leak in case of reserved memory allocation failure
  (thanks to Joonsoo Kim)

Changes since the version posted in the '[PATCH v6 00/11] reserved-memory regions/CMA in devicetree, again' thread
(http://lists.linaro.org/pipermail/linaro-mm-sig/2014-February/003738.html):
- rebased on top of the '[PATCH v3 -next 0/9] CMA: generalize CMA reserved area
  management code' patch series on v3.16-rc3
- improved dma-coherent driver, which now correctly handles assigning more than
  one device to the given memory region
Patch summary:
Marek Szyprowski (4):
  drivers: of: add automated assignment of reserved regions to client devices
  drivers: of: initialize and assign reserved memory to newly created devices
  drivers: dma-coherent: add initialization from device tree
  drivers: dma-contiguous: add initialization from device tree

 drivers/base/dma-coherent.c     | 40 +++++++++++++++++++++++
 drivers/base/dma-contiguous.c   | 60 +++++++++++++++++++++++++++++++++++
 drivers/of/of_reserved_mem.c    | 70 +++++++++++++++++++++++++++++++++++++++++
 drivers/of/platform.c           |  7 +++++
 include/linux/cma.h             |  3 ++
 include/linux/of_reserved_mem.h |  7 +++++
 mm/cma.c                        | 62 +++++++++++++++++++++++++++++-------
 7 files changed, 238 insertions(+), 11 deletions(-)
This patch adds code for automated assignment of reserved memory regions to struct device. The reserved_mem->ops->device_init()/device_release() callbacks are called to perform reserved-memory-driver specific initialization and cleanup.
Based on previous code provided by Josh Cartwright joshc@codeaurora.org
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 drivers/of/of_reserved_mem.c    | 70 +++++++++++++++++++++++++++++++++++++++++
 include/linux/of_reserved_mem.h |  7 +++++
 2 files changed, 77 insertions(+)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 632aae861375..59fb12e84e6b 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -206,8 +206,16 @@ void __init fdt_init_reserved_mem(void)
 	for (i = 0; i < reserved_mem_count; i++) {
 		struct reserved_mem *rmem = &reserved_mem[i];
 		unsigned long node = rmem->fdt_node;
+		int len;
+		const __be32 *prop;
 		int err = 0;

+		prop = of_get_flat_dt_prop(node, "phandle", &len);
+		if (!prop)
+			prop = of_get_flat_dt_prop(node, "linux,phandle", &len);
+		if (prop)
+			rmem->phandle = of_read_number(prop, len/4);
+
 		if (rmem->size == 0)
 			err = __reserved_mem_alloc_size(node, rmem->name,
 						&rmem->base, &rmem->size);
@@ -215,3 +223,65 @@ void __init fdt_init_reserved_mem(void)
 			__reserved_mem_init_node(rmem);
 	}
 }
+
+static inline struct reserved_mem *__find_rmem(struct device_node *node)
+{
+	unsigned int i;
+
+	if (!node->phandle)
+		return NULL;
+
+	for (i = 0; i < reserved_mem_count; i++)
+		if (reserved_mem[i].phandle == node->phandle)
+			return &reserved_mem[i];
+	return NULL;
+}
+
+/**
+ * of_reserved_mem_device_init() - assign reserved memory region to given device
+ *
+ * This function assign memory region pointed by "memory-region" device tree
+ * property to the given device.
+ */
+void of_reserved_mem_device_init(struct device *dev)
+{
+	struct reserved_mem *rmem;
+	struct device_node *np;
+
+	np = of_parse_phandle(dev->of_node, "memory-region", 0);
+	if (!np)
+		return;
+
+	rmem = __find_rmem(np);
+	of_node_put(np);
+
+	if (!rmem || !rmem->ops || !rmem->ops->device_init)
+		return;
+
+	rmem->ops->device_init(rmem, dev);
+	dev_info(dev, "assigned reserved memory node %s\n", rmem->name);
+}
+
+/**
+ * of_reserved_mem_device_release() - release reserved memory device structures
+ *
+ * This function releases structures allocated for memory region handling for
+ * the given device.
+ */
+void of_reserved_mem_device_release(struct device *dev)
+{
+	struct reserved_mem *rmem;
+	struct device_node *np;
+
+	np = of_parse_phandle(dev->of_node, "memory-region", 0);
+	if (!np)
+		return;
+
+	rmem = __find_rmem(np);
+	of_node_put(np);
+
+	if (!rmem || !rmem->ops || !rmem->ops->device_release)
+		return;
+
+	rmem->ops->device_release(rmem, dev);
+}
diff --git a/include/linux/of_reserved_mem.h b/include/linux/of_reserved_mem.h
index 4669ddfdd5af..5b5efae09135 100644
--- a/include/linux/of_reserved_mem.h
+++ b/include/linux/of_reserved_mem.h
@@ -8,6 +8,7 @@ struct reserved_mem_ops;
 struct reserved_mem {
 	const char *name;
 	unsigned long fdt_node;
+	unsigned long phandle;
 	const struct reserved_mem_ops *ops;
 	phys_addr_t base;
 	phys_addr_t size;
@@ -27,10 +28,16 @@ typedef int (*reservedmem_of_init_fn)(struct reserved_mem *rmem);
 	_OF_DECLARE(reservedmem, name, compat, init, reservedmem_of_init_fn)

 #ifdef CONFIG_OF_RESERVED_MEM
+void of_reserved_mem_device_init(struct device *dev);
+void of_reserved_mem_device_release(struct device *dev);
+
 void fdt_init_reserved_mem(void);
 void fdt_reserved_mem_save_node(unsigned long node, const char *uname,
 			       phys_addr_t base, phys_addr_t size);
 #else
+static inline void of_reserved_mem_device_init(struct device *dev) { }
+static inline void of_reserved_mem_device_release(struct device *pdev) { }
+
 static inline void fdt_init_reserved_mem(void) { }
 static inline void fdt_reserved_mem_save_node(unsigned long node,
 		const char *uname, phys_addr_t base, phys_addr_t size) { }
Use the recently introduced of_reserved_mem_device_init() function to automatically assign the respective reserved memory region to newly created platform and AMBA devices.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 drivers/of/platform.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/drivers/of/platform.c b/drivers/of/platform.c
index 6c48d73a7fd7..a7f967866f13 100644
--- a/drivers/of/platform.c
+++ b/drivers/of/platform.c
@@ -21,6 +21,7 @@
 #include <linux/of_device.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/platform_device.h>

 const struct of_device_id of_default_bus_match_table[] = {
@@ -237,12 +238,15 @@ static struct platform_device *of_platform_device_create_pdata(
 	dev->dev.bus = &platform_bus_type;
 	dev->dev.platform_data = platform_data;

+	of_reserved_mem_device_init(&dev->dev);
+
 	/* We do not fill the DMA ops for platform devices by default.
 	 * This is currently the responsibility of the platform code
 	 * to do such, possibly using a device notifier
 	 */

 	if (of_device_add(dev) != 0) {
+		of_reserved_mem_device_release(&dev->dev);
 		platform_device_put(dev);
 		goto err_clear_flag;
 	}
@@ -304,6 +308,8 @@ static struct amba_device *of_amba_device_create(struct device_node *node,
 	else
 		of_device_make_bus_id(&dev->dev);

+	of_reserved_mem_device_init(&dev->dev);
+
 	/* Allow the HW Peripheral ID to be overridden */
 	prop = of_get_property(node, "arm,primecell-periphid", NULL);
 	if (prop)
@@ -330,6 +336,7 @@ static struct amba_device *of_amba_device_create(struct device_node *node,
 	return dev;

 err_free:
+	of_reserved_mem_device_release(&dev->dev);
 	amba_device_put(dev);
 err_clear_flag:
 	of_node_clear_flag(node, OF_POPULATED);
Add support for handling 'shared-dma-pool' reserved-memory device tree nodes.
Based on previous code provided by Josh Cartwright joshc@codeaurora.org
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 drivers/base/dma-coherent.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)
diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index 7d6e84a51424..b20cbe095d86 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -218,3 +218,43 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 	return 0;
 }
 EXPORT_SYMBOL(dma_mmap_from_coherent);
+
+/*
+ * Support for reserved memory regions defined in device tree
+ */
+#ifdef CONFIG_OF_RESERVED_MEM
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+
+static void rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
+{
+	dma_declare_coherent_memory(dev, rmem->base, rmem->base,
+				    rmem->size, DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
+}
+
+static void rmem_dma_device_release(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	dma_release_declared_memory(dev);
+}
+
+static const struct reserved_mem_ops rmem_dma_ops = {
+	.device_init	= rmem_dma_device_init,
+	.device_release	= rmem_dma_device_release,
+};
+
+static int __init rmem_dma_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL))
+		return -EINVAL;
+
+	rmem->ops = &rmem_dma_ops;
+	pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(dma, "shared-dma-pool", rmem_dma_setup);
+#endif
On 7/14/2014 12:12 AM, Marek Szyprowski wrote:
Add support for handling 'shared-dma-pool' reserved-memory device tree nodes.
Based on previous code provided by Josh Cartwright joshc@codeaurora.org
Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com
 drivers/base/dma-coherent.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)
diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index 7d6e84a51424..b20cbe095d86 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -218,3 +218,43 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 	return 0;
 }
 EXPORT_SYMBOL(dma_mmap_from_coherent);
+
+/*
+ * Support for reserved memory regions defined in device tree
+ */
+#ifdef CONFIG_OF_RESERVED_MEM
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+
+static void rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
+{
+	dma_declare_coherent_memory(dev, rmem->base, rmem->base,
+				    rmem->size, DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
+}
+
+static void rmem_dma_device_release(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	dma_release_declared_memory(dev);
+}
+
+static const struct reserved_mem_ops rmem_dma_ops = {
+	.device_init	= rmem_dma_device_init,
+	.device_release	= rmem_dma_device_release,
+};
+
+static int __init rmem_dma_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL))
+		return -EINVAL;
Can we add a check for 'no-map' property here? At least on ARM, the lack of the no-map property causes the ioremap to fail.
Thanks, Laura
Add a function to create a CMA region from previously reserved memory, and add support for handling 'shared-dma-pool' reserved-memory device tree nodes.
Based on previous code provided by Josh Cartwright joshc@codeaurora.org
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 drivers/base/dma-contiguous.c | 60 +++++++++++++++++++++++++++++++++++
 include/linux/cma.h           |  3 +++
 mm/cma.c                      | 62 +++++++++++++++++++++++++++++--------
 3 files changed, 114 insertions(+), 11 deletions(-)
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 6606abdf880c..0e480146fe05 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -211,3 +211,63 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 {
 	return cma_release(dev_get_cma_area(dev), pages, count);
 }
+
+/*
+ * Support for reserved memory regions defined in device tree
+ */
+#ifdef CONFIG_OF_RESERVED_MEM
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+
+#undef pr_fmt
+#define pr_fmt(fmt) fmt
+
+static void rmem_cma_device_init(struct reserved_mem *rmem, struct device *dev)
+{
+	struct cma *cma = rmem->priv;
+	dev_set_cma_area(dev, cma);
+}
+
+static const struct reserved_mem_ops rmem_cma_ops = {
+	.device_init	= rmem_cma_device_init,
+};
+
+static int __init rmem_cma_setup(struct reserved_mem *rmem)
+{
+	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+	phys_addr_t mask = align - 1;
+	unsigned long node = rmem->fdt_node;
+	struct cma *cma;
+	int err;
+
+	if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if ((rmem->base & mask) || (rmem->size & mask)) {
+		pr_err("Reserved memory: incorrect alignment of CMA region\n");
+		return -EINVAL;
+	}
+
+	err = cma_init_reserved_mem(rmem->base, rmem->size, 0, &cma);
+	if (err) {
+		pr_err("Reserved memory: unable to setup CMA region\n");
+		return err;
+	}
+	/* Architecture specific contiguous memory fixup. */
+	dma_contiguous_early_fixup(rmem->base, rmem->size);
+
+	if (of_get_flat_dt_prop(node, "linux,cma-default", NULL))
+		dma_contiguous_set_default(cma);
+
+	rmem->ops = &rmem_cma_ops;
+	rmem->priv = cma;
+
+	pr_info("Reserved memory: created CMA memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(cma, "shared-dma-pool", rmem_cma_setup);
+#endif
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 32cab7a425f9..9a18a2b1934c 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -16,6 +16,9 @@ extern int __init cma_declare_contiguous(phys_addr_t size,
 			phys_addr_t base, phys_addr_t limit,
 			phys_addr_t alignment, unsigned int order_per_bit,
 			bool fixed, struct cma **res_cma);
+extern int cma_init_reserved_mem(phys_addr_t size,
+					phys_addr_t base, int order_per_bit,
+					struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, int count, unsigned int align);
 extern bool cma_release(struct cma *cma, struct page *pages, int count);
 #endif
diff --git a/mm/cma.c b/mm/cma.c
index 4b251b037e1b..b3d8b925ad34 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -140,6 +140,54 @@ static int __init cma_init_reserved_areas(void)
 core_initcall(cma_init_reserved_areas);

 /**
+ * cma_init_reserved_mem() - create custom contiguous area from reserved memory
+ * @base: Base address of the reserved area
+ * @size: Size of the reserved area (in bytes),
+ * @order_per_bit: Order of pages represented by one bit on bitmap.
+ * @res_cma: Pointer to store the created cma region.
+ *
+ * This function creates custom contiguous area from already reserved memory.
+ */
+int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
+				 int order_per_bit, struct cma **res_cma)
+{
+	struct cma *cma;
+	phys_addr_t alignment;
+
+	/* Sanity checks */
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size || !memblock_is_region_reserved(base, size))
+		return -EINVAL;
+
+	/* ensure minimal alignment required by mm core */
+	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+
+	/* alignment should be aligned with order_per_bit */
+	if (!IS_ALIGNED(alignment >> PAGE_SHIFT, 1 << order_per_bit))
+		return -EINVAL;
+
+	if (ALIGN(base, alignment) != base || ALIGN(size, alignment) != size)
+		return -EINVAL;
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	cma = &cma_areas[cma_area_count];
+	cma->base_pfn = PFN_DOWN(base);
+	cma->count = size >> PAGE_SHIFT;
+	cma->order_per_bit = order_per_bit;
+	*res_cma = cma;
+	cma_area_count++;
+
+	return 0;
+}
+
+/**
 * cma_declare_contiguous() - reserve custom contiguous area
 * @base: Base address of the reserved area optional, use 0 for any
 * @size: Size of the reserved area (in bytes),
@@ -162,7 +210,6 @@ int __init cma_declare_contiguous(phys_addr_t base,
 			phys_addr_t alignment, unsigned int order_per_bit,
 			bool fixed, struct cma **res_cma)
 {
-	struct cma *cma;
 	int ret = 0;

 	pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
@@ -214,16 +261,9 @@ int __init cma_declare_contiguous(phys_addr_t base,
 		}
 	}

-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma = &cma_areas[cma_area_count];
-	cma->base_pfn = PFN_DOWN(base);
-	cma->count = size >> PAGE_SHIFT;
-	cma->order_per_bit = order_per_bit;
-	*res_cma = cma;
-	cma_area_count++;
+	ret = cma_init_reserved_mem(base, size, order_per_bit, res_cma);
+	if (ret)
+		goto err;

 	pr_info("Reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
 		(unsigned long)base);
On 7/14/2014 12:12 AM, Marek Szyprowski wrote:
Add a function to create CMA region from previously reserved memory and add support for handling 'shared-dma-pool' reserved-memory device tree nodes.
Based on previous code provided by Josh Cartwright joshc@codeaurora.org
Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com
 drivers/base/dma-contiguous.c | 60 +++++++++++++++++++++++++++++++++++
 include/linux/cma.h           |  3 +++
 mm/cma.c                      | 62 +++++++++++++++++++++++++++++--------
 3 files changed, 114 insertions(+), 11 deletions(-)
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 6606abdf880c..0e480146fe05 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -211,3 +211,63 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 {
 	return cma_release(dev_get_cma_area(dev), pages, count);
 }
+
+/*
+ * Support for reserved memory regions defined in device tree
+ */
+#ifdef CONFIG_OF_RESERVED_MEM
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+
+#undef pr_fmt
+#define pr_fmt(fmt) fmt
+
+static void rmem_cma_device_init(struct reserved_mem *rmem, struct device *dev)
+{
+	struct cma *cma = rmem->priv;
+	dev_set_cma_area(dev, cma);
+}
+
+static const struct reserved_mem_ops rmem_cma_ops = {
+	.device_init	= rmem_cma_device_init,
+};
+
+static int __init rmem_cma_setup(struct reserved_mem *rmem)
+{
+	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+	phys_addr_t mask = align - 1;
+	unsigned long node = rmem->fdt_node;
+	struct cma *cma;
+	int err;
+
+	if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if ((rmem->base & mask) || (rmem->size & mask)) {
+		pr_err("Reserved memory: incorrect alignment of CMA region\n");
+		return -EINVAL;
+	}
+
+	err = cma_init_reserved_mem(rmem->base, rmem->size, 0, &cma);
+	if (err) {
+		pr_err("Reserved memory: unable to setup CMA region\n");
+		return err;
+	}
+	/* Architecture specific contiguous memory fixup. */
+	dma_contiguous_early_fixup(rmem->base, rmem->size);
+
+	if (of_get_flat_dt_prop(node, "linux,cma-default", NULL))
+		dma_contiguous_set_default(cma);
+
+	rmem->ops = &rmem_cma_ops;
+	rmem->priv = cma;
+
+	pr_info("Reserved memory: created CMA memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(cma, "shared-dma-pool", rmem_cma_setup);
+#endif
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 32cab7a425f9..9a18a2b1934c 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -16,6 +16,9 @@ extern int __init cma_declare_contiguous(phys_addr_t size,
 			phys_addr_t base, phys_addr_t limit,
 			phys_addr_t alignment, unsigned int order_per_bit,
 			bool fixed, struct cma **res_cma);
+extern int cma_init_reserved_mem(phys_addr_t size,
+					phys_addr_t base, int order_per_bit,
+					struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, int count, unsigned int align);
 extern bool cma_release(struct cma *cma, struct page *pages, int count);
 #endif
diff --git a/mm/cma.c b/mm/cma.c
index 4b251b037e1b..b3d8b925ad34 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -140,6 +140,54 @@ static int __init cma_init_reserved_areas(void)
 core_initcall(cma_init_reserved_areas);

 /**
+ * cma_init_reserved_mem() - create custom contiguous area from reserved memory
+ * @base: Base address of the reserved area
+ * @size: Size of the reserved area (in bytes),
+ * @order_per_bit: Order of pages represented by one bit on bitmap.
+ * @res_cma: Pointer to store the created cma region.
+ *
+ * This function creates custom contiguous area from already reserved memory.
+ */
+int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
+				 int order_per_bit, struct cma **res_cma)
+{
+	struct cma *cma;
+	phys_addr_t alignment;
+
+	/* Sanity checks */
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size || !memblock_is_region_reserved(base, size))
+		return -EINVAL;
+
+	/* ensure minimal alignment required by mm core */
+	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+
+	/* alignment should be aligned with order_per_bit */
+	if (!IS_ALIGNED(alignment >> PAGE_SHIFT, 1 << order_per_bit))
+		return -EINVAL;
+
+	if (ALIGN(base, alignment) != base || ALIGN(size, alignment) != size)
+		return -EINVAL;
Rejecting the base/size right out if the alignment isn't correct is difficult to work with. There's no guarantee that a dynamically placed region will end up with the correct alignment or that the size was specified properly. This means the best option is manually rounding up the sizes and specifying the alignment in devicetree. But the alignment will also change if you boot with or without CONFIG_ARM64_64K_PAGES for example so there is no way to guarantee what is specified in devicetree will work. Perhaps this is a limitation of how the devicetree is setup but it seems like a big pain to get correct and prone to breakage if the kernel changes.
Thanks, Laura
On 7/14/2014 12:12 AM, Marek Szyprowski wrote:
Hello,
[...]
To define a 64MiB CMA region following node is needed:
	multimedia_reserved: multimedia_mem_region {
		compatible = "shared-dma-pool";
		reusable;
		size = <0x4000000>;
		alignment = <0x400000>;
	};
Similarly, one can define 64MiB region with DMA coherent memory:
	multimedia_reserved: multimedia_mem_region {
		compatible = "shared-dma-pool";
		no-map;
		size = <0x4000000>;
		alignment = <0x400000>;
	};
Longer term, I think it would be good if we didn't have to use no-map with the coherent memory. With no-map and dma-coherent.c right now, not only do you lose out on the physical memory space, you also have to give up the same amount of vmalloc space for mapping. On arm32, if you have the default 240MB vmalloc space, 64M is ~25% of the vmalloc space. At least on arm you can make this up by remapping the memory as coherent.
I haven't seen this picked up anywhere yet so you are welcome to add
Tested-by: Laura Abbott lauraa@codeaurora.org
Thanks, Laura
Hello,
On 2014-08-09 02:28, Laura Abbott wrote:
Longer term, I think it would be good if we didn't have to use no-map with the coherent memory. With no-map and dma-coherent.c right now, not only do you lose out on the physical memory space, you also have to give up the same amount of vmalloc space for mapping. On arm32, if you have the default 240MB vmalloc space, 64M is ~25% of the vmalloc space. At least on arm you can make this up by remapping the memory as coherent.
I haven't seen this picked up anywhere yet so you are welcome to add
Tested-by: Laura Abbott lauraa@codeaurora.org
Right, once the code reaches mainline I will add code which removes the no-map requirement. Changing memory attributes can be handled in this case the same way as for CMA.
Best regards
Marek Szyprowski