Hi!
Aaro Koskinen and Josh Coombs reported that commit e9da6e9905e639 ("ARM:
dma-mapping: remove custom consistent dma region") introduced a
regression. It turned out that the default 256 KiB atomic coherent
pool might not be enough. After that patch, some Kirkwood systems run
out of atomic coherent memory and fail without any meaningful message.
This patch series is an attempt to fix those issues by adding a function
for setting the coherent pool size from platform initialization code and
by increasing the size of the pool for Kirkwood systems.
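The idea is roughly the following (a sketch only; the exact function name,
the Kirkwood hook and the new pool size are illustrative here, see the
patches for the details):

/* arch/arm/include/asm/dma-mapping.h (sketch) */
/* Let platform code enlarge the atomic coherent pool before it is
 * allocated during early init (the default stays at 256 KiB). */
extern void __init init_dma_coherent_pool_size(unsigned long size);

/* arch/arm/mach-kirkwood/common.c (sketch; size is a placeholder) */
void __init kirkwood_init_early(void)
{
        /* 256 KiB is not enough for some Kirkwood setups */
        init_dma_coherent_pool_size(SZ_1M);
}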
Best regards
Marek Szyprowski
Samsung Poland R&D Center
Patch summary:
Marek Szyprowski (3):
ARM: DMA-Mapping: add function for setting coherent pool size from
platform code
ARM: DMA-Mapping: print warning when atomic coherent allocation fails
ARM: Kirkwood: increase atomic coherent pool size
arch/arm/include/asm/dma-mapping.h | 7 +++++++
arch/arm/mach-kirkwood/common.c | 7 +++++++
arch/arm/mm/dma-mapping.c | 22 +++++++++++++++++++++-
3 files changed, 35 insertions(+), 1 deletions(-)
--
1.7.1.569.g6f426
Hi all,
We have run into a problem using dmac_map_area & dmac_flush_range on a
user-space address: the mcr does not return on an ARMv7 processor.
The existing ION carveout heap does not support partial cache flush;
the whole buffer is flushed every time. There is only one dirty bit per
carveout buffer, since sg_table->nents is 1.
drivers/gpu/ion/ion_carveout_heap.c
ion_carveout_heap_map_dma -> sg_alloc_table(table, 1, GFP_KERNEL);
ion_buffer_alloc_dirty -> pages = buffer->sg_table->nents;
We want to support partial cache flush, aligned to the cache line rather
than to PAGE_SIZE, for efficiency reasons.
We have considered extending the dirty bits, but they can only track
PAGE_SIZE-aligned ranges.
As an experiment we modified the ION_IOC_SYNC ioctl on ARMv7 to call
dmac_map_area & dmac_flush_range directly with the address from user space.
However, we found that dmac_map_area cannot work with this user-space
address; more precisely, the mcr cannot work with an address from user
space and hangs.
Also, ion_vm_fault happens twice.
The first time is from __dabt_usr, when we access the mmapped buffer; that is fine.
The second is from __dabt_svc and is caused by the mcr, which is strange.
The sequence is:
1. ION allocates from the carveout heap.
2. addr = user mmap of the buffer.
3. User accesses addr: ion_vm_fault (__dabt_usr) builds the page table and
   calls vm_insert_page.
4. dmac_map_area & dmac_flush_range with addr -> ion_vm_fault (__dabt_svc).
5. mcr hangs.
We do not understand why ion_vm_fault happens twice, given that the page
table has already been built, nor why the mcr hangs with an address from
user space.
Besides, there is no such problem with ION on 3.0, which does not use
ion_vm_fault.
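One direction we are experimenting with is to do the maintenance on a
kernel alias of the carveout pages instead of the raw user address. A
rough, untested sketch (names and the way the physical address is obtained
are just illustrative):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/highmem.h>
#include <asm/cacheflush.h>
#include <asm/outercache.h>

/* Flush [start_phys, start_phys + len) through a kernel mapping,
 * page by page, so no user-space VA is ever handed to the mcr. */
static void partial_flush_phys(phys_addr_t start_phys, size_t len)
{
        phys_addr_t phys = start_phys & PAGE_MASK;
        size_t offset = start_phys & ~PAGE_MASK;

        while (len) {
                size_t chunk = min_t(size_t, len, PAGE_SIZE - offset);
                struct page *page = pfn_to_page(phys >> PAGE_SHIFT);
                void *kaddr = kmap_atomic(page);

                /* clean+invalidate on the kernel alias of the page */
                dmac_flush_range(kaddr + offset, kaddr + offset + chunk);
                outer_flush_range(phys + offset, phys + offset + chunk);

                kunmap_atomic(kaddr);
                phys += PAGE_SIZE;
                offset = 0;
                len -= chunk;
        }
}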
Any suggestions?
Thanks
Hi,
ION debugfs currently shows and groups output based on heap type.
But it is possible to have multiple heaps of the same type, e.g. for the
CMA and carveout types.
It is more useful to get usage information for each individual heap.
- Nishanth Peethambaran
From fa819b42fb69321a8e5db260ba9fd8ce7a2f16d2 Mon Sep 17 00:00:00 2001
From: Nishanth Peethambaran <nishanth(a)broadcom.com>
Date: Tue, 28 Aug 2012 07:57:37 +0530
Subject: [PATCH] gpu: ion: Update debugfs to show for each id
Update the debugfs read of client and heap to show
based on 'id' instead of 'type'.
Multiple heaps of the same type can be present, but
id is unique.
---
drivers/gpu/ion/ion.c | 14 +++++++-------
1 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/ion/ion.c b/drivers/gpu/ion/ion.c
index 34c12df..65cedee 100644
--- a/drivers/gpu/ion/ion.c
+++ b/drivers/gpu/ion/ion.c
@@ -547,11 +547,11 @@ static int ion_debug_client_show(struct seq_file *s, void *unused)
for (n = rb_first(&client->handles); n; n = rb_next(n)) {
struct ion_handle *handle = rb_entry(n, struct ion_handle,
node);
- enum ion_heap_type type = handle->buffer->heap->type;
+ int id = handle->buffer->heap->id;
- if (!names[type])
- names[type] = handle->buffer->heap->name;
- sizes[type] += handle->buffer->size;
+ if (!names[id])
+ names[id] = handle->buffer->heap->name;
+ sizes[id] += handle->buffer->size;
}
mutex_unlock(&client->lock);
@@ -1121,7 +1121,7 @@ static const struct file_operations ion_fops = {
};
static size_t ion_debug_heap_total(struct ion_client *client,
- enum ion_heap_type type)
+ int id)
{
size_t size = 0;
struct rb_node *n;
@@ -1131,7 +1131,7 @@ static size_t ion_debug_heap_total(struct ion_client *client,
struct ion_handle *handle = rb_entry(n,
struct ion_handle,
node);
- if (handle->buffer->heap->type == type)
+ if (handle->buffer->heap->id == id)
size += handle->buffer->size;
}
mutex_unlock(&client->lock);
@@ -1149,7 +1149,7 @@ static int ion_debug_heap_show(struct seq_file *s, void *unused)
for (n = rb_first(&dev->clients); n; n = rb_next(n)) {
struct ion_client *client = rb_entry(n, struct ion_client,
node);
- size_t size = ion_debug_heap_total(client, heap->type);
+ size_t size = ion_debug_heap_total(client, heap->id);
if (!size)
continue;
if (client->task) {
--
1.7.0.4
How do we share ion buffers from user-space with other processes if
they are exported/shared after fork?
The ION_IOC_SHARE ioctl creates an fd for process-1. In the 3.0 kernel,
the ION_IOC_IMPORT ioctl from process-2 calls ion_import_fd(), which
calls fget(fd), which fails to find the file for the fd shared by
process-1.
In the 3.4 kernel, dma_buf_get() does the fget(fd) to get the struct
file, which also fails for the same reason: fget() searches only in
current->files.
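The only workaround we have found so far is to pass the fd itself from
process-1 to process-2 over a UNIX domain socket with SCM_RIGHTS, so that
process-2 gets its own entry in its file table before importing. A minimal
user-space sketch (error handling trimmed):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_fd(int sock, int fd)
{
        struct msghdr msg = { 0 };
        struct iovec iov;
        struct cmsghdr *cmsg;
        char cbuf[CMSG_SPACE(sizeof(int))] = { 0 };
        char dummy = '!';

        iov.iov_base = &dummy;
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof(cbuf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        /* The receiver ends up with a new fd referring to the same
         * struct file, so fget()/dma_buf_get() succeeds for it. */
        return sendmsg(sock, &msg, 0);
}

Is that the intended way, or is there a kernel-side mechanism we are missing?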
- Nishanth Peethambaran
Hi all,
I think we have a memory mapping issue in the ION carveout heap in the
v3.4+ Android kernel.
The scenario is a user app + a kernel driver (CPU) + a kernel driver (DMA),
where all three clients access the memory, and the memory is cacheable.
The carveout heap's .map_kernel() remaps the allocated memory buffer
with ioremap().
In arm_ioremap() we don't allow system RAM to be mapped. So, in order to
make .map_kernel() work, we need to use memblock_alloc() &
memblock_remove() to move the heap memory from system RAM into the
reserved area, which removes the buffer's linear address from the page
table. Kernel drivers that want to access the buffer then use the new
virtual address returned by .map_kernel().
But ION uses dma_sync_sg_for_device() to flush the cache, and that works
on the linear address derived from the page. So it ends up using a virtual
address that no longer exists, because its mapping was removed by
memblock_remove().
Solution #1: .map_kernel() only returns the linear address, so we no
longer need ioremap() or memblock_remove() at all. The limitation of this
solution is that the heap must always lie in lowmem.
Solution #2: use vmap() in .map_kernel().
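For solution #2, it would be something like the following (just a sketch;
the priv_phys field name follows the current carveout heap code, and error
handling is omitted):

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include "ion_priv.h"

static void *ion_carveout_heap_map_kernel(struct ion_heap *heap,
                                          struct ion_buffer *buffer)
{
        unsigned long npages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT;
        struct page **pages = vmalloc(sizeof(struct page *) * npages);
        unsigned long i;
        void *vaddr;

        if (!pages)
                return NULL;

        for (i = 0; i < npages; i++)
                pages[i] = pfn_to_page((buffer->priv_phys >> PAGE_SHIFT) + i);

        /* PAGE_KERNEL keeps the mapping cacheable, consistent with the
         * linear mapping that the DMA sync code expects. */
        vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
        vfree(pages);
        return vaddr;
}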
What do you think about these two solutions?
Regards
Haojian
Contiguous Memory Allocator requires each of its regions to be aligned
in such a way that it is possible to change migration type for all
pageblocks holding it and then isolate pages of the largest possible order
from the buddy allocator (which is MAX_ORDER-1). This patch relaxes the alignment
requirements by one order, because MAX_ORDER alignment is not really
needed.
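With the default of 4 KiB pages and MAX_ORDER = 11 this reduces the
required region alignment from 8 MiB (PAGE_SIZE << 11) to 4 MiB
(PAGE_SIZE << 10), since pageblock_order never exceeds MAX_ORDER - 1.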
Signed-off-by: Marek Szyprowski <m.szyprowski(a)samsung.com>
CC: Michal Nazarewicz <mina86(a)mina86.com>
---
drivers/base/dma-contiguous.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 78efb03..34d94c7 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -250,7 +250,7 @@ int __init dma_declare_contiguous(struct device *dev, unsigned long size,
return -EINVAL;
/* Sanitise input arguments */
- alignment = PAGE_SIZE << max(MAX_ORDER, pageblock_order);
+ alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
base = ALIGN(base, alignment);
size = ALIGN(size, alignment);
limit &= ~(alignment - 1);
--
1.7.1.569.g6f426