Hi,
I've been (once again) looking at alternate caching models for Ion. Part of this work is also to make Ion fit better into the dma_buf model.
Ion is a bit unusual as a dma_buf user. Most drivers that support dma_buf have two parts: exporting buffers the driver allocates, and importing buffers allocated elsewhere for the driver's use. Ion is basically designed to export only, not to import buffers from other drivers (the need for import is also on my TODO list). Even more unusual, there is no actual 'driver' to map into: Ion currently does nothing except pass back the same sg_table each time, without ever calling dma_map.
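For reference, a minimal sketch of the importer flow Ion is supposed to serve (the function and variable names here are mine, not anything in the tree):

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

static int example_import(struct device *dev, int fd)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	/* take a reference on the buffer behind the fd */
	dmabuf = dma_buf_get(fd);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	/* declare the device as a user of the buffer */
	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return PTR_ERR(attach);
	}

	/* the exporter's .map_dma_buf should hand back a device-mapped sg_table */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
		return PTR_ERR(sgt);
	}

	/* ... program the device using sg_dma_address()/sg_dma_len() ... */

	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, attach);
	dma_buf_put(dmabuf);
	return 0;
}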
The description of the .map_dma_buf function in dma_buf_ops reads:

 * @map_dma_buf: returns list of scatter pages allocated, increases usecount
 *               of the buffer. Requires atleast one attach to be called
 *               before. Returned sg list should already be mapped into
 *               _device_ address space. This call may sleep. May also return
 *               -EINTR. Should return -EINVAL if attach hasn't been called yet.
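Roughly, a conforming exporter would do something like the following sketch ("example_buffer" is a stand-in for the exporter's private data; patch 4 below moves Ion in this direction):

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

struct example_buffer {
	struct sg_table *sg_table;
};

static struct sg_table *example_map_dma_buf(struct dma_buf_attachment *attachment,
					    enum dma_data_direction direction)
{
	struct example_buffer *buffer = attachment->dmabuf->priv;
	struct sg_table *table = buffer->sg_table;

	/* map into the attached _device_ address space, as the comment asks */
	if (!dma_map_sg(attachment->dev, table->sgl, table->nents, direction))
		return ERR_PTR(-ENOMEM);

	return table;
}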
So Ion is definitely not doing this correctly, and that ties back into correcting the caching model. If we call dma_map_sg in begin_cpu_access and dma_unmap_sg in end_cpu_access, that should be enough to keep the caches properly synchronized, and it means we can drop the various dma_sync calls floating around. This is going to violate one of the big fat comments in ion_buffer_create:
	/*
	 * this will set up dma addresses for the sglist -- it is not
	 * technically correct as per the dma api -- a specific
	 * device isn't really taking ownership here.  However, in practice on
	 * our systems the only dma_address space is physical addresses.
	 * Additionally, we can't afford the overhead of invalidating every
	 * allocation via dma_map_sg. The implicit contract here is that
	 * memory coming from the heaps is ready for dma, ie if it has a
	 * cached mapping that mapping has been invalidated
	 */
	for_each_sg(buffer->sg_table->sgl, sg, buffer->sg_table->nents, i) {
		sg_dma_address(sg) = sg_phys(sg);
		sg_dma_len(sg) = sg->length;
	}
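For contrast, the by-the-book version of that loop would hand the sg_table to the DMA API for a real device -- the one thing Ion doesn't have at allocation time (a sketch with a hypothetical helper name):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_dma_setup(struct device *dev, struct sg_table *table)
{
	/* fills in sg_dma_address()/sg_dma_len() and does the cache maintenance */
	if (!dma_map_sg(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL))
		return -ENOMEM;
	return 0;
}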
The overhead of invalidating is a valid concern. I'm hoping that either the architectures have evolved enough that this is no longer a problem, or that we can figure out some clever use of DMA_ATTR_SKIP_CPU_SYNC.
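One possible shape of that clever use -- an untested sketch with a hypothetical helper name, which assumes the allocation path already did the cache maintenance once, i.e. exactly the implicit contract the comment above describes:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_presynced(struct device *dev, struct sg_table *table,
				 enum dma_data_direction dir)
{
	/* set up dma addresses but skip the (already done) per-map CPU sync */
	if (!dma_map_sg_attrs(dev, table->sgl, table->nents, dir,
			      DMA_ATTR_SKIP_CPU_SYNC))
		return -ENOMEM;
	return 0;
}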
As part of this, I'm considering dropping the fault synchronization. If we have explicit use of begin_cpu_access and the dma_buf sync ioctls, I don't think it should be necessary.
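For reference, the userspace half of that explicit model would look roughly like this (assuming fd is an mmap'd dma_buf fd exported by Ion):

#include <linux/dma-buf.h>
#include <sys/ioctl.h>

static int cpu_touch_buffer(int fd)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
	};

	/* tells the exporter to run begin_cpu_access */
	if (ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync) < 0)
		return -1;

	/* ... read/write the buffer through its mmap ... */

	/* tells the exporter to run end_cpu_access */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
	return ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
}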
I have a 'pre-RFC' tree at https://pagure.io/kernel-ion/branch/ion_cache_proof_dec14. Yes, the patches are not bisectable and there is more to be done. They have been compile tested only and haven't been hooked up to anything to actually run (another big TODO). I'm mostly looking for feedback on whether this looks like the right direction and whether there are going to be major problems with this approach. I don't actually anticipate this getting merged into drivers/staging/android/ion, but this is the easiest way to continue the discussion.
Thanks,
Laura
Laura Abbott (4):
  staging: android: ion: Some cleanup
  staging: android: ion: Duplicate sg_table
  staging: android: ion: Remove page faulting support
  staging: android: ion: Call dma_map_sg for syncing and mapping
 drivers/staging/android/ion/ion-ioctl.c         |   6 -
 drivers/staging/android/ion/ion.c               | 251 ++++++++----------------
 drivers/staging/android/ion/ion.h               |   5 +-
 drivers/staging/android/ion/ion_carveout_heap.c |  16 +-
 drivers/staging/android/ion/ion_chunk_heap.c    |  15 +-
 drivers/staging/android/ion/ion_cma_heap.c      |   5 +-
 drivers/staging/android/ion/ion_page_pool.c     |   3 -
 drivers/staging/android/ion/ion_priv.h          |   4 +-
 drivers/staging/android/ion/ion_system_heap.c   |  14 +-
 9 files changed, 90 insertions(+), 229 deletions(-)
[PATCH 1/4] staging: android: ion: Some cleanup

- dmap_cnt isn't used. Remove it.
- Ion has been using the DMA APIs incorrectly to sync the caches. Remove
  the bad usage in preparation for something better.
- The alignment field doesn't actually change the alignment of an
  allocation; it only serves as an error check. This is basically
  pointless. Remove the field.
Not-signed-off-by: Laura Abbott <labbott@redhat.com>
---
 drivers/staging/android/ion/ion-ioctl.c         |  6 --
 drivers/staging/android/ion/ion.c               | 92 ++-----------------------
 drivers/staging/android/ion/ion.h               |  5 +-
 drivers/staging/android/ion/ion_carveout_heap.c | 16 +----
 drivers/staging/android/ion/ion_chunk_heap.c    | 15 +---
 drivers/staging/android/ion/ion_cma_heap.c      |  5 +-
 drivers/staging/android/ion/ion_page_pool.c     |  3 -
 drivers/staging/android/ion/ion_priv.h          |  4 +-
 drivers/staging/android/ion/ion_system_heap.c   | 14 +---
 9 files changed, 16 insertions(+), 144 deletions(-)
diff --git a/drivers/staging/android/ion/ion-ioctl.c b/drivers/staging/android/ion/ion-ioctl.c
index 7e7431d..a27db9d 100644
--- a/drivers/staging/android/ion/ion-ioctl.c
+++ b/drivers/staging/android/ion/ion-ioctl.c
@@ -95,7 +95,6 @@ long ion_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		struct ion_handle *handle;
 
 		handle = ion_alloc(client, data.allocation.len,
-				   data.allocation.align,
 				   data.allocation.heap_id_mask,
 				   data.allocation.flags);
 		if (IS_ERR(handle))
@@ -146,11 +145,6 @@ long ion_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		data.handle.handle = handle->id;
 		break;
 	}
-	case ION_IOC_SYNC:
-	{
-		ret = ion_sync_for_device(client, data.fd.fd);
-		break;
-	}
 	case ION_IOC_CUSTOM:
 	{
 		if (!dev->custom_ioctl)
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index d5cc307..8dd0932 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -102,7 +102,6 @@ static void ion_buffer_add(struct ion_device *dev,
 static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 					    struct ion_device *dev,
 					    unsigned long len,
-					    unsigned long align,
 					    unsigned long flags)
 {
 	struct ion_buffer *buffer;
@@ -118,15 +117,14 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 	buffer->flags = flags;
 	kref_init(&buffer->ref);
 
-	ret = heap->ops->allocate(heap, buffer, len, align, flags);
+	ret = heap->ops->allocate(heap, buffer, len, flags);
 
 	if (ret) {
 		if (!(heap->flags & ION_HEAP_FLAG_DEFER_FREE))
 			goto err2;
 
 		ion_heap_freelist_drain(heap, 0);
-		ret = heap->ops->allocate(heap, buffer, len, align,
-					  flags);
+		ret = heap->ops->allocate(heap, buffer, len, flags);
 		if (ret)
 			goto err2;
 	}
@@ -400,7 +398,7 @@ static int ion_handle_add(struct ion_client *client, struct ion_handle *handle)
 }
 
 struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
-			     size_t align, unsigned int heap_id_mask,
+			     unsigned int heap_id_mask,
 			     unsigned int flags)
 {
 	struct ion_handle *handle;
@@ -409,8 +407,8 @@ struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
 	struct ion_heap *heap;
 	int ret;
 
-	pr_debug("%s: len %zu align %zu heap_id_mask %u flags %x\n", __func__,
-		 len, align, heap_id_mask, flags);
+	pr_debug("%s: len %zu heap_id_mask %u flags %x\n", __func__,
+		 len, heap_id_mask, flags);
 	/*
 	 * traverse the list of heaps available in this system in priority
 	 * order.  If the heap type is supported by the client, and matches the
@@ -427,7 +425,7 @@ struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
 		/* if the caller didn't specify this heap id */
 		if (!((1 << heap->id) & heap_id_mask))
 			continue;
-		buffer = ion_buffer_create(heap, dev, len, align, flags);
+		buffer = ion_buffer_create(heap, dev, len, flags);
 		if (!IS_ERR(buffer))
 			break;
 	}
@@ -797,17 +795,12 @@ void ion_client_destroy(struct ion_client *client)
 }
 EXPORT_SYMBOL(ion_client_destroy);
 
-static void ion_buffer_sync_for_device(struct ion_buffer *buffer,
-				       struct device *dev,
-				       enum dma_data_direction direction);
-
 static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 					enum dma_data_direction direction)
 {
 	struct dma_buf *dmabuf = attachment->dmabuf;
 	struct ion_buffer *buffer = dmabuf->priv;
 
-	ion_buffer_sync_for_device(buffer, attachment->dev, direction);
 	return buffer->sg_table;
 }
 
@@ -817,60 +810,11 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 {
 }
 
-void ion_pages_sync_for_device(struct device *dev, struct page *page,
-			       size_t size, enum dma_data_direction dir)
-{
-	struct scatterlist sg;
-
-	sg_init_table(&sg, 1);
-	sg_set_page(&sg, page, size, 0);
-	/*
-	 * This is not correct - sg_dma_address needs a dma_addr_t that is valid
-	 * for the targeted device, but this works on the currently targeted
-	 * hardware.
-	 */
-	sg_dma_address(&sg) = page_to_phys(page);
-	dma_sync_sg_for_device(dev, &sg, 1, dir);
-}
-
 struct ion_vma_list {
 	struct list_head list;
 	struct vm_area_struct *vma;
 };
 
-static void ion_buffer_sync_for_device(struct ion_buffer *buffer,
-				       struct device *dev,
-				       enum dma_data_direction dir)
-{
-	struct ion_vma_list *vma_list;
-	int pages = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
-	int i;
-
-	pr_debug("%s: syncing for device %s\n", __func__,
-		 dev ? dev_name(dev) : "null");
-
-	if (!ion_buffer_fault_user_mappings(buffer))
-		return;
-
-	mutex_lock(&buffer->lock);
-	for (i = 0; i < pages; i++) {
-		struct page *page = buffer->pages[i];
-
-		if (ion_buffer_page_is_dirty(page))
-			ion_pages_sync_for_device(dev, ion_buffer_page(page),
-						  PAGE_SIZE, dir);
-
-		ion_buffer_page_clean(buffer->pages + i);
-	}
-	list_for_each_entry(vma_list, &buffer->vmas, list) {
-		struct vm_area_struct *vma = vma_list->vma;
-
-		zap_page_range(vma, vma->vm_start, vma->vm_end - vma->vm_start,
-			       NULL);
-	}
-	mutex_unlock(&buffer->lock);
-}
-
 static int ion_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	struct ion_buffer *buffer = vma->vm_private_data;
@@ -1135,30 +1079,6 @@ struct ion_handle *ion_import_dma_buf_fd(struct ion_client *client, int fd)
 }
 EXPORT_SYMBOL(ion_import_dma_buf_fd);
 
-int ion_sync_for_device(struct ion_client *client, int fd)
-{
-	struct dma_buf *dmabuf;
-	struct ion_buffer *buffer;
-
-	dmabuf = dma_buf_get(fd);
-	if (IS_ERR(dmabuf))
-		return PTR_ERR(dmabuf);
-
-	/* if this memory came from ion */
-	if (dmabuf->ops != &dma_buf_ops) {
-		pr_err("%s: can not sync dmabuf from another exporter\n",
-		       __func__);
-		dma_buf_put(dmabuf);
-		return -EINVAL;
-	}
-	buffer = dmabuf->priv;
-
-	dma_sync_sg_for_device(NULL, buffer->sg_table->sgl,
-			       buffer->sg_table->nents, DMA_BIDIRECTIONAL);
-	dma_buf_put(dmabuf);
-	return 0;
-}
-
 int ion_query_heaps(struct ion_client *client, struct ion_heap_query *query)
 {
 	struct ion_device *dev = client->dev;
diff --git a/drivers/staging/android/ion/ion.h b/drivers/staging/android/ion/ion.h
index 93dafb4..3b4bff5 100644
--- a/drivers/staging/android/ion/ion.h
+++ b/drivers/staging/android/ion/ion.h
@@ -45,7 +45,6 @@ struct ion_buffer;
  * @name:	used for debug purposes
  * @base:	base address of heap in physical memory if applicable
  * @size:	size of the heap in bytes if applicable
- * @align:	required alignment in physical memory if applicable
  * @priv:	private info passed from the board file
  *
  * Provided by the board file.
@@ -93,8 +92,6 @@ void ion_client_destroy(struct ion_client *client);
  * ion_alloc - allocate ion memory
  * @client:	the client
  * @len:	size of the allocation
- * @align:	requested allocation alignment, lots of hardware blocks
- *		have alignment requirements of some kind
  * @heap_id_mask: mask of heaps to allocate from, if multiple bits are set
  *		heaps will be tried in order from highest to lowest
  *		id
@@ -106,7 +103,7 @@ void ion_client_destroy(struct ion_client *client);
  * an opaque handle to it.
  */
 struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
-			     size_t align, unsigned int heap_id_mask,
+			     unsigned int heap_id_mask,
 			     unsigned int flags);
 
 /**
diff --git a/drivers/staging/android/ion/ion_carveout_heap.c b/drivers/staging/android/ion/ion_carveout_heap.c
index a8ea973..e0e360f 100644
--- a/drivers/staging/android/ion/ion_carveout_heap.c
+++ b/drivers/staging/android/ion/ion_carveout_heap.c
@@ -34,8 +34,7 @@ struct ion_carveout_heap {
 };
 
 static ion_phys_addr_t ion_carveout_allocate(struct ion_heap *heap,
-					     unsigned long size,
-					     unsigned long align)
+					     unsigned long size)
 {
 	struct ion_carveout_heap *carveout_heap =
 		container_of(heap, struct ion_carveout_heap, heap);
@@ -60,16 +59,13 @@ static void ion_carveout_free(struct ion_heap *heap, ion_phys_addr_t addr,
 
 static int ion_carveout_heap_allocate(struct ion_heap *heap,
 				      struct ion_buffer *buffer,
-				      unsigned long size, unsigned long align,
+				      unsigned long size,
 				      unsigned long flags)
 {
 	struct sg_table *table;
 	ion_phys_addr_t paddr;
 	int ret;
 
-	if (align > PAGE_SIZE)
-		return -EINVAL;
-
 	table = kmalloc(sizeof(*table), GFP_KERNEL);
 	if (!table)
 		return -ENOMEM;
@@ -77,7 +73,7 @@ static int ion_carveout_heap_allocate(struct ion_heap *heap,
 	if (ret)
 		goto err_free;
 
-	paddr = ion_carveout_allocate(heap, size, align);
+	paddr = ion_carveout_allocate(heap, size);
 	if (paddr == ION_CARVEOUT_ALLOCATE_FAIL) {
 		ret = -ENOMEM;
 		goto err_free_table;
@@ -104,10 +100,6 @@ static void ion_carveout_heap_free(struct ion_buffer *buffer)
 	ion_heap_buffer_zero(buffer);
 
-	if (ion_buffer_cached(buffer))
-		dma_sync_sg_for_device(NULL, table->sgl, table->nents,
-				       DMA_BIDIRECTIONAL);
-
 	ion_carveout_free(heap, paddr, buffer->size);
 	sg_free_table(table);
 	kfree(table);
@@ -132,8 +124,6 @@ struct ion_heap *ion_carveout_heap_create(struct ion_platform_heap *heap_data)
 	page = pfn_to_page(PFN_DOWN(heap_data->base));
 	size = heap_data->size;
 
-	ion_pages_sync_for_device(NULL, page, size, DMA_BIDIRECTIONAL);
-
 	ret = ion_heap_pages_zero(page, size, pgprot_writecombine(PAGE_KERNEL));
 	if (ret)
 		return ERR_PTR(ret);
diff --git a/drivers/staging/android/ion/ion_chunk_heap.c b/drivers/staging/android/ion/ion_chunk_heap.c
index 70495dc..46e13f6 100644
--- a/drivers/staging/android/ion/ion_chunk_heap.c
+++ b/drivers/staging/android/ion/ion_chunk_heap.c
@@ -35,7 +35,7 @@ struct ion_chunk_heap {
 static int ion_chunk_heap_allocate(struct ion_heap *heap,
 				   struct ion_buffer *buffer,
-				   unsigned long size, unsigned long align,
+				   unsigned long size,
 				   unsigned long flags)
 {
 	struct ion_chunk_heap *chunk_heap =
@@ -46,9 +46,6 @@ static int ion_chunk_heap_allocate(struct ion_heap *heap,
 	unsigned long num_chunks;
 	unsigned long allocated_size;
 
-	if (align > chunk_heap->chunk_size)
-		return -EINVAL;
-
 	allocated_size = ALIGN(size, chunk_heap->chunk_size);
 	num_chunks = allocated_size / chunk_heap->chunk_size;
@@ -104,10 +101,6 @@ static void ion_chunk_heap_free(struct ion_buffer *buffer)
 	ion_heap_buffer_zero(buffer);
 
-	if (ion_buffer_cached(buffer))
-		dma_sync_sg_for_device(NULL, table->sgl, table->nents,
-				       DMA_BIDIRECTIONAL);
-
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		gen_pool_free(chunk_heap->pool, page_to_phys(sg_page(sg)),
 			      sg->length);
@@ -135,8 +128,6 @@ struct ion_heap *ion_chunk_heap_create(struct ion_platform_heap *heap_data)
 	page = pfn_to_page(PFN_DOWN(heap_data->base));
 	size = heap_data->size;
 
-	ion_pages_sync_for_device(NULL, page, size, DMA_BIDIRECTIONAL);
-
 	ret = ion_heap_pages_zero(page, size, pgprot_writecombine(PAGE_KERNEL));
 	if (ret)
 		return ERR_PTR(ret);
@@ -160,8 +151,8 @@ struct ion_heap *ion_chunk_heap_create(struct ion_platform_heap *heap_data)
 	chunk_heap->heap.ops = &chunk_heap_ops;
 	chunk_heap->heap.type = ION_HEAP_TYPE_CHUNK;
 	chunk_heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
-	pr_debug("%s: base %lu size %zu align %ld\n", __func__,
-		 chunk_heap->base, heap_data->size, heap_data->align);
+	pr_debug("%s: base %lu size %zu\n", __func__,
+		 chunk_heap->base, heap_data->size);
 
 	return &chunk_heap->heap;
 
diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
index 6c7de74..81780ed 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -42,7 +42,7 @@ struct ion_cma_buffer_info {
 
 /* ION CMA heap operations functions */
 static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
-			    unsigned long len, unsigned long align,
+			    unsigned long len,
 			    unsigned long flags)
 {
 	struct ion_cma_heap *cma_heap = to_cma_heap(heap);
@@ -54,9 +54,6 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
 	if (buffer->flags & ION_FLAG_CACHED)
 		return -EINVAL;
 
-	if (align > PAGE_SIZE)
-		return -EINVAL;
-
 	info = kzalloc(sizeof(struct ion_cma_buffer_info), GFP_KERNEL);
 	if (!info)
 		return ION_CMA_ALLOCATE_FAILED;
diff --git a/drivers/staging/android/ion/ion_page_pool.c b/drivers/staging/android/ion/ion_page_pool.c
index aea89c1..532eda7 100644
--- a/drivers/staging/android/ion/ion_page_pool.c
+++ b/drivers/staging/android/ion/ion_page_pool.c
@@ -30,9 +30,6 @@ static void *ion_page_pool_alloc_pages(struct ion_page_pool *pool)
 
 	if (!page)
 		return NULL;
-	if (!pool->cached)
-		ion_pages_sync_for_device(NULL, page, PAGE_SIZE << pool->order,
-					  DMA_BIDIRECTIONAL);
 	return page;
 }
 
diff --git a/drivers/staging/android/ion/ion_priv.h b/drivers/staging/android/ion/ion_priv.h
index 3c3b324..869a948 100644
--- a/drivers/staging/android/ion/ion_priv.h
+++ b/drivers/staging/android/ion/ion_priv.h
@@ -44,7 +44,6 @@
  * @lock:		protects the buffers cnt fields
  * @kmap_cnt:		number of times the buffer is mapped to the kernel
  * @vaddr:		the kernel mapping if kmap_cnt is not zero
- * @dmap_cnt:		number of times the buffer is mapped for dma
  * @sg_table:		the sg table for the buffer if dmap_cnt is not zero
  * @pages:		flat array of pages in the buffer -- used by fault
  *			handler and only valid for buffers that are faulted in
@@ -70,7 +69,6 @@ struct ion_buffer {
 	struct mutex lock;
 	int kmap_cnt;
 	void *vaddr;
-	int dmap_cnt;
 	struct sg_table *sg_table;
 	struct page **pages;
 	struct list_head vmas;
@@ -174,7 +172,7 @@ struct ion_handle {
 struct ion_heap_ops {
 	int (*allocate)(struct ion_heap *heap,
 			struct ion_buffer *buffer, unsigned long len,
-			unsigned long align, unsigned long flags);
+			unsigned long flags);
 	void (*free)(struct ion_buffer *buffer);
 	void * (*map_kernel)(struct ion_heap *heap, struct ion_buffer *buffer);
 	void (*unmap_kernel)(struct ion_heap *heap, struct ion_buffer *buffer);
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index 3ebbb75..a33331b 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -75,9 +75,6 @@ static struct page *alloc_buffer_page(struct ion_system_heap *heap,
 	page = ion_page_pool_alloc(pool);
 
-	if (cached)
-		ion_pages_sync_for_device(NULL, page, PAGE_SIZE << order,
-					  DMA_BIDIRECTIONAL);
 	return page;
 }
@@ -129,7 +126,7 @@ static struct page *alloc_largest_available(struct ion_system_heap *heap,
 static int ion_system_heap_allocate(struct ion_heap *heap,
 				    struct ion_buffer *buffer,
-				    unsigned long size, unsigned long align,
+				    unsigned long size,
 				    unsigned long flags)
 {
 	struct ion_system_heap *sys_heap = container_of(heap,
@@ -143,9 +140,6 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 	unsigned long size_remaining = PAGE_ALIGN(size);
 	unsigned int max_order = orders[0];
 
-	if (align > PAGE_SIZE)
-		return -EINVAL;
-
 	if (size / PAGE_SIZE > totalram_pages / 2)
 		return -ENOMEM;
 
@@ -372,7 +366,6 @@ void ion_system_heap_destroy(struct ion_heap *heap)
 static int ion_system_contig_heap_allocate(struct ion_heap *heap,
 					   struct ion_buffer *buffer,
 					   unsigned long len,
-					   unsigned long align,
 					   unsigned long flags)
 {
 	int order = get_order(len);
@@ -381,9 +374,6 @@ static int ion_system_contig_heap_allocate(struct ion_heap *heap,
 	unsigned long i;
 	int ret;
 
-	if (align > (PAGE_SIZE << order))
-		return -EINVAL;
-
 	page = alloc_pages(low_order_gfp_flags, order);
 	if (!page)
 		return -ENOMEM;
@@ -408,8 +398,6 @@ static int ion_system_contig_heap_allocate(struct ion_heap *heap,
 
 	buffer->sg_table = table;
 
-	ion_pages_sync_for_device(NULL, page, len, DMA_BIDIRECTIONAL);
-
 	return 0;
 
 free_table:
[PATCH 2/4] staging: android: ion: Duplicate sg_table

Ion currently hands back the same sg_table on every dma_buf map call.
This is incorrect for later usage: each mapping needs its own copy so
that per-attachment dma addresses don't clobber each other.
Not-signed-off-by: Laura Abbott <labbott@redhat.com>
---
 drivers/staging/android/ion/ion.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 8dd0932..76b874a0 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -795,19 +795,49 @@ void ion_client_destroy(struct ion_client *client)
 }
 EXPORT_SYMBOL(ion_client_destroy);
 
+static struct sg_table *dup_sg_table(struct sg_table *table)
+{
+	struct sg_table *new_table;
+	int ret, i;
+	struct scatterlist *sg, *new_sg;
+
+	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+	if (!new_table)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL);
+	if (ret) {
+		kfree(new_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	new_sg = new_table->sgl;
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		memcpy(new_sg, sg, sizeof(*sg));
+		new_sg->dma_address = 0;
+		new_sg = sg_next(new_sg);
+	}
+
+	return new_table;
+}
+
 static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 					enum dma_data_direction direction)
 {
 	struct dma_buf *dmabuf = attachment->dmabuf;
 	struct ion_buffer *buffer = dmabuf->priv;
+	struct sg_table *table;
 
-	return buffer->sg_table;
+	return dup_sg_table(buffer->sg_table);
 }
 
 static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 			      struct sg_table *table,
 			      enum dma_data_direction direction)
 {
+	sg_free_table(table);
+	kfree(table);
 }
 
 struct ion_vma_list {
[PATCH 3/4] staging: android: ion: Remove page faulting support

It's unclear whether this is still wanted or needed. With explicit
begin_cpu_access and the dma_buf sync ioctls, the fault-based dirty
tracking shouldn't be necessary.
Not-signed-off-by: Laura Abbott <labbott@redhat.com>
---
 drivers/staging/android/ion/ion.c | 74 ---------------------------------------
 1 file changed, 74 deletions(-)
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 76b874a0..86dba07 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -41,37 +41,11 @@
 #include "ion_priv.h"
 #include "compat_ion.h"
 
-bool ion_buffer_fault_user_mappings(struct ion_buffer *buffer)
-{
-	return (buffer->flags & ION_FLAG_CACHED) &&
-		!(buffer->flags & ION_FLAG_CACHED_NEEDS_SYNC);
-}
-
 bool ion_buffer_cached(struct ion_buffer *buffer)
 {
 	return !!(buffer->flags & ION_FLAG_CACHED);
 }
 
-static inline struct page *ion_buffer_page(struct page *page)
-{
-	return (struct page *)((unsigned long)page & ~(1UL));
-}
-
-static inline bool ion_buffer_page_is_dirty(struct page *page)
-{
-	return !!((unsigned long)page & 1UL);
-}
-
-static inline void ion_buffer_page_dirty(struct page **page)
-{
-	*page = (struct page *)((unsigned long)(*page) | 1UL);
-}
-
-static inline void ion_buffer_page_clean(struct page **page)
-{
-	*page = (struct page *)((unsigned long)(*page) & ~(1UL));
-}
-
 /* this function should only be called while dev->lock is held */
 static void ion_buffer_add(struct ion_device *dev,
 			   struct ion_buffer *buffer)
@@ -139,25 +113,6 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 	buffer->dev = dev;
 	buffer->size = len;
 
-	if (ion_buffer_fault_user_mappings(buffer)) {
-		int num_pages = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
-		struct scatterlist *sg;
-		int i, j, k = 0;
-
-		buffer->pages = vmalloc(sizeof(struct page *) * num_pages);
-		if (!buffer->pages) {
-			ret = -ENOMEM;
-			goto err1;
-		}
-
-		for_each_sg(table->sgl, sg, table->nents, i) {
-			struct page *page = sg_page(sg);
-
-			for (j = 0; j < sg->length / PAGE_SIZE; j++)
-				buffer->pages[k++] = page++;
-		}
-	}
-
 	buffer->dev = dev;
 	buffer->size = len;
 	INIT_LIST_HEAD(&buffer->vmas);
@@ -845,25 +800,6 @@ struct ion_vma_list {
 	struct vm_area_struct *vma;
 };
 
-static int ion_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
-{
-	struct ion_buffer *buffer = vma->vm_private_data;
-	unsigned long pfn;
-	int ret;
-
-	mutex_lock(&buffer->lock);
-	ion_buffer_page_dirty(buffer->pages + vmf->pgoff);
-	BUG_ON(!buffer->pages || !buffer->pages[vmf->pgoff]);
-
-	pfn = page_to_pfn(ion_buffer_page(buffer->pages[vmf->pgoff]));
-	ret = vm_insert_pfn(vma, (unsigned long)vmf->virtual_address, pfn);
-	mutex_unlock(&buffer->lock);
-	if (ret)
-		return VM_FAULT_ERROR;
-
-	return VM_FAULT_NOPAGE;
-}
-
 static void ion_vm_open(struct vm_area_struct *vma)
 {
 	struct ion_buffer *buffer = vma->vm_private_data;
@@ -900,7 +836,6 @@ static void ion_vm_close(struct vm_area_struct *vma)
 static const struct vm_operations_struct ion_vma_ops = {
 	.open = ion_vm_open,
 	.close = ion_vm_close,
-	.fault = ion_vm_fault,
 };
 
 static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
@@ -914,15 +849,6 @@ static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 		return -EINVAL;
 	}
 
-	if (ion_buffer_fault_user_mappings(buffer)) {
-		vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND |
-				 VM_DONTDUMP;
-		vma->vm_private_data = buffer;
-		vma->vm_ops = &ion_vma_ops;
-		ion_vm_open(vma);
-		return 0;
-	}
-
 	if (!(buffer->flags & ION_FLAG_CACHED))
 		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
[PATCH 4/4] staging: android: ion: Call dma_map_sg for syncing and mapping

Technically, calling dma_buf_map_attachment should return a buffer that
is properly dma_mapped. Add dma_map_sg calls to the map_dma_buf and
begin_cpu_access paths to ensure this happens. As a side effect, this
lets Ion buffers take advantage of the dma_buf sync ioctls.
Not-signed-off-by: Laura Abbott <labbott@redhat.com>
---
 drivers/staging/android/ion/ion.c | 61 ++++++++++++++++++++++++++++++---------
 1 file changed, 47 insertions(+), 14 deletions(-)
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 86dba07..5177d79 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -776,6 +776,11 @@ static struct sg_table *dup_sg_table(struct sg_table *table)
 	return new_table;
 }
 
+static void free_duped_table(struct sg_table *table)
+{
+	sg_free_table(table);
+	kfree(table);
+}
 
 static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 					enum dma_data_direction direction)
@@ -784,15 +789,29 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 	struct dma_buf *dmabuf = attachment->dmabuf;
 	struct ion_buffer *buffer = dmabuf->priv;
 	struct sg_table *table;
 
-	return dup_sg_table(buffer->sg_table);
+	/*
+	 * TODO: Need to sync wrt CPU or device completely owning?
+	 */
+	table = dup_sg_table(buffer->sg_table);
+	if (IS_ERR(table))
+		return table;
+
+	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
+			direction)) {
+		free_duped_table(table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	return table;
 }
 
 static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 			      struct sg_table *table,
 			      enum dma_data_direction direction)
 {
-	sg_free_table(table);
-	kfree(table);
+	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+	free_duped_table(table);
 }
 
 struct ion_vma_list {
@@ -889,16 +908,24 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 	struct ion_buffer *buffer = dmabuf->priv;
 	void *vaddr;
 
-	if (!buffer->heap->ops->map_kernel) {
-		pr_err("%s: map kernel is not implemented by this heap.\n",
-		       __func__);
-		return -ENODEV;
+	/*
+	 * TODO: Move this elsewhere because we don't always need a vaddr
+	 */
+	if (buffer->heap->ops->map_kernel) {
+		mutex_lock(&buffer->lock);
+		vaddr = ion_buffer_kmap_get(buffer);
+		mutex_unlock(&buffer->lock);
 	}
 
-	mutex_lock(&buffer->lock);
-	vaddr = ion_buffer_kmap_get(buffer);
-	mutex_unlock(&buffer->lock);
-	return PTR_ERR_OR_ZERO(vaddr);
+	/*
+	 * Close enough right now? Flag to skip sync?
+	 */
+	if (!dma_map_sg(buffer->dev->dev.this_device, buffer->sg_table->sgl,
+			buffer->sg_table->nents,
+			DMA_BIDIRECTIONAL))
+		return -ENOMEM;
+
+	return 0;
 }
 
 static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
@@ -906,9 +933,15 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 {
 	struct ion_buffer *buffer = dmabuf->priv;
 
-	mutex_lock(&buffer->lock);
-	ion_buffer_kmap_put(buffer);
-	mutex_unlock(&buffer->lock);
+	if (buffer->heap->ops->map_kernel) {
+		mutex_lock(&buffer->lock);
+		ion_buffer_kmap_put(buffer);
+		mutex_unlock(&buffer->lock);
+	}
+
+	dma_unmap_sg(buffer->dev->dev.this_device, buffer->sg_table->sgl,
+		     buffer->sg_table->nents,
+		     DMA_BIDIRECTIONAL);
 
 	return 0;
 }