The main advantage of the IOMMU API is that it is able to separate multiple contexts, per user, so one application that is able to program a DMA engine cannot access memory that another application has mapped. Is that what you do with ion_iommu_heap_create(), i.e. that it gets called for each GPU context?
Yes. The concept for this part comes from the dma-mapping API code.
I don't understand. The dma-mapping code on top of IOMMU does *not* use one iommu context per GPU context, it uses one IOMMU context for everything together.
Let me add more details on how the IOMMU heap allocator is intended to be used. The IOMMU heap allocator would be a part of the ION memory manager. Clients from user space call the ION memory manager ioctl to allocate memory from the ION IOMMU heap. The IOMMU heap allocator would allocate non-contiguous pages based on the requested allocation size. The user-space clients can pass the ION memory handle to the kernel stack/drivers. Kernel drivers would get an sg list from the ION memory manager API based on the memory handle received. The sg list would be used to sync the memory and to map the physical pages into the desired IOMMU space using the dma-mapping APIs, before passing the memory to the H/W module.
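As a rough sketch of the kernel-driver side of that flow (only to illustrate where the mapping happens; the ion_sg_table() lookup, the header location and the function name here are assumptions, not a reference implementation):

#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/ion.h>		/* assumed location of the ION client API */

/*
 * Hypothetical driver helper: the driver, which knows its own struct
 * device, looks up the page list behind an ION handle and maps it into
 * its own IOMMU space through the dma-mapping API.
 */
static int example_map_ion_buffer(struct device *dev,
				  struct ion_client *client,
				  struct ion_handle *handle)
{
	struct sg_table *table;
	int nents;

	/* Get the sg list describing the (non-contiguous) pages. */
	table = ion_sg_table(client, handle);
	if (IS_ERR(table))
		return PTR_ERR(table);

	/* Map (and sync) the pages into this device's IOMMU space. */
	nents = dma_map_sg(dev, table->sgl, table->nents, DMA_TO_DEVICE);
	if (!nents)
		return -ENOMEM;

	/* Program the hardware with the DMA addresses in table->sgl here. */
	return 0;
}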
The allocation logic in the IOMMU heap allocator could use the dma-mapping API to allocate, if that were possible. As of now, it doesn't seem possible: the dma alloc APIs try to allocate kernel virtual space along with the physical memory at allocation time, and the user can allocate arbitrarily large amounts of memory, which would not fit into the kernel's virtual address space. So the IOMMU heap allocator should allocate the pages itself, and it should take care of the conflicting mapping issue.
The ION IOMMU heap allocator shouldn't be doing any IOMMU mapping using the direct IOMMU APIs. Moreover, the IOMMU heap allocator has no information about which device is going to access the mapping, so it can't set up a proper mapping.
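A minimal sketch of that "allocate pages, but don't map them" split (names are illustrative, not the actual heap code): the heap grabs individual, possibly discontiguous pages and only records them in an sg_table; the mapping is left to the client driver via the dma-mapping API as described above.

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static struct sg_table *example_alloc_pages(unsigned long size)
{
	unsigned int npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned int i, allocated = 0;
	struct sg_table *table;
	struct scatterlist *sg;

	table = kzalloc(sizeof(*table), GFP_KERNEL);
	if (!table)
		return ERR_PTR(-ENOMEM);

	if (sg_alloc_table(table, npages, GFP_KERNEL))
		goto free_table;

	for_each_sg(table->sgl, sg, table->nents, i) {
		/* One order-0 page per entry: no contiguity required. */
		struct page *page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM);

		if (!page)
			goto free_pages;
		/* Record the page only; no kernel or IOMMU mapping is created here. */
		sg_set_page(sg, page, PAGE_SIZE, 0);
		allocated++;
	}
	return table;

free_pages:
	for_each_sg(table->sgl, sg, allocated, i)
		__free_page(sg_page(sg));
	sg_free_table(table);
free_table:
	kfree(table);
	return ERR_PTR(-ENOMEM);
}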
Using vzalloc here probably goes a bit too far; it's fairly expensive compared with kzalloc. Not only do you have to maintain the page table entries for the new mapping, it also means you use up precious space in the vmalloc area and have to use small pages for accessing the data in it.
Should I use "kzalloc()" instead here?
Yes, that would be better, at least if you can prove that there is a reasonable upper bound on the size of the allocation. kmalloc/kzalloc usually work ok up to 16kb or at most 128kb. If you think the total allocation might be larger than that, you have to use vzalloc anyway but that should come with a long comment explaining what is going on.
kzalloc() doesn't guarantee any size bigger than PAGE_SIZE when system memory is fully fragmented. It should only be used for sizes less than or equal to PAGE_SIZE. Otherwise vzalloc() should be used to avoid failures due to fragmentation.
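A common compromise for this kind of size-dependent allocation is to try kzalloc() first and fall back to vzalloc() only when a physically contiguous allocation fails; recent kernels wrap exactly this pattern in kvzalloc()/kvfree(). A minimal sketch (function names here are illustrative):

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *example_alloc_meta(size_t size)
{
	void *p;

	/* Small requests: cheap, physically contiguous, no vmalloc space used. */
	if (size <= PAGE_SIZE)
		return kzalloc(size, GFP_KERNEL);

	/* Larger requests: try kzalloc without warning or retrying hard... */
	p = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
	if (!p)
		/* ...and fall back to vzalloc when memory is fragmented. */
		p = vzalloc(size);
	return p;
}

static void example_free_meta(void *p)
{
	/* Newer kernels have kvfree(); otherwise check which allocator was used. */
	if (is_vmalloc_addr(p))
		vfree(p);
	else
		kfree(p);
}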
-KR
--nvpublic