On Fri, May 16, 2014 at 06:08:45PM +0100, Jon Medhurst (Tixy) wrote:
On Fri, 2014-05-16 at 13:55 +0100, Catalin Marinas wrote: [...]
It could if arm64 restricted DMA addresses to 32 bits, but it doesn't, and on my platform I end up with USB DMA buffers allocated above 4GB.
dma_alloc_coherent() on arm64 should return 32-bit addresses if the coherent_dma_mask is set to 32-bit.
Not if you have CONFIG_DMA_CMA. Unless I have misread the code, enabling CMA means memory comes from a common pool carved out at boot, with no way for drivers to specify their restrictions [1]. That's what I've spent most of the week trying to work around in a clean way, and I have finally given up.
The easiest "hack" would be to pass a limit to dma_contiguous_reserve() in arm64_memblock_init(), something like this:
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 51d5352e6ad5..558434c69612 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -162,7 +162,7 @@ void __init arm64_memblock_init(void)
 	}
 
 	early_init_fdt_scan_reserved_mem();
-	dma_contiguous_reserve(0);
+	dma_contiguous_reserve(dma_to_phys(NULL, DMA_BIT_MASK(32)) + 1);
 
 	memblock_allow_resize();
 	memblock_dump_all();
probably with a check for IS_ENABLED(CONFIG_ZONE_DMA) (as we already do for the swiotlb initialisation).
At some point, once we have some system topology description, we could decide whether to limit the above based on the devices' DMA coherent masks.