On Friday 16 May 2014 13:55:01 Catalin Marinas wrote:
On Thu, May 15, 2014 at 05:53:53PM +0100, Liviu Dudau wrote:
On Thu, May 15, 2014 at 04:36:25PM +0100, Alan Stern wrote:
On Thu, 15 May 2014, Liviu Dudau wrote:
On Thu, May 15, 2014 at 03:11:48PM +0100, Alan Stern wrote:
On Wed, 14 May 2014, Mark Brown wrote:
The arm64 architecture handles 64-bit DMA correctly and can enable support for 64-bit EHCI host controllers.
Have you folks tested this with all sorts of host controllers? I have no way to verify that it works, and the last I heard, many (or even most) controllers don't work right with 64-bit DMA.
I have tested it with a host controller that is capable of 64-bit DMA and without this change it doesn't work.
What do you mean it doesn't work? Can't the host controller use 32-bit DMA?
It could if arm64 restricted DMA addresses to 32 bits, but it doesn't, and on my platform I end up with USB DMA buffers allocated above the 4GB boundary.
dma_alloc_coherent() on arm64 should return 32-bit addresses if the coherent_dma_mask is set to 32-bit. Which kernel version is this?
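The addressability constraint behind this can be illustrated in plain C. This is a hypothetical userspace sketch of the mask arithmetic the DMA API performs (the names are mine, not the kernel's): DMA_BIT_MASK(32)-style masks cover 0..0xFFFFFFFF, so a buffer allocated above 4GB fails the check for a controller whose coherent_dma_mask is 32-bit.

```c
#include <stdint.h>

/* Hypothetical equivalent of the kernel's DMA_BIT_MASK() macro. */
static inline uint64_t dma_bit_mask(unsigned int bits)
{
    return bits >= 64 ? ~0ULL : (1ULL << bits) - 1;
}

/* The whole buffer [bus_addr, bus_addr + size) must fit under the mask. */
static inline int addr_fits_mask(uint64_t bus_addr, uint64_t size,
                                 uint64_t mask)
{
    return size != 0 && bus_addr <= mask && bus_addr + size - 1 <= mask;
}
```

A buffer ending at 0xFFFFFFFF passes a 32-bit mask; one starting at 0x100000000 (just past 4GB) does not, which is exactly the situation described above.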
The more important question is what happens to high buffers allocated elsewhere that get passed into dma_map_sg by a device driver. Depending on the DT properties of the device and its parents, this needs to do one of three things:
a) translate the 64-bit virtual address into a 64-bit bus address,
b) create an IOMMU entry for the 64-bit address and pass the 32-bit IOMMU address to the driver, or
c) use the swiotlb code to create a bounce buffer at a 32-bit DMA address and copy the data around.
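Case c) above is the easiest to sketch in isolation. The following is a toy, userspace-only model of a swiotlb-style bounce buffer; every name here is hypothetical and none of it is the kernel's actual swiotlb API. A static "low pool" stands in for memory below 4GB that the device can address, and high buffers are copied through it on map/unmap.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define LOW_POOL_SIZE 4096
/* Stands in for a reserved region below the device's DMA limit. */
static uint8_t low_pool[LOW_POOL_SIZE];

/* Map for DMA to the device: copy the (possibly high) source buffer
 * into the low pool and hand the device the pool's address. */
static void *bounce_map_for_write(const void *high_buf, size_t len)
{
    if (len > LOW_POOL_SIZE)
        return NULL;
    memcpy(low_pool, high_buf, len);
    return low_pool;
}

/* Unmap after DMA from the device: copy the pool's contents back
 * into the caller's buffer, wherever it lives. */
static void bounce_unmap_for_read(void *high_buf, size_t len)
{
    if (len <= LOW_POOL_SIZE)
        memcpy(high_buf, low_pool, len);
}
```

The cost that makes this the last resort is visible in the sketch: every transfer pays an extra memcpy in each direction, which options a) and b) avoid.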
It's definitely wrong to just hardcode a DMA mask in the driver because that code doesn't know which of the three cases is being used. Moreover, you can't do it using an #ifdef CONFIG_ARM64, because it's completely independent of the architecture, and we need to do the exact same logic on ARM32 and any other architecture.
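For reference, the architecture-independent pattern a driver would normally use instead of an #ifdef is to ask the DMA API for the wider mask and fall back, letting the platform code decide which of the three mechanisms backs it. A sketch of that convention (kernel-only code, not buildable standalone):

```c
/* Request 64-bit DMA; if the platform can't provide it by any of the
 * three mechanisms above, fall back to a 32-bit mask. */
if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
	dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
```

This keeps the policy in the DMA layer: the same driver code works whether the platform translates addresses directly, goes through an IOMMU, or bounces through swiotlb.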
Arnd