On Wed, Feb 06, 2019 at 11:31:04PM -0800, Christoph Hellwig wrote:
> The CPU may only access DMA mapped memory if ownership has been transferred back to the CPU using dma_sync_{single,sg}_for_cpu, and then before the device can access it again ownership needs to be transferred back to the device using dma_sync_{single,sg}_for_device.
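Right, and for reference that hand-off looks like this for the single-buffer variants (a minimal sketch, not code from the patch; my_dev, buf and len are placeholder names, calls come from <linux/dma-mapping.h>):

dma_addr_t dma;

dma = dma_map_single(my_dev, buf, len, DMA_FROM_DEVICE);
if (dma_mapping_error(my_dev, dma))
        return -ENOMEM;

/* the device owns the buffer: it may DMA into it now */

dma_sync_single_for_cpu(my_dev, dma, len, DMA_FROM_DEVICE);
/* the CPU owns the buffer: safe for the CPU to access buf */

dma_sync_single_for_device(my_dev, dma, len, DMA_FROM_DEVICE);
/* the device owns the buffer again */

dma_unmap_single(my_dev, dma, len, DMA_FROM_DEVICE);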
I've run some tests, and this patch does indeed fix the crash in dma_sync_sg_for_cpu, where it tried to use the zero dma_address from the sg list.
Tested-by: Ørjan Eide <orjan.eide@arm.com>
I tested this on an older kernel, v4.14, since in v4.19 the dma-mapping code changed to ignore the dma_address and instead use sg_phys() to get a valid address from the page, which is always valid in ion's sg lists. While newer kernels wouldn't crash here, it's still good to avoid the unnecessary work when no cache maintenance operation (CMO) is needed.
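Roughly the difference, as I understand it (an illustrative sketch only, not the actual arch code; sgl and nents are placeholders, <linux/scatterlist.h> assumed):

struct scatterlist *sg;
int i;

for_each_sg(sgl, sg, nents, i) {
        /* pre-v4.19 paths trusted the stored DMA address, which ion
         * leaves as 0 in the attachment's copied sg list until the
         * device maps the buffer */
        dma_addr_t addr = sg_dma_address(sg);

        /* v4.19+ derives the address from the page itself, which is
         * always valid in ion's sg lists */
        phys_addr_t phys = sg_phys(sg);

        /* ... cache maintenance then works on phys, not addr ... */
}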
> Can you also test it with CONFIG_DMA_API_DEBUG enabled, as that should catch all the usual mistakes in DMA API usage, including the one found?
I checked again with CONFIG_DMA_API_DEBUG=y, both with and without this patch, and I didn't get any dma-mapping errors.
The issue I hit without this patch occurs when a CPU access starts after a device has attached (which causes ion to create a copy of the buffer's sg list with dma_address zeroed) but before the device has mapped the buffer.
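For completeness, the failing order of operations was roughly this (a reconstruction with placeholder names dmabuf and dev, using <linux/dma-buf.h>):

struct dma_buf_attachment *att;
struct sg_table *sgt;

/* ion duplicates the buffer's sg list for this attachment, with
 * dma_address zeroed */
att = dma_buf_attach(dmabuf, dev);

/* CPU access starts before the device has mapped the buffer: ion's
 * begin_cpu_access walks the attachment's sg copy and feeds the zero
 * dma_address to dma_sync_sg_for_cpu, which crashed without this
 * patch */
dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);

/* only later does the device mapping fill in valid DMA addresses */
sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);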