On Fri, Jul 27, 2018 at 09:52:17PM +0300, Antti Seppälä wrote:
commit 56406e017a883b54b339207b230f85599f4d70ae upstream.
Commit 3bc04e28a030 ("usb: dwc2: host: Get aligned DMA in a more supported way") introduced a common way to align DMA allocations. The code in that commit aligns struct dma_aligned_buffer, but the actual DMA address pointed to by data[0] ends up at an offset from the allocated boundary, pushed there by the kmalloc_ptr and old_xfer_buffer pointers.
This goes against the recommendation in Documentation/DMA-API.txt, which states:
Therefore, it is recommended that driver writers who don't take special care to determine the cache line size at run time only map virtual regions that begin and end on page boundaries (which are guaranteed also to be cache line boundaries).
The effect of this is that architectures with non-coherent DMA caches may run into memory corruption or kernel crashes with "Unhandled kernel unaligned access" exceptions.
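To make the problematic layout concrete, here is a simplified illustrative sketch of the wrapper struct the commit message refers to (not the exact hunks from 3bc04e28a030):

/*
 * Illustrative sketch only, not the exact driver code. The allocation
 * itself starts at a kmalloc boundary (guaranteed to be suitably
 * aligned), but the DMA region data[] only begins after the two header
 * pointers, i.e. at an offset from that boundary, so on non-coherent
 * architectures it can share a cache line with metadata the CPU writes.
 */
struct dma_aligned_buffer {
	void *kmalloc_ptr;	/* original kmalloc() return value */
	void *old_xfer_buffer;	/* saved urb->transfer_buffer */
	u8 data[];		/* DMA happens here, offset from the boundary */
};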
Fix the alignment by positioning the DMA area at the front of the allocation and by using memory at the end of the area to store the original transfer_buffer pointer. This may have the added benefit of increased performance, as the DMA area is now fully aligned on all architectures.
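A minimal sketch of the allocation scheme described above, assuming it mirrors the fix (the function name here is hypothetical; direction-dependent data copying and the matching free path are omitted):

#include <linux/slab.h>
#include <linux/string.h>
#include <linux/usb.h>

/*
 * Illustrative sketch, not the literal patch: the DMA data now starts
 * right at the kmalloc boundary, which is suitably aligned on every
 * architecture, and the original transfer_buffer pointer is stored at
 * the end of the allocation so it can be restored on URB completion.
 */
static int dwc2_alloc_aligned_buffer_sketch(struct urb *urb, gfp_t mem_flags)
{
	void *kmalloc_ptr;
	size_t kmalloc_size;

	/* Data area plus room for the saved pointer at the end */
	kmalloc_size = urb->transfer_buffer_length +
		       sizeof(urb->transfer_buffer);

	kmalloc_ptr = kmalloc(kmalloc_size, mem_flags);
	if (!kmalloc_ptr)
		return -ENOMEM;

	/* Stash the original pointer after the data for later restore */
	memcpy(kmalloc_ptr + urb->transfer_buffer_length,
	       &urb->transfer_buffer, sizeof(urb->transfer_buffer));

	/* DMA now uses memory beginning at the allocation boundary */
	urb->transfer_buffer = kmalloc_ptr;

	return 0;
}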
Tested with Lantiq xRX200 (MIPS) and RPi Model B Rev 2 (ARM).
Fixes: 3bc04e28a030 ("usb: dwc2: host: Get aligned DMA in a more supported way")
Cc: <stable@vger.kernel.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
[ Antti: backported to 4.9: adjusted a whitespace difference ]
Signed-off-by: Antti Seppälä <a.seppala@gmail.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Notes: This is the same patch already applied upstream and queued for the 4.14 and 4.17 stable kernels, but with a minor whitespace edit so that it also applies to 4.9.
Now applied, thanks.
greg k-h