On Mon, Apr 16, 2018 at 2:39 PM, Christoph Hellwig <hch@infradead.org> wrote:
On Tue, Apr 03, 2018 at 08:08:32PM +0200, Daniel Vetter wrote:
I did not mean you should dma_map_sg/page. I just meant that using dma_map_resource to fill only the dma address part of the sg table seems perfectly sufficient.
But that is not how the interface works, especially with respect to sg_dma_len.
Assuming you get an sg table that's been mapped by calling dma_map_sg was always a bit of a case of bending the abstraction to avoid typing code. The only thing an importer ever should have done is look at the dma addresses in that sg table, nothing else.
The scatterlist is not a very good abstraction unfortunately, but it is spread all over the kernel. And we do expect that anyone who gets passed a scatterlist can use sg_page() or sg_virt() (which calls sg_page()) on it. Your changes would break that, and will cause major trouble because of that.
If you want to expose p2p memory returned from dma_map_resource in dmabuf, please do not use scatterlists for this, but add a new interface that explicitly passes a virtual address, a dma address and a length, and make it very clear that virt_to_page will not work on the virtual address.
We've broken that assumption in i915 years ago. Gpu memory that is not backed by struct page is very real.
Of course we'll never feed such a strange sg table to a driver which doesn't understand it, but allowing sg_page == NULL works perfectly fine. At least for gpu drivers.
If that's not acceptable then I guess we could go over the entire tree and frob all the gpu-related code to switch over to a new struct sg_table_might_not_be_struct_page_backed, including all the other functions we've added over the past few years to iterate over sg tables. But that seems slightly silly, given that sg tables seem to do exactly what we need. -Daniel