On Wed, 23 Nov 2022 at 09:07, Thomas Zimmermann tzimmermann@suse.de wrote:
Hi
Am 22.11.22 um 18:08 schrieb Daniel Vetter:
tldr; DMA buffers aren't normal memory; expecting that you can use them like normal memory (that get_user_pages works, or that they're accounted like any other normal memory) cannot be guaranteed.
Since some userspace only runs on integrated devices, where all buffers are actually resident system memory, there's a huge temptation to assume that a struct page is always present and usable like for any other pagecache-backed mmap. This has the potential to result in a uapi nightmare.
To stop this gap, require that DMA buffer mmaps are VM_PFNMAP, which blocks get_user_pages and all the other struct-page-based infrastructure for everyone. In spirit this is the uapi counterpart to the kernel-internal CONFIG_DMABUF_DEBUG.
Motivated by a recent patch which wanted to switch the system dma-buf heap to vm_insert_page instead of vm_insert_pfn.
v2:
Jason brought up that we also want to guarantee that all ptes have the pte_special flag set, to catch fast get_user_pages (on architectures that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
From auditing the various functions to insert pfn pte entries (vm_insert_pfn_prot, remap_pfn_range and all its callers like dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so this should be the correct flag to check for.
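For illustration, a minimal exporter ->mmap callback built on remap_pfn_range() might look like the sketch below (my_dmabuf_mmap, struct my_buffer and its paddr field are made up for the example; this is not the system heap's actual code):

```c
#include <linux/dma-buf.h>
#include <linux/mm.h>
#include <linux/pfn.h>

/* Hypothetical exporter state: one physically contiguous buffer. */
struct my_buffer {
	phys_addr_t paddr;
};

/*
 * Sketch of an exporter ->mmap op. remap_pfn_range() itself sets
 * VM_PFNMAP (plus VM_IO, VM_DONTEXPAND, VM_DONTDUMP) on the vma, so
 * an exporter built on it satisfies the new WARN_ON_ONCE check
 * without any extra work.
 */
static int my_dmabuf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	struct my_buffer *buf = dmabuf->priv;

	return remap_pfn_range(vma, vma->vm_start, PHYS_PFN(buf->paddr),
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}
```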
v3: Change to WARN_ON_ONCE (Thomas Zimmermann)
References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-WbW...
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
--
Ok I entirely forgot about this patch but stumbled over it and checked what's up with it now. I think it's ready for merging:
- shmem helper patches to fix up vgem landed
- ttm has been fixed since a while
- I don't think we've had any other open issues
Time to lock down this uapi contract for real?
-Daniel
 drivers/dma-buf/dma-buf.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index b6c36914e7c6..88718665c3c3 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -150,6 +150,8 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 	ret = dmabuf->ops->mmap(dmabuf, vma);
 	dma_resv_unlock(dmabuf->resv);
+	WARN_ON_ONCE(!(vma->vm_flags & VM_PFNMAP));
Well, I already a-b'ed this, but does it work with the DMA helpers? I'm asking because of [1].
Best regards Thomas
[1] https://elixir.bootlin.com/linux/v6.1-rc6/source/drivers/gpu/drm/drm_gem_dma...
This one is entertaining, but also doesn't matter, because the remap_pfn_range that the various dma_mmap functions boil down to sets VM_PFNMAP and a pile of other flags. See
https://elixir.bootlin.com/linux/v6.1-rc6/source/mm/memory.c#L2518
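For reference, the flag setup happens inside remap_pfn_range() itself (abbreviated here from mm/memory.c), so whatever a helper clears beforehand gets reapplied when the ptes are installed:

```c
/* Abbreviated from remap_pfn_range() in mm/memory.c: the special
 * mapping flags, including VM_PFNMAP, are set unconditionally on the
 * vma before the pfn ptes are written. */
vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
```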
I have no idea why Laurent cleared this flag here just so it gets reset again a bit later when he added that code. Laurent? -Daniel
 	return ret;
 }
@@ -1495,6 +1497,8 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 	ret = dmabuf->ops->mmap(dmabuf, vma);
 	dma_resv_unlock(dmabuf->resv);
+	WARN_ON_ONCE(!(vma->vm_flags & VM_PFNMAP));
 	return ret;
 }
 EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF);
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev