On Thu, Sep 17, 2020 at 02:24:29PM +0200, Christian König wrote:
Am 17.09.20 um 14:18 schrieb Jason Gunthorpe:
On Thu, Sep 17, 2020 at 02:03:48PM +0200, Christian König wrote:
Am 17.09.20 um 13:31 schrieb Jason Gunthorpe:
On Thu, Sep 17, 2020 at 10:09:12AM +0200, Daniel Vetter wrote:
Yeah, but it doesn't work when forwarding from the drm chardev to the dma-buf on the importer side, since you'd need a ton of different address spaces. And you still rely on the core code picking up your pgoff mangling, which feels about as risky to me as the vma file pointer wrangling: if it's not consistently applied, the reverse map is toast and unmap_mapping_range() doesn't work correctly for our needs.
I would think the pgoff has to be translated at the same time vma->vm_file is changed?
The owner of the dma_buf should have one virtual address space and FD, all its dma bufs should be linked to it, and all pgoffs translated to that space.
Yeah, that is exactly like amdgpu is doing it.
Going to document that somehow when I'm done with TTM cleanups.
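For illustration, a rough sketch of what that forwarding could look like on the importer side; my_bo, my_bo_lookup() and exporter_pgoff are made-up names for this sketch, while dma_buf_mmap() is the existing helper that re-points vma->vm_file and vma->vm_pgoff before calling into the exporter:

#include <linux/dma-buf.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* Illustrative importer-side buffer object, not an existing kernel struct */
struct my_bo {
	struct dma_buf *dmabuf;
	unsigned long exporter_pgoff;	/* buffer's base page offset in the exporter's file */
};

/* Illustrative lookup; assume the importer chardev keeps its BO in private_data */
static struct my_bo *my_bo_lookup(struct file *filp)
{
	return filp->private_data;
}

static int my_importer_mmap(struct file *filp, struct vm_area_struct *vma)
{
	struct my_bo *bo = my_bo_lookup(filp);

	if (!bo)
		return -EINVAL;

	/*
	 * dma_buf_mmap() re-points vma->vm_file at the dma-buf's file and
	 * rewrites vma->vm_pgoff to the offset passed in, so faults and
	 * unmap_mapping_range() both resolve against the exporter's single
	 * address space instead of the importer's chardev.
	 */
	return dma_buf_mmap(bo->dmabuf, vma, bo->exporter_pgoff);
}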
BTW, while people are looking at this, is there a way to go from a VMA to a dma_buf that owns it?
Only a driver specific one.
Sounds OK
For TTM drivers vma->vm_private_data points to the buffer object. Not sure about the drivers using GEM only.
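As a sketch of that driver-specific path: TTM's fault handling keeps the buffer object in vma->vm_private_data, and for drivers whose TTM BOs embed a GEM object (not all of them do) that gets you to the dma_buf if the buffer was imported or exported. ttm_vma_to_dma_buf() is a made-up helper here, and a real caller would first have to verify the VMA actually belongs to such a driver, e.g. by checking vma->vm_ops:

#include <drm/drm_gem.h>
#include <drm/ttm/ttm_bo_api.h>
#include <linux/dma-buf.h>
#include <linux/mm.h>

/*
 * Illustrative only: valid solely for VMAs set up by a TTM driver whose
 * buffer objects embed a struct drm_gem_object.
 */
static struct dma_buf *ttm_vma_to_dma_buf(struct vm_area_struct *vma)
{
	struct ttm_buffer_object *bo = vma->vm_private_data;
	struct drm_gem_object *gobj = &bo->base;

	if (gobj->import_attach)
		return gobj->import_attach->dmabuf;	/* imported buffer */
	return gobj->dma_buf;	/* non-NULL only if the BO was exported */
}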
Why are drivers in control of the vma? I would think dma_buf should be the vma owner. IIRC, module lifetime correctness essentially hinges on the module owner of the struct file.
Why are you asking?
I'm thinking about using find_vma() on something that is not get_user_pages()'able to get to the underlying object, in this case a dma-buf.
So, user VA -> find_vma -> dma_buf object -> dma_buf operations on the memory it represents
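Roughly, and only assuming vma->vm_file really were the dma-buf's own file, that could look like the sketch below. dma_buf_from_user_va() is a made-up name, and is_dma_buf_file() is currently static to drivers/dma-buf/dma-buf.c, so it (or some new helper) would need to be exported first:

#include <linux/dma-buf.h>
#include <linux/mm.h>

static struct dma_buf *dma_buf_from_user_va(struct mm_struct *mm,
					    unsigned long addr)
{
	struct vm_area_struct *vma;
	struct dma_buf *dmabuf = NULL;

	mmap_read_lock(mm);
	vma = find_vma(mm, addr);
	if (vma && vma->vm_start <= addr && vma->vm_file &&
	    is_dma_buf_file(vma->vm_file)) {	/* not exported today */
		dmabuf = vma->vm_file->private_data;
		get_dma_buf(dmabuf);	/* hold a reference before using it */
	}
	mmap_read_unlock(mm);

	return dmabuf;
}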
Jason