Hi guys,
we are currently working on Freesync and direct scan out from system
memory on AMD APUs in A+A laptops.
One problem we stumbled over is that our display hardware needs to scan
out from uncached system memory and we currently don't have a way to
communicate that through DMA-buf.
For our specific use case at hand we are going to implement something
driver specific, but the question is should we have something more
generic for this?
After all the system memory access pattern is a PCIe extension and as
such something generic.
Regards,
Christian.
On Wed, Sep 30, 2020 at 11:39:06AM +0200, Michel Dänzer wrote:
> On 2020-03-17 10:21 p.m., Jason Ekstrand wrote:
> > Explicit synchronization is the future. At least, that seems to be what
> > most userspace APIs are agreeing on at this point. However, most of our
> > Linux APIs (both userspace and kernel UAPI) are currently built around
> > implicit synchronization with dma-buf. While work is ongoing to change
> > many of the userspace APIs and protocols to an explicit synchronization
> > model, switching over piecemeal is difficult due to the number of
> > potential components involved. On the kernel side, many drivers use
> > dma-buf including GPU (3D/compute), display, v4l, and others. In
> > userspace, we have X11, several Wayland compositors, 3D drivers, compute
> > drivers (OpenCL etc.), media encode/decode, and the list goes on.
> >
> > This patch provides a path forward by allowing userspace to manually
> > manage the fences attached to a dma-buf. Alternatively, one can think
> > of this as making dma-buf's implicit synchronization simply a carrier
> > for an explicit fence. This is accomplished by adding two IOCTLs to
> > dma-buf for importing and exporting a sync file to/from the dma-buf.
> > This way a userspace component which uses explicit synchronization,
> > such as a Vulkan driver, can manually set the write fence on a buffer
> > before handing it off to an implicitly synchronized component such as a
> > Wayland compositor or video encoder. In this way, each of the different
> > components can be upgraded to an explicit synchronization model one at a
> > time as long as the userspace pieces connecting them are aware of it and
> > import/export fences at the right times.
> >
> > There is a potential race condition with this API if userspace is not
> > careful. A typical use case for implicit synchronization is to wait for
> > the dma-buf to be ready, use it, and then signal it for some other
> > component. Because a sync_file cannot be created until it is guaranteed
> > to complete in finite time, userspace can only signal the dma-buf after
> > it has already submitted the work which uses it to the kernel and has
> > received a sync_file back. There is no way to atomically submit a
> > wait-use-signal operation. This is not, however, really a problem with
> > this API so much as it is a problem with explicit synchronization
> > itself. The way this is typically handled is to have very explicit
> > ownership transfer points in the API or protocol which ensure that only
> > one component is using it at any given time. Both X11 (via the PRESENT
> > extension) and Wayland provide such ownership transfer points via
> > explicit present and idle messages.
> >
> > The decision was intentionally made in this patch to make the import and
> > export operations IOCTLs on the dma-buf itself rather than as a DRM
> > IOCTL. This makes the import/export operation universal across all
> > components which use dma-buf including GPU, display, v4l, and others.
> > It also means that a userspace component can do the import/export
> > without access to the DRM fd which may be tricky to get in cases where
> > the client communicates with DRM via a userspace API such as OpenGL or
> > Vulkan. At a future date we may choose to add direct import/export APIs
> > to components such as drm_syncobj to avoid allocating a file descriptor
> > and going through two ioctls. However, that seems to be something of a
> > micro-optimization as import/export operations are likely to happen at a
> > rate of a few per frame of rendered or decoded video.
> >
> > v2 (Jason Ekstrand):
> > - Use a wrapper dma_fence_array of all fences including the new one
> > when importing an exclusive fence.
> >
> > v3 (Jason Ekstrand):
> > - Lock around setting shared fences as well as exclusive
> > - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
> > - Initialize ret to 0 in dma_buf_wait_sync_file
> >
> > v4 (Jason Ekstrand):
> > - Use the new dma_resv_get_singleton helper
> >
> > v5 (Jason Ekstrand):
> > - Rename the IOCTLs to import/export rather than wait/signal
> > - Drop the WRITE flag and always get/set the exclusive fence
> >
> > Signed-off-by: Jason Ekstrand <jason(a)jlekstrand.net>
>
> What's the status of this? DMA_BUF_IOCTL_EXPORT_SYNC_FILE would be useful
> for Wayland compositors to wait for client buffers to become ready without
> being prone to getting delayed by later HW access to them, so it would be
> nice to merge that at least (if DMA_BUF_IOCTL_IMPORT_SYNC_FILE is still
> controversial).
I think the missing bits are just the usual stuff:
- igt testcases
- userspace using the new ioctls
- review of the entire pile
I don't think there are any fundamental objections aside from "no one ever
pushed this over the finish line".
Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
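For reference, a rough sketch of the "userspace using the new ioctls" part: a
compositor exporting a sync_file from a client's dma-buf so it can wait
explicitly for the buffer to become ready. The struct layout, ioctl name and
flag follow later revisions of this series rather than the v5 quoted above,
so treat them as illustrative only.

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/*
 * Sketch only: export the "readiness" fence of a dma-buf as a sync_file.
 * DMA_BUF_SYNC_READ asks for the fences a reader would have to wait on,
 * i.e. the pending writes.  Names assume the uapi as it was eventually
 * merged and may not match v5 of the series exactly.
 */
static int export_readiness_fence(int dmabuf_fd)
{
	struct dma_buf_export_sync_file args = {
		.flags = DMA_BUF_SYNC_READ,
		.fd = -1,
	};

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args) < 0) {
		perror("DMA_BUF_IOCTL_EXPORT_SYNC_FILE");
		return -1;
	}

	/*
	 * args.fd is now a sync_file: poll() it, or hand it to the GPU
	 * driver as an explicit wait fence instead of relying on the
	 * implicit fences attached to the dma-buf.
	 */
	return args.fd;
}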
Recently there was a fairly long thread about recoverable hardware page
faults, how they can deadlock, and what to do about that.
While the discussion is still fresh I figured it's a good time to try and
document the conclusions a bit. This documentation section explains
what the potential problem is, and the remedies we've discussed,
roughly ordered from best to worst.
v2: Linus -> Linux typo (Dave)
v3:
- Make it clear drivers only need to implement one option (Christian)
- Make it clearer that implicit sync is out the window with exclusive
fences (Christian)
- Add the fairly theoretical option of segmenting the memory (either
statically or through dynamic checks at runtime for which piece of
memory is managed how) and explain why it's not a great idea (Felix)
References: https://lore.kernel.org/dri-devel/20210107030127.20393-1-Felix.Kuehling@amd…
Cc: Dave Airlie <airlied(a)gmail.com>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Thomas Hellström <thomas.hellstrom(a)intel.com>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: Jerome Glisse <jglisse(a)redhat.com>
Cc: Felix Kuehling <felix.kuehling(a)amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
Documentation/driver-api/dma-buf.rst | 76 ++++++++++++++++++++++++++++
1 file changed, 76 insertions(+)
diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index a2133d69872c..7f37ec30d9fd 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -257,3 +257,79 @@ fences in the kernel. This means:
userspace is allowed to use userspace fencing or long running compute
workloads. This also means no implicit fencing for shared buffers in these
cases.
+
+Recoverable Hardware Page Faults Implications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Modern hardware supports recoverable page faults, which has a lot of
+implications for DMA fences.
+
+First, a pending page fault obviously holds up the work that's running on the
+accelerator and a memory allocation is usually required to resolve the fault.
+But memory allocations are not allowed to gate completion of DMA fences, which
+means any workload using recoverable page faults cannot use DMA fences for
+synchronization. Synchronization fences controlled by userspace must be used
+instead.
+
+On GPUs this poses a problem, because current desktop compositor protocols on
+Linux rely on DMA fences, which means without an entirely new userspace stack
+built on top of userspace fences, they cannot benefit from recoverable page
+faults. Specifically this means implicit synchronization will not be possible.
+The exception is when page faults are only used as migration hints and never to
+on-demand fill a memory request. For now this means recoverable page
+faults on GPUs are limited to pure compute workloads.
+
+Furthermore GPUs usually have shared resources between the 3D rendering and
+compute side, like compute units or command submission engines. If both a 3D
+job with a DMA fence and a compute workload using recoverable page faults are
+pending they could deadlock:
+
+- The 3D workload might need to wait for the compute job to finish and release
+ hardware resources first.
+
+- The compute workload might be stuck in a page fault, because the memory
+ allocation is waiting for the DMA fence of the 3D workload to complete.
+
+There are a few options to prevent this problem, at least one of which drivers
+need to ensure:
+
+- Compute workloads can always be preempted, even when a page fault is pending
+ and not yet repaired. Not all hardware supports this.
+
+- DMA fence workloads and workloads which need page fault handling have
+ independent hardware resources to guarantee forward progress. This could be
+  achieved e.g. through dedicated engines and minimal compute unit
+ reservations for DMA fence workloads.
+
+- The reservation approach could be further refined by only reserving the
+ hardware resources for DMA fence workloads when they are in-flight. This must
+  cover the time from when the DMA fence is visible to other threads up to the
+  moment when the fence is completed through dma_fence_signal().
+
+- As a last resort, if the hardware provides no useful reservation mechanics,
+ all workloads must be flushed from the GPU when switching between jobs
+ requiring DMA fences or jobs requiring page fault handling: This means all DMA
+ fences must complete before a compute job with page fault handling can be
+ inserted into the scheduler queue. And vice versa, before a DMA fence can be
+ made visible anywhere in the system, all compute workloads must be preempted
+ to guarantee all pending GPU page faults are flushed.
+
+- A fairly theoretical option would be to untangle these dependencies when
+ allocating memory to repair hardware page faults, either through separate
+ memory blocks or runtime tracking of the full dependency graph of all DMA
+  fences. This would have a very wide impact on the kernel, since resolving the
+  page fault on the CPU side can itself involve a page fault. It is much more
+  feasible and robust to limit the impact of handling hardware page faults to
+  the specific driver.
+
+Note that workloads that run on independent hardware like copy engines or other
+GPUs do not have any impact. This allows us to keep using DMA fences internally
+in the kernel even for resolving hardware page faults, e.g. by using copy
+engines to clear or copy memory needed to resolve the page fault.
+
+In some ways this page fault problem is a special case of the `Infinite DMA
+Fences` discussions: Infinite fences from compute workloads are allowed to
+depend on DMA fences, but not the other way around. And not even the page fault
+problem is new, because some other CPU thread in userspace might
+hit a page fault which holds up a userspace fence - supporting page faults on
+GPUs doesn't add anything fundamentally new.
--
2.30.0
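To make the last-resort option documented above a bit more concrete, here is a
minimal sketch of the ordering a driver scheduler would have to enforce. All
of the example_* names and the scheduler structure are made up; only the
wait/preempt ordering is the point.

/* Sketch of the "flush everything" fallback for hardware without useful
 * reservation mechanics.  All example_* names are hypothetical. */
static int example_queue_pagefault_job(struct example_sched *sched,
				       struct example_job *job)
{
	int ret;

	/* All already-published DMA fences must have signalled before a
	 * page-fault-capable job may enter the scheduler queue. */
	ret = dma_fence_wait(sched->last_dma_fence, true);
	if (ret)
		return ret;

	return example_push_job(sched, job);
}

static struct dma_fence *example_queue_dma_fence_job(struct example_sched *sched,
						     struct example_job *job)
{
	/* And vice versa: before the new job's DMA fence becomes visible
	 * anywhere, all page-fault workloads must be preempted so that no
	 * pending fault can end up waiting on this fence via the allocator. */
	example_preempt_all_pagefault_jobs(sched);

	return example_push_job_get_fence(sched, job);
}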
On Fri, Feb 26, 2021 at 10:41 AM Thomas Hellström (Intel)
<thomas_os(a)shipmail.org> wrote:
>
>
> On 2/25/21 4:49 PM, Daniel Vetter wrote:
> > On Thu, Feb 25, 2021 at 11:44 AM Daniel Vetter <daniel(a)ffwll.ch> wrote:
> >> On Thu, Feb 25, 2021 at 11:28:31AM +0100, Christian König wrote:
> >>> Am 24.02.21 um 10:31 schrieb Daniel Vetter:
> >>>> On Wed, Feb 24, 2021 at 10:16 AM Thomas Hellström (Intel)
> >>>> <thomas_os(a)shipmail.org> wrote:
> >>>>> On 2/24/21 9:45 AM, Daniel Vetter wrote:
> >>>>>> On Wed, Feb 24, 2021 at 8:46 AM Thomas Hellström (Intel)
> >>>>>> <thomas_os(a)shipmail.org> wrote:
> >>>>>>> On 2/23/21 11:59 AM, Daniel Vetter wrote:
> >>>>>>>> tldr; DMA buffers aren't normal memory, expecting that you can use
> >>>>>>>> them like that (like calling get_user_pages works, or that they're
> >>>>>>>> accounting like any other normal memory) cannot be guaranteed.
> >>>>>>>>
> >>>>>>>> Since some userspace only runs on integrated devices, where all
> >>>>>>>> buffers are actually all resident system memory, there's a huge
> >>>>>>>> temptation to assume that a struct page is always present and useable
> >>>>>>>> like for any more pagecache backed mmap. This has the potential to
> >>>>>>>> result in a uapi nightmare.
> >>>>>>>>
> >>>>>>>> To stop this gap require that DMA buffer mmaps are VM_PFNMAP, which
> >>>>>>>> blocks get_user_pages and all the other struct page based
> >>>>>>>> infrastructure for everyone. In spirit this is the uapi counterpart to
> >>>>>>>> the kernel-internal CONFIG_DMABUF_DEBUG.
> >>>>>>>>
> >>>>>>>> Motivated by a recent patch which wanted to switch the system dma-buf
> >>>>>>>> heap to vm_insert_page instead of vm_insert_pfn.
> >>>>>>>>
> >>>>>>>> v2:
> >>>>>>>>
> >>>>>>>> Jason brought up that we also want to guarantee that all ptes have the
> >>>>>>>> pte_special flag set, to catch fast get_user_pages (on architectures
> >>>>>>>> that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
> >>>>>>>> still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
> >>>>>>>>
> >>>>>>>> From auditing the various functions to insert pfn pte entries
> >>>>>>>> (vm_insert_pfn_prot, remap_pfn_range and all its callers like
> >>>>>>>> dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so
> >>>>>>>> this should be the correct flag to check for.
> >>>>>>>>
> >>>>>>> If we require VM_PFNMAP, for ordinary page mappings, we also need to
> >>>>>>> disallow COW mappings, since it will not work on architectures that
> >>>>>>> don't have CONFIG_ARCH_HAS_PTE_SPECIAL, (see the docs for vm_normal_page()).
> >>>>>> Hm I figured everyone just uses MAP_SHARED for buffer objects since
> >>>>>> COW really makes absolutely no sense. How would we enforce this?
> >>>>> Perhaps returning -EINVAL on is_cow_mapping() at mmap time. Either that
> >>>>> or allowing MIXEDMAP.
> >>>>>
> >>>>>>> Also worth noting is the comment in ttm_bo_mmap_vma_setup() with
> >>>>>>> possible performance implications with x86 + PAT + VM_PFNMAP + normal
> >>>>>>> pages. That's a very old comment, though, and might not be valid anymore.
> >>>>>> I think that's why ttm has a page cache for these, because it indeed
> >>>>>> sucks. The PAT changes on pages are rather expensive.
> >>>>> IIRC the page cache was implemented because of the slowness of the
> >>>>> caching mode transition itself, more specifically the wbinvd() call +
> >>>>> global TLB flush.
> >>> Yes, exactly that. The global TLB flush is what really breaks our neck here
> >>> from a performance perspective.
> >>>
> >>>>>> There is still an issue for iomem mappings, because the PAT validation
> >>>>>> does a linear walk of the resource tree (lol) for every vm_insert_pfn.
> >>>>>> But for i915 at least this is fixed by using the io_mapping
> >>>>>> infrastructure, which does the PAT reservation only once when you set
> >>>>>> up the mapping area at driver load.
> >>>>> Yes, I guess that was the issue that the comment describes, but the
> >>>>> issue wasn't there with vm_insert_mixed() + VM_MIXEDMAP.
> >>>>>
> >>>>>> Also TTM uses VM_PFNMAP right now for everything, so it can't be a
> >>>>>> problem that hurts much :-)
> >>>>> Hmm, both 5.11 and drm-tip appears to still use MIXEDMAP?
> >>>>>
> >>>>> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/ttm/ttm_bo_v…
> >>>> Uh that's bad, because mixed maps pointing at struct page won't stop
> >>>> gup. At least afaik.
> >>> Hui? I'm pretty sure MIXEDMAP stops gup as well. Otherwise we would have
> >>> already seen tons of problems with the page cache.
> >> On any architecture which has CONFIG_ARCH_HAS_PTE_SPECIAL vm_insert_mixed
> >> boils down to vm_insert_pfn wrt gup. And special pte stops gup fast path.
> >>
> >> But if you don't have VM_IO or VM_PFNMAP set, then I'm not seeing how
> >> you're stopping gup slow path. See check_vma_flags() in mm/gup.c.
> >>
> >> Also if you don't have CONFIG_ARCH_HAS_PTE_SPECIAL then I don't think
> >> vm_insert_mixed even works on iomem pfns. There's the devmap exception,
> >> but we're not devmap. Worse ttm abuses some accidental codepath to smuggle
> >> in hugepte support by intentionally not being devmap.
> >>
> >> So I'm really not sure this works as we think it should. Maybe good to do
> >> a quick test program on amdgpu with a buffer in system memory only and try
> >> to do direct io into it. If it works, you have a problem, and a bad one.
> > That's probably impossible, since a quick git grep shows that pretty
> > much anything reasonable has special ptes: arc, arm, arm64, powerpc,
> > riscv, s390, sh, sparc, x86. I don't think you'll have a platform
> > where you can plug an amdgpu in and actually exercise the bug :-)
>
> Hm. AFAIK _insert_mixed() doesn't set PTE_SPECIAL on system pages, so I
> don't see what should be stopping gup to those?
If you have an arch with pte special we use insert_pfn(), which afaict
will use pte_mkspecial for the !devmap case. And ttm isn't devmap
(otherwise our hugepte abuse of devmap hugeptes would go rather
wrong).
So I think it stops gup. But I haven't verified at all. Would be good
if Christian can check this with some direct io to a buffer in system
memory.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
tldr; DMA buffers aren't normal memory: expecting that you can use
them like that (like calling get_user_pages works, or that they're
accounted like any other normal memory) cannot be guaranteed.
Since some userspace only runs on integrated devices, where all
buffers are actually all resident system memory, there's a huge
temptation to assume that a struct page is always present and usable
like for any other pagecache backed mmap. This has the potential to
result in a uapi nightmare.
To close this gap, require that DMA buffer mmaps are VM_PFNMAP, which
blocks get_user_pages and all the other struct page based
infrastructure for everyone. In spirit this is the uapi counterpart to
the kernel-internal CONFIG_DMABUF_DEBUG.
Motivated by a recent patch which wanted to switch the system dma-buf
heap to vm_insert_page instead of vm_insert_pfn.
v2:
Jason brought up that we also want to guarantee that all ptes have the
pte_special flag set, to catch fast get_user_pages (on architectures
that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
From auditing the various functions to insert pfn pte entries
(vm_insert_pfn_prot, remap_pfn_range and all its callers like
dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so
this should be the correct flag to check for.
References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-Wb…
Acked-by: Christian König <christian.koenig(a)amd.com>
Cc: Jason Gunthorpe <jgg(a)ziepe.ca>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: John Stultz <john.stultz(a)linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f264b70c383e..06cb1d2e9fdc 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -127,6 +127,7 @@ static struct file_system_type dma_buf_fs_type = {
static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
{
struct dma_buf *dmabuf;
+ int ret;
if (!is_dma_buf_file(file))
return -EINVAL;
@@ -142,7 +143,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+ return ret;
}
static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -1244,6 +1249,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
{
+ int ret;
+
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
@@ -1264,7 +1271,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
vma_set_file(vma, dmabuf->file);
vma->vm_pgoff = pgoff;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+ return ret;
}
EXPORT_SYMBOL_GPL(dma_buf_mmap);
--
2.30.0
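For exporters, the simplest way to satisfy the WARN_ON added above is to build
the mapping with remap_pfn_range(), which sets VM_IO | VM_PFNMAP on the vma
itself. A sketch, assuming a physically contiguous buffer and made-up
example_* names:

#include <linux/dma-buf.h>
#include <linux/mm.h>

/* Hypothetical exporter state, for illustration only. */
struct example_buffer {
	phys_addr_t paddr;
	size_t size;
};

static int example_dmabuf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	struct example_buffer *buf = dmabuf->priv;

	if (vma->vm_end - vma->vm_start > buf->size)
		return -EINVAL;

	/* remap_pfn_range() inserts pfn/special ptes and marks the vma as
	 * VM_IO | VM_PFNMAP, so get_user_pages() on this range will fail,
	 * which is exactly what the WARN_ON in dma_buf_mmap() checks for. */
	return remap_pfn_range(vma, vma->vm_start,
			       PHYS_PFN(buf->paddr) + vma->vm_pgoff,
			       vma->vm_end - vma->vm_start, vma->vm_page_prot);
}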
On Wed, Feb 24, 2021 at 10:16 AM Thomas Hellström (Intel)
<thomas_os(a)shipmail.org> wrote:
>
>
> On 2/24/21 9:45 AM, Daniel Vetter wrote:
> > On Wed, Feb 24, 2021 at 8:46 AM Thomas Hellström (Intel)
> > <thomas_os(a)shipmail.org> wrote:
> >>
> >> On 2/23/21 11:59 AM, Daniel Vetter wrote:
> >>> tldr; DMA buffers aren't normal memory, expecting that you can use
> >>> them like that (like calling get_user_pages works, or that they're
> >>> accounting like any other normal memory) cannot be guaranteed.
> >>>
> >>> Since some userspace only runs on integrated devices, where all
> >>> buffers are actually all resident system memory, there's a huge
> >>> temptation to assume that a struct page is always present and useable
> >>> like for any more pagecache backed mmap. This has the potential to
> >>> result in a uapi nightmare.
> >>>
> >>> To stop this gap require that DMA buffer mmaps are VM_PFNMAP, which
> >>> blocks get_user_pages and all the other struct page based
> >>> infrastructure for everyone. In spirit this is the uapi counterpart to
> >>> the kernel-internal CONFIG_DMABUF_DEBUG.
> >>>
> >>> Motivated by a recent patch which wanted to switch the system dma-buf
> >>> heap to vm_insert_page instead of vm_insert_pfn.
> >>>
> >>> v2:
> >>>
> >>> Jason brought up that we also want to guarantee that all ptes have the
> >>> pte_special flag set, to catch fast get_user_pages (on architectures
> >>> that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
> >>> still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
> >>>
> >>> From auditing the various functions to insert pfn pte entries
> >>> (vm_insert_pfn_prot, remap_pfn_range and all its callers like
> >>> dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so
> >>> this should be the correct flag to check for.
> >>>
> >> If we require VM_PFNMAP, for ordinary page mappings, we also need to
> >> disallow COW mappings, since it will not work on architectures that
> >> don't have CONFIG_ARCH_HAS_PTE_SPECIAL, (see the docs for vm_normal_page()).
> > Hm I figured everyone just uses MAP_SHARED for buffer objects since
> > COW really makes absolutely no sense. How would we enforce this?
>
> Perhaps returning -EINVAL on is_cow_mapping() at mmap time. Either that
> or allowing MIXEDMAP.
>
> >> Also worth noting is the comment in ttm_bo_mmap_vma_setup() with
> >> possible performance implications with x86 + PAT + VM_PFNMAP + normal
> >> pages. That's a very old comment, though, and might not be valid anymore.
> > I think that's why ttm has a page cache for these, because it indeed
> > sucks. The PAT changes on pages are rather expensive.
>
> IIRC the page cache was implemented because of the slowness of the
> caching mode transition itself, more specifically the wbinvd() call +
> global TLB flush.
>
> >
> > There is still an issue for iomem mappings, because the PAT validation
> > does a linear walk of the resource tree (lol) for every vm_insert_pfn.
> > But for i915 at least this is fixed by using the io_mapping
> > infrastructure, which does the PAT reservation only once when you set
> > up the mapping area at driver load.
>
> Yes, I guess that was the issue that the comment describes, but the
> issue wasn't there with vm_insert_mixed() + VM_MIXEDMAP.
>
> >
> > Also TTM uses VM_PFNMAP right now for everything, so it can't be a
> > problem that hurts much :-)
>
> Hmm, both 5.11 and drm-tip appears to still use MIXEDMAP?
>
> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/ttm/ttm_bo_v…
Uh that's bad, because mixed maps pointing at struct page won't stop
gup. At least afaik.
Christian, do we need to patch this up, and maybe fix up ttm fault
handler to use io_mapping so the vm_insert_pfn stuff is fast?
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Wed, Feb 24, 2021 at 8:46 AM Thomas Hellström (Intel)
<thomas_os(a)shipmail.org> wrote:
>
>
> On 2/23/21 11:59 AM, Daniel Vetter wrote:
> > tldr; DMA buffers aren't normal memory, expecting that you can use
> > them like that (like calling get_user_pages works, or that they're
> > accounting like any other normal memory) cannot be guaranteed.
> >
> > Since some userspace only runs on integrated devices, where all
> > buffers are actually all resident system memory, there's a huge
> > temptation to assume that a struct page is always present and useable
> > like for any more pagecache backed mmap. This has the potential to
> > result in a uapi nightmare.
> >
> > To stop this gap require that DMA buffer mmaps are VM_PFNMAP, which
> > blocks get_user_pages and all the other struct page based
> > infrastructure for everyone. In spirit this is the uapi counterpart to
> > the kernel-internal CONFIG_DMABUF_DEBUG.
> >
> > Motivated by a recent patch which wanted to switch the system dma-buf
> > heap to vm_insert_page instead of vm_insert_pfn.
> >
> > v2:
> >
> > Jason brought up that we also want to guarantee that all ptes have the
> > pte_special flag set, to catch fast get_user_pages (on architectures
> > that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
> > still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
> >
> > From auditing the various functions to insert pfn pte entries
> > (vm_insert_pfn_prot, remap_pfn_range and all its callers like
> > dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so
> > this should be the correct flag to check for.
> >
> If we require VM_PFNMAP, for ordinary page mappings, we also need to
> disallow COW mappings, since it will not work on architectures that
> don't have CONFIG_ARCH_HAS_PTE_SPECIAL, (see the docs for vm_normal_page()).
Hm I figured everyone just uses MAP_SHARED for buffer objects since
COW really makes absolutely no sense. How would we enforce this?
> Also worth noting is the comment in ttm_bo_mmap_vma_setup() with
> possible performance implications with x86 + PAT + VM_PFNMAP + normal
> pages. That's a very old comment, though, and might not be valid anymore.
I think that's why ttm has a page cache for these, because it indeed
sucks. The PAT changes on pages are rather expensive.
There is still an issue for iomem mappings, because the PAT validation
does a linear walk of the resource tree (lol) for every vm_insert_pfn.
But for i915 at least this is fixed by using the io_mapping
infrastructure, which does the PAT reservation only once when you set
up the mapping area at driver load.
Also TTM uses VM_PFNMAP right now for everything, so it can't be a
problem that hurts much :-)
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Wed, Feb 24, 2021 at 12:22 PM Thomas Hellström (Intel)
<thomas_os(a)shipmail.org> wrote:
>
>
> On 2/24/21 10:26 AM, Daniel Vetter wrote:
> > On Wed, Feb 24, 2021 at 9:47 AM Thomas Hellström (Intel)
> > <thomas_os(a)shipmail.org> wrote:
> >>
> >> On 2/3/21 4:29 PM, Daniel Vetter wrote:
> >>> Recently there was a fairly long thread about recoverable hardware page
> >>> faults, how they can deadlock, and what to do about that.
> >>>
> >>> While the discussion is still fresh I figured good time to try and
> >>> document the conclusions a bit. This documentation section explains
> >>> what's the potential problem, and the remedies we've discussed,
> >>> roughly ordered from best to worst.
> >>>
> >>> v2: Linus -> Linux typo (Dave)
> >>>
> >>> v3:
> >>> - Make it clear drivers only need to implement one option (Christian)
> >>> - Make it clearer that implicit sync is out the window with exclusive
> >>> fences (Christian)
> >>> - Add the fairly theoretical option of segmenting the memory (either
> >>> statically or through dynamic checks at runtime for which piece of
> >>> memory is managed how) and explain why it's not a great idea (Felix)
> >>>
> >>> References: https://lore.kernel.org/dri-devel/20210107030127.20393-1-Felix.Kuehling@amd…
> >>> Cc: Dave Airlie <airlied(a)gmail.com>
> >>> Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
> >>> Cc: Thomas Hellström <thomas.hellstrom(a)intel.com>
> >>> Cc: "Christian König" <christian.koenig(a)amd.com>
> >>> Cc: Jerome Glisse <jglisse(a)redhat.com>
> >>> Cc: Felix Kuehling <felix.kuehling(a)amd.com>
> >>> Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
> >>> Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
> >>> Cc: linux-media(a)vger.kernel.org
> >>> Cc: linaro-mm-sig(a)lists.linaro.org
> >>> ---
> >>> Documentation/driver-api/dma-buf.rst | 76 ++++++++++++++++++++++++++++
> >>> 1 file changed, 76 insertions(+)
> >>>
> >>> diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> >>> index a2133d69872c..7f37ec30d9fd 100644
> >>> --- a/Documentation/driver-api/dma-buf.rst
> >>> +++ b/Documentation/driver-api/dma-buf.rst
> >>> @@ -257,3 +257,79 @@ fences in the kernel. This means:
> >>> userspace is allowed to use userspace fencing or long running compute
> >>> workloads. This also means no implicit fencing for shared buffers in these
> >>> cases.
> >>> +
> >>> +Recoverable Hardware Page Faults Implications
> >>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >>> +
> >>> +Modern hardware supports recoverable page faults, which has a lot of
> >>> +implications for DMA fences.
> >>> +
> >>> +First, a pending page fault obviously holds up the work that's running on the
> >>> +accelerator and a memory allocation is usually required to resolve the fault.
> >>> +But memory allocations are not allowed to gate completion of DMA fences, which
> >>> +means any workload using recoverable page faults cannot use DMA fences for
> >>> +synchronization. Synchronization fences controlled by userspace must be used
> >>> +instead.
> >>> +
> >>> +On GPUs this poses a problem, because current desktop compositor protocols on
> >>> +Linux rely on DMA fences, which means without an entirely new userspace stack
> >>> +built on top of userspace fences, they cannot benefit from recoverable page
> >>> +faults. Specifically this means implicit synchronization will not be possible.
> >>> +The exception is when page faults are only used as migration hints and never to
> >>> +on-demand fill a memory request. For now this means recoverable page
> >>> +faults on GPUs are limited to pure compute workloads.
> >>> +
> >>> +Furthermore GPUs usually have shared resources between the 3D rendering and
> >>> +compute side, like compute units or command submission engines. If both a 3D
> >>> +job with a DMA fence and a compute workload using recoverable page faults are
> >>> +pending they could deadlock:
> >>> +
> >>> +- The 3D workload might need to wait for the compute job to finish and release
> >>> + hardware resources first.
> >>> +
> >>> +- The compute workload might be stuck in a page fault, because the memory
> >>> + allocation is waiting for the DMA fence of the 3D workload to complete.
> >>> +
> >>> +There are a few options to prevent this problem, one of which drivers need to
> >>> +ensure:
> >>> +
> >>> +- Compute workloads can always be preempted, even when a page fault is pending
> >>> + and not yet repaired. Not all hardware supports this.
> >>> +
> >>> +- DMA fence workloads and workloads which need page fault handling have
> >>> + independent hardware resources to guarantee forward progress. This could be
> >>> +  achieved e.g. through dedicated engines and minimal compute unit
> >>> + reservations for DMA fence workloads.
> >>> +
> >>> +- The reservation approach could be further refined by only reserving the
> >>> + hardware resources for DMA fence workloads when they are in-flight. This must
> >>> + cover the time from when the DMA fence is visible to other threads up to
> >>> + moment when fence is completed through dma_fence_signal().
> >>> +
> >>> +- As a last resort, if the hardware provides no useful reservation mechanics,
> >>> + all workloads must be flushed from the GPU when switching between jobs
> >>> + requiring DMA fences or jobs requiring page fault handling: This means all DMA
> >>> + fences must complete before a compute job with page fault handling can be
> >>> + inserted into the scheduler queue. And vice versa, before a DMA fence can be
> >>> + made visible anywhere in the system, all compute workloads must be preempted
> >>> + to guarantee all pending GPU page faults are flushed.
> >>> +
> >>> +- Only a fairly theoretical option would be to untangle these dependencies when
> >>> + allocating memory to repair hardware page faults, either through separate
> >>> + memory blocks or runtime tracking of the full dependency graph of all DMA
> >>> + fences. This results very wide impact on the kernel, since resolving the page
> >>> + on the CPU side can itself involve a page fault. It is much more feasible and
> >>> + robust to limit the impact of handling hardware page faults to the specific
> >>> + driver.
> >>> +
> >>> +Note that workloads that run on independent hardware like copy engines or other
> >>> +GPUs do not have any impact. This allows us to keep using DMA fences internally
> >>> +in the kernel even for resolving hardware page faults, e.g. by using copy
> >>> +engines to clear or copy memory needed to resolve the page fault.
> >>> +
> >>> +In some ways this page fault problem is a special case of the `Infinite DMA
> >>> +Fences` discussions: Infinite fences from compute workloads are allowed to
> >>> +depend on DMA fences, but not the other way around. And not even the page fault
> >>> +problem is new, because some other CPU thread in userspace might
> >>> +hit a page fault which holds up a userspace fence - supporting page faults on
> >>> +GPUs doesn't add anything fundamentally new.
> >> To me, in general this looks good. One thing, though is that for a first
> >> time reader it might not be totally clear what's special with a compute
> >> workload. Perhaps some clarification?
> > In the docs this new section is right after the infinite fence
> > section, which goes through this kind of stuff. So it's not so much
> > "compute workloads" but "infinite fences", which I think is explained
> > plenty enough.
> >
> OK,
> >> Also since the current cross-driver dma_fence locking order is
> >>
> >> 1) dma_resv ->
> >> 2) memory_allocation / reclaim ->
> >> 3) dma_fence_wait/critical
> >>
> >> And the locking order required for recoverable pagefault is
> >>
> >> a) dma_resv ->
> >> b) fence_wait/critical ->
> >> c) memory_allocation / reclaim
> >>
> >> (Possibly with a) and b) interchanged above, Is it possible to service a
> >> recoverable pagefault without taking the dma_resv lock?)
> > It's worse, since the lock order we care about is:
> > 1) mmap_sem
> > 2) dma_resv
> > 3) reclaim
> > 4) dma_fence_wait
> >
> > And for this nice brave new world of unified shared memory/hmm, we
> > really need to be able to resolve arbitrary cpu side page faults (with
> > fixup_user_fault() like the iommu driver does too for PASID mode) to
> > be able to serve gpu page faults. So even if we take dma_resv
> > completely out of the system we still have:
> >
> > 1) mmap_sem
> > 2) reclaim
> > 3) dma_fence_wait
> >
> > So we'd also need to throw out dma_fence_wait, and at that point we're
> > looking at a new gpu driver stack.
> >
> Meaning that the locking order for workloads with recoverable page
> faults becomes:
>
> a) dma_fence_wait/critical
> b) mmap_sem
> c) dma_resv
> d) reclaim
>
> which I agree we can't really use with the current stack whatever we do
> with dma_fence_wait vs reclaim.
>
>
> >> It's clear that the fence critical section in b) is not compatible with
> >> the dma_fence wait in 3) and thus the memory restrictions are needed.
> >> But I think given the memory allocation restrictions for recoverable
> >> pagefaults I guess at some point we must ask ourselves why are they
> >> necessary and what's the price to be paid for getting rid of them, and
> >> document also that. *If* it's the case that it all boils down to the 2)
> >> -> 3) locking order above, and that's mandated *only* by the dma_fence
> >> wait in the userptr mmu notifiers, I think these restrictions are a
> >> pretty high price to pay. Wouldn't it be possible now to replace that
> >> fence wait with either page pinning (which now is coherent since 5.9) or
> >> preempt-ctx fences + unpinned pages if available and thus invert the 2)
> >> -> 3) locking order?
> > It's not just userptr, it's also shrinkers. mmap_sem requires
> > GFP_KERNEL allocations, so that already excludes our shrinkers if we
> > want this. That means gpu memory becomes pinned when it's managed with
> > dma_fence. Christian König just reworked ttm to stop doing that, to
> > remove the hard and fairly arbitrary "max 50% of system memory" limit.
>
> Even with shrinkers, and assuming there is no guarantee we can preempt,
> one could tag memory / bos for release on next reservation / dma_fence
> signal whatever happens first, which would not give memory back on
> direct reclaim, but will eventually release it. Will not help with the
> mmap_sem issue, though.
>
>
> >
> >
> > Note that just preempt-ctx fences alone isn't enough, since you could
> > end up with something like this:
> > - compute workload using gpu page faults hangs all of the
> > compute/shadercores on page faults, we can't preempt
> > - concurrently there's a 3d workload running, which because fixed
> > function, and only preempt between draw calls. it is stuck waiting for
> > some shader cores to become available. this is possible because most
> > gpus nowadays have separate command queues for compute/3d workloads
> > - our kernel page fault handler for the compute job wants to preempt
> > the 3d workload, which wont happen
> > - everyone is firmly stuck and the user gets angry
> Yes, preempt-ctx fences would indeed have to be guaranteed to be able to
> preempt to work, using one of the options described above, But in
> general, inverting reclaim and dma_fence_wait would actually resolve
> this particular situation, even if it doesn't help with recoverable
> pagefaults due to mmap_sem:
>
> - kernel page fault handler ends up in shrinker tagging 3D workload
> resources for release
> - Moves on to another shrinker that succeeds to release enough memory
> - Compute workload proceeds
> - 3D workload proceeds.
>
> ..But just wanting to get the full picture of what the tradeoffs of this
> really are.
The trouble with this is that we still need to guarantee forward
progress. That means we must have enough memory available in other
shrinkers/page-cache which is not hogged by the gpu in any way. E.g.
just removing the dma_fence_wait from the mmu notifier won't help if we
still hold the page refcounts at large.
There's roughly two options:
- Lots of local caches for memory allocations which are sized to
contain the max in-flight working set. This is essentially what the
block layer uses for GFP_NOIO inversions. Problem is you must
guarantee that everyone goes through these mempools, and that they're
actually sized for the worst case. Since we're talking about fixing up
userspace page faults that's pretty much the entire kernel which needs
to be switched over to accurate memory reservation and accounting (see
the sketch after this list).
- Other option is what essentially shrinkers/filesystems do with
GFP_NOFS: You hard-limit them to something that's small enough overall
and make sure you still have enough other memory (like normal page
cache, which can be evicted with GFP_NOFS generally) to guarantee
forward progress. Since shrinkers keep looping even when there's
fairly minimal forward progress you should be able to get out of
things again. Problem for us is that we don't just have shrinkers, but
also love to consume arbitrary pagecache with get_user_pages/userptr,
so this is tricky to guarantee. The big hammer approach here is what ttm
has done thus far with a hard limit of 50% and a best-effort kthread
that tries to balance things in the background, outside of reclaim.
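A minimal sketch of what the first option (preallocated local caches) could
look like, assuming a made-up example_fault_desc object and a worst-case bound
that the driver would actually have to guarantee:

#include <linux/mempool.h>
#include <linux/slab.h>

/* Hypothetical per-fault bookkeeping object, for illustration only. */
struct example_fault_desc {
	u64 gpu_addr;
};

#define EXAMPLE_MAX_INFLIGHT_FAULTS 64	/* must bound the real worst case */

static mempool_t *example_fault_pool;

static int example_init_fault_pool(void)
{
	/* Reserve the worst-case number of objects up front. */
	example_fault_pool =
		mempool_create_kmalloc_pool(EXAMPLE_MAX_INFLIGHT_FAULTS,
					    sizeof(struct example_fault_desc));
	return example_fault_pool ? 0 : -ENOMEM;
}

static struct example_fault_desc *example_alloc_fault_desc(void)
{
	/* GFP_NOWAIT: never recurse into reclaim (and hence never into
	 * dma_fence_wait); fall back to the preallocated reserve instead.
	 * Returns NULL if the reserve is exhausted, i.e. the bound above
	 * was wrong, which is exactly the hard part of this option. */
	return mempool_alloc(example_fault_pool, GFP_NOWAIT);
}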
The other problem I'm seeing is that core kernel is building ever more
features on top of page migration, and there's a fairly constant flow
of work to get to a point where page migration can be guaranteed.
Stuff like get_user_pages vs. pin_user_pages, recent work (I think not
yet merged) to move pages out of ZONE_MOVABLE if they're pinned. This
is because hugepages have become rather critical for performance, and
contiguous memory allocator really has to guarantee it can find its
large blocks (I think there's people looking into using CMA for giant
pages, since you're not going to ever get 1gb pages out of the buddy
allocator). GPUs hogging arbitrary amounts of these pages through
userptr, without being able to release them in a timely manner, does
not look like a design with a bright future.
But yeah if we could just wish away the dma_fence_wait from reclaim
context, all these problems would go away.
-Daniel
> /Thomas
> >
> > So just requiring that everything uses preempt-ctx fences isn't enough
> > due to shared resources possible blocking preemption even across
> > engines. Plus we'd still have the problem that dma_fence_wait from
> > reclaim isn't allowed, pinning all the 3d workload memory for good.
> >
> > Aside: This means that for compute workloads using page faults we
> > cannot use preempt-ctx fences either, but instead memory reclaim has
> > to exclusively use pte zapping (both shrinker and mmu notifier).
> >
> > Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Wed, Feb 24, 2021 at 9:47 AM Thomas Hellström (Intel)
<thomas_os(a)shipmail.org> wrote:
>
>
> On 2/3/21 4:29 PM, Daniel Vetter wrote:
> > Recently there was a fairly long thread about recoverable hardware page
> > faults, how they can deadlock, and what to do about that.
> >
> > While the discussion is still fresh I figured good time to try and
> > document the conclusions a bit. This documentation section explains
> > what's the potential problem, and the remedies we've discussed,
> > roughly ordered from best to worst.
> >
> > v2: Linus -> Linux typo (Dave)
> >
> > v3:
> > - Make it clear drivers only need to implement one option (Christian)
> > - Make it clearer that implicit sync is out the window with exclusive
> > fences (Christian)
> > - Add the fairly theoretical option of segmenting the memory (either
> > statically or through dynamic checks at runtime for which piece of
> > memory is managed how) and explain why it's not a great idea (Felix)
> >
> > References: https://lore.kernel.org/dri-devel/20210107030127.20393-1-Felix.Kuehling@amd…
> > Cc: Dave Airlie <airlied(a)gmail.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
> > Cc: Thomas Hellström <thomas.hellstrom(a)intel.com>
> > Cc: "Christian König" <christian.koenig(a)amd.com>
> > Cc: Jerome Glisse <jglisse(a)redhat.com>
> > Cc: Felix Kuehling <felix.kuehling(a)amd.com>
> > Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
> > Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
> > Cc: linux-media(a)vger.kernel.org
> > Cc: linaro-mm-sig(a)lists.linaro.org
> > ---
> > Documentation/driver-api/dma-buf.rst | 76 ++++++++++++++++++++++++++++
> > 1 file changed, 76 insertions(+)
> >
> > diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> > index a2133d69872c..7f37ec30d9fd 100644
> > --- a/Documentation/driver-api/dma-buf.rst
> > +++ b/Documentation/driver-api/dma-buf.rst
> > @@ -257,3 +257,79 @@ fences in the kernel. This means:
> > userspace is allowed to use userspace fencing or long running compute
> > workloads. This also means no implicit fencing for shared buffers in these
> > cases.
> > +
> > +Recoverable Hardware Page Faults Implications
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Modern hardware supports recoverable page faults, which has a lot of
> > +implications for DMA fences.
> > +
> > +First, a pending page fault obviously holds up the work that's running on the
> > +accelerator and a memory allocation is usually required to resolve the fault.
> > +But memory allocations are not allowed to gate completion of DMA fences, which
> > +means any workload using recoverable page faults cannot use DMA fences for
> > +synchronization. Synchronization fences controlled by userspace must be used
> > +instead.
> > +
> > +On GPUs this poses a problem, because current desktop compositor protocols on
> > +Linux rely on DMA fences, which means without an entirely new userspace stack
> > +built on top of userspace fences, they cannot benefit from recoverable page
> > +faults. Specifically this means implicit synchronization will not be possible.
> > +The exception is when page faults are only used as migration hints and never to
> > +on-demand fill a memory request. For now this means recoverable page
> > +faults on GPUs are limited to pure compute workloads.
> > +
> > +Furthermore GPUs usually have shared resources between the 3D rendering and
> > +compute side, like compute units or command submission engines. If both a 3D
> > +job with a DMA fence and a compute workload using recoverable page faults are
> > +pending they could deadlock:
> > +
> > +- The 3D workload might need to wait for the compute job to finish and release
> > + hardware resources first.
> > +
> > +- The compute workload might be stuck in a page fault, because the memory
> > + allocation is waiting for the DMA fence of the 3D workload to complete.
> > +
> > +There are a few options to prevent this problem, one of which drivers need to
> > +ensure:
> > +
> > +- Compute workloads can always be preempted, even when a page fault is pending
> > + and not yet repaired. Not all hardware supports this.
> > +
> > +- DMA fence workloads and workloads which need page fault handling have
> > + independent hardware resources to guarantee forward progress. This could be
> > +  achieved e.g. through dedicated engines and minimal compute unit
> > + reservations for DMA fence workloads.
> > +
> > +- The reservation approach could be further refined by only reserving the
> > + hardware resources for DMA fence workloads when they are in-flight. This must
> > + cover the time from when the DMA fence is visible to other threads up to
> > + moment when fence is completed through dma_fence_signal().
> > +
> > +- As a last resort, if the hardware provides no useful reservation mechanics,
> > + all workloads must be flushed from the GPU when switching between jobs
> > + requiring DMA fences or jobs requiring page fault handling: This means all DMA
> > + fences must complete before a compute job with page fault handling can be
> > + inserted into the scheduler queue. And vice versa, before a DMA fence can be
> > + made visible anywhere in the system, all compute workloads must be preempted
> > + to guarantee all pending GPU page faults are flushed.
> > +
> > +- Only a fairly theoretical option would be to untangle these dependencies when
> > + allocating memory to repair hardware page faults, either through separate
> > + memory blocks or runtime tracking of the full dependency graph of all DMA
> > + fences. This results very wide impact on the kernel, since resolving the page
> > + on the CPU side can itself involve a page fault. It is much more feasible and
> > + robust to limit the impact of handling hardware page faults to the specific
> > + driver.
> > +
> > +Note that workloads that run on independent hardware like copy engines or other
> > +GPUs do not have any impact. This allows us to keep using DMA fences internally
> > +in the kernel even for resolving hardware page faults, e.g. by using copy
> > +engines to clear or copy memory needed to resolve the page fault.
> > +
> > +In some ways this page fault problem is a special case of the `Infinite DMA
> > +Fences` discussions: Infinite fences from compute workloads are allowed to
> > +depend on DMA fences, but not the other way around. And not even the page fault
> > +problem is new, because some other CPU thread in userspace might
> > +hit a page fault which holds up a userspace fence - supporting page faults on
> > +GPUs doesn't add anything fundamentally new.
>
> To me, in general this looks good. One thing, though is that for a first
> time reader it might not be totally clear what's special with a compute
> workload. Perhaps some clarification?
In the docs this new section is right after the infinite fence
section, which goes through this kind of stuff. So it's not so much
"compute workloads" but "infinite fences", which I think is explained
plenty enough.
> Also since the current cross-driver dma_fence locking order is
>
> 1) dma_resv ->
> 2) memory_allocation / reclaim ->
> 3) dma_fence_wait/critical
>
> And the locking order required for recoverable pagefault is
>
> a) dma_resv ->
> b) fence_wait/critical ->
> c) memory_allocation / reclaim
>
> (Possibly with a) and b) interchanged above, Is it possible to service a
> recoverable pagefault without taking the dma_resv lock?)
It's worse, since the lock order we care about is:
1) mmap_sem
2) dma_resv
3) reclaim
4) dma_fence_wait
And for this nice brave new world of unified shared memory/hmm, we
really need to be able to resolve arbitrary cpu side page faults (with
fixup_user_fault() like the iommu driver does too for PASID mode) to
be able to serve gpu page faults. So even if we take dma_resv
completely out of the system we still have:
1) mmap_sem
2) reclaim
3) dma_fence_wait
So we'd also need to throw out dma_fence_wait, and at that point we're
looking at a new gpu driver stack.
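That ordering is what the dma_fence signalling annotations are meant to catch;
roughly, a submission path would look like the sketch below. The example_*
names are made up, only dma_fence_begin/end_signalling() and dma_fence_signal()
are real.

#include <linux/dma-fence.h>

/* Sketch only: the critical section in which a driver publishes and then
 * signals a DMA fence, annotated with the existing dma_fence lockdep
 * machinery. */
static void example_run_job(struct example_job *job)
{
	bool cookie;

	/* mmap_lock and dma_resv, if needed, were taken before this point. */
	cookie = dma_fence_begin_signalling();

	/* From here until end_signalling: no GFP_KERNEL allocations, no
	 * dma_resv_lock(), no mmap_lock - all of them can recurse into
	 * reclaim, and reclaim may wait on the very fence published here. */
	example_hw_submit(job);
	dma_fence_signal(job->done_fence);

	dma_fence_end_signalling(cookie);
}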
> It's clear that the fence critical section in b) is not compatible with
> the dma_fence wait in 3) and thus the memory restrictions are needed.
> But I think given the memory allocation restrictions for recoverable
> pagefaults I guess at some point we must ask ourselves why are they
> necessary and what's the price to be paid for getting rid of them, and
> document also that. *If* it's the case that it all boils down to the 2)
> -> 3) locking order above, and that's mandated *only* by the dma_fence
> wait in the userptr mmu notifiers, I think these restrictions are a
> pretty high price to pay. Wouldn't it be possible now to replace that
> fence wait with either page pinning (which now is coherent since 5.9) or
> preempt-ctx fences + unpinned pages if available and thus invert the 2)
> -> 3) locking order?
It's not just userptr, it's also shrinkers. mmap_sem requires
GFP_KERNEL allocations, so that already excludes our shrinkers if we
want this. That means gpu memory becomes pinned when it's managed with
dma_fence. Christian König just reworked ttm to stop doing that, to
remove the hard and fairly arbitrary "max 50% of system memory" limit.
The only option that would work is if we throw dma_fence out and
exclusively manage memory with gpu page faults only. Even across the
gpus that do support page faults that's not true for all workloads,
e.g. nvidia only supports page faults on compute engines, not on any
of the 3d fixed function stuff or media codecs (to my knowledge). So
not going to happen. In addition, this plan requires that we stop
using dma_fence for sync (otherwise you can't use preempt-ctx fences).
That's the "10 year plan to rev the ecosystem" problem, although with
lots of discussions with Christian and Jason from mesa I think we've
boiled this down to a more minimal task list by now:
- vk driver mode that uses userspace fences. There's a somewhat nasty
problem here because you can only decide which mode to use when you
know which winsys to use. And with vk that's created after the render
pipeline (unlike gl). Plus it's dynamic, e.g. if you run on X or
wayland without the below stuff support, but then use direct display
and the kernel already has timeline syncobj support, you should
probably use userspace fencing.
- new wayland/x protocol to pass timeline drm_syncobj around as syncobj
- drm_syncobj support in atomic ioctl
- compositors switching over to vk + using timeline syncobj for fencing
- new drm_syncobj mode where we wrap a userspace fence: new userspace
would pull the userspace fence out, any legacy drivers would have to
block until the fence has signalled. This sucks compared to current
timeline syncobj, where we just wait for the dma_fence to materialize,
but since it can use the same paths/ioctls it would at least give us a
smooth upgrade path, including allowing both old and new userspace to
co-exist on the same compositor session.
Note that just preempt-ctx fences alone isn't enough, since you could
end up with something like this:
- compute workload using gpu page faults hangs all of the
compute/shader cores on page faults, we can't preempt
- concurrently there's a 3d workload running, which because it's fixed
function can only preempt between draw calls. it is stuck waiting for
some shader cores to become available. this is possible because most
gpus nowadays have separate command queues for compute/3d workloads
- our kernel page fault handler for the compute job wants to preempt
the 3d workload, which wont happen
- everyone is firmly stuck and the user gets angry
So just requiring that everything uses preempt-ctx fences isn't enough
due to shared resources possibly blocking preemption even across
engines. Plus we'd still have the problem that dma_fence_wait from
reclaim isn't allowed, pinning all the 3d workload memory for good.
Aside: This means that for compute workloads using page faults we
cannot use preempt-ctx fences either, but instead memory reclaim has
to exclusively use pte zapping (both shrinker and mmu notifier).
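In code, "pte zapping only" on the mmu notifier side would look roughly like
the sketch below, using the mmu_interval_notifier API; the example_* structure
and helpers are made up, and the important part is only that nothing in the
invalidate path calls dma_fence_wait().

#include <linux/mmu_notifier.h>

/* Hypothetical per-VM driver state for illustration. */
struct example_vm {
	struct mmu_interval_notifier notifier;
	/* ... gpu page table state ... */
};

/* Sketch: invalidate callback that only zaps gpu ptes and preempts the
 * page-fault-based context, without ever waiting on a DMA fence. */
static bool example_invalidate(struct mmu_interval_notifier *mni,
			       const struct mmu_notifier_range *range,
			       unsigned long cur_seq)
{
	struct example_vm *evm = container_of(mni, struct example_vm, notifier);

	if (!mmu_notifier_range_blockable(range))
		return false;

	mmu_interval_set_seq(mni, cur_seq);

	/* Zap the gpu page tables covering range->start..range->end and
	 * kick/preempt the faulting context; no dma_fence_wait() anywhere. */
	example_zap_gpu_ptes(evm, range->start, range->end);

	return true;
}

static const struct mmu_interval_notifier_ops example_notifier_ops = {
	.invalidate = example_invalidate,
};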
Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch