On Thu, Mar 11, 2021 at 2:12 PM Thomas Hellström (Intel)
<thomas_os(a)shipmail.org> wrote:
>
> Hi!
>
> On 3/11/21 2:00 PM, Daniel Vetter wrote:
> > On Thu, Mar 11, 2021 at 11:22:06AM +0100, Thomas Hellström (Intel) wrote:
> >> On 3/1/21 3:09 PM, Daniel Vetter wrote:
> >>> On Mon, Mar 1, 2021 at 11:17 AM Christian König
> >>> <christian.koenig(a)amd.com> wrote:
> >>>>
> >>>> Am 01.03.21 um 10:21 schrieb Thomas Hellström (Intel):
> >>>>> On 3/1/21 10:05 AM, Daniel Vetter wrote:
> >>>>>> On Mon, Mar 01, 2021 at 09:39:53AM +0100, Thomas Hellström (Intel)
> >>>>>> wrote:
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> On 3/1/21 9:28 AM, Daniel Vetter wrote:
> >>>>>>>> On Sat, Feb 27, 2021 at 9:06 AM Thomas Hellström (Intel)
> >>>>>>>> <thomas_os(a)shipmail.org> wrote:
> >>>>>>>>> On 2/26/21 2:28 PM, Daniel Vetter wrote:
> >>>>>>>>>> So I think it stops gup. But I haven't verified at all. Would be
> >>>>>>>>>> good
> >>>>>>>>>> if Christian can check this with some direct io to a buffer in
> >>>>>>>>>> system
> >>>>>>>>>> memory.
> >>>>>>>>> Hmm,
> >>>>>>>>>
> >>>>>>>>> Docs (again, vm_normal_page()) say:
> >>>>>>>>>
> >>>>>>>>> * VM_MIXEDMAP mappings can likewise contain memory with or
> >>>>>>>>> without "struct
> >>>>>>>>> * page" backing, however the difference is that _all_ pages
> >>>>>>>>> with a struct
> >>>>>>>>> * page (that is, those where pfn_valid is true) are refcounted
> >>>>>>>>> and
> >>>>>>>>> considered
> >>>>>>>>> * normal pages by the VM. The disadvantage is that pages are
> >>>>>>>>> refcounted
> >>>>>>>>> * (which can be slower and simply not an option for some PFNMAP
> >>>>>>>>> users). The
> >>>>>>>>> * advantage is that we don't have to follow the strict
> >>>>>>>>> linearity rule of
> >>>>>>>>> * PFNMAP mappings in order to support COWable mappings.
> >>>>>>>>>
> >>>>>>>>> but it's true __vm_insert_mixed() ends up in the insert_pfn()
> >>>>>>>>> path, so
> >>>>>>>>> the above isn't really true, which makes me wonder if and in that
> >>>>>>>>> case
> >>>>>>>>> why there could any longer ever be a significant performance
> >>>>>>>>> difference
> >>>>>>>>> between MIXEDMAP and PFNMAP.
> >>>>>>>> Yeah it's definitely confusing. I guess I'll hack up a patch and see
> >>>>>>>> what sticks.
> >>>>>>>>
> >>>>>>>>> BTW regarding the TTM hugeptes, I don't think we ever landed that
> >>>>>>>>> devmap
> >>>>>>>>> hack, so they are (for the non-gup case) relying on
> >>>>>>>>> vma_is_special_huge(). For the gup case, I think the bug is still
> >>>>>>>>> there.
> >>>>>>>> Maybe there's another devmap hack, but the ttm_vm_insert functions do
> >>>>>>>> use PFN_DEV and all that. And I think that stops gup_fast from trying
> >>>>>>>> to find the underlying page.
> >>>>>>>> -Daniel
> >>>>>>> Hmm perhaps it might, but I don't think so. The fix I tried out was
> >>>>>>> to set
> >>>>>>>
> >>>>>>> PFN_DEV | PFN_MAP for huge PTEs which causes pfn_devmap() to be
> >>>>>>> true, and
> >>>>>>> then
> >>>>>>>
> >>>>>>> follow_devmap_pmd()->get_dev_pagemap() which returns NULL and
> >>>>>>> gup_fast()
> >>>>>>> backs off,
> >>>>>>>
> >>>>>>> in the end that would mean setting in stone that "if there is a huge
> >>>>>>> devmap
> >>>>>>> page table entry for which we haven't registered any devmap struct
> >>>>>>> pages
> >>>>>>> (get_dev_pagemap returns NULL), we should treat that as a "special"
> >>>>>>> huge
> >>>>>>> page table entry".
> >>>>>>>
> >>>>>>> From what I can tell, all code calling get_dev_pagemap() already
> >>>>>>> does that,
> >>>>>>> it's just a question of getting it accepted and formalizing it.
> >>>>>> Oh I thought that's already how it works, since I didn't spot anything
> >>>>>> else that would block gup_fast from falling over. I guess really would
> >>>>>> need some testcases to make sure direct i/o (that's the easiest to test)
> >>>>>> fails like we expect.
> >>>>> Yeah, IIRC the "| PFN_MAP" is the missing piece for TTM huge ptes.
> >>>>> Otherwise pmd_devmap() will not return true and since there is no
> >>>>> pmd_special() things break.
> >>>> Is that maybe the issue we have seen with amdgpu and huge pages?
> >>> Yeah, essentially when you have a hugepte inserted by ttm, and it
> >>> happens to point at system memory, then gup will work on that. And
> >>> create all kinds of havoc.
> >>>
> >>>> Apart from that I'm lost guys, that devmap and gup stuff is not
> >>>> something I have a good knowledge of apart from a one mile high view.
> >>> I'm not really better, hence would be good to do a testcase and see.
> >>> This should provoke it:
> >>> - allocate nicely aligned bo in system memory
> >>> - mmap, again nicely aligned to 2M
> >>> - do some direct io from a filesystem into that mmap, that should trigger gup
> >>> - before the gup completes free the mmap and bo so that ttm recycles
> >>> the pages, which should trip up on the elevated refcount. If you wait
> >>> until the direct io is complete, then I think nothing bad can be
> >>> observed.
> >>>
> >>> Ofc if your amdgpu+hugepte issue is something else, then maybe we have
> >>> another issue.
> >>>
> >>> Also usual caveat: I'm not an mm hacker either, so might be completely wrong.
> >>> -Daniel
> >> So I did the following quick experiment on vmwgfx, and it turns out that
> >> with it,
> >> fast gup never succeeds. Without the "| PFN_MAP", it typically succeeds.
> >>
> >> I should probably craft an RFC formalizing this.
> > Yeah I think that would be good. Maybe even more formalized if we also
> > switch over to VM_PFNMAP, since afaiui these pte flags here only stop the
> > fast gup path. And slow gup can still peek through VM_MIXEDMAP. Or
> > something like that.
> >
> > Otoh your description of when it only sometimes succeeds would indicate my
> > understanding of VM_PFNMAP vs VM_MIXEDMAP is wrong here.
>
> My understanding from reading the vmf_insert_mixed() code is that iff
> the arch has pte_special(), VM_MIXEDMAP should be harmless. But that's
> not consistent with the vm_normal_page() doc. For architectures without
> pte_special, VM_PFNMAP must be used, and then we must also block COW
> mappings.
>
> If we can get someone to commit to verifying that the potential PAT WC
> performance issue is gone with PFNMAP, I can put together a series with
> that included.
Iirc when I checked there aren't many archs without pte_special, so I
guess that's why we luck out. Hopefully.
> As for existing userspace using COW TTM mappings, I once had a couple of
> test cases to verify that it actually worked, in particular together
> with huge PMDs and PUDs where breaking COW would imply splitting those,
> but I can't think of anything else actually wanting to do that other
> than by mistake.
Yeah disallowing MAP_PRIVATE mappings would be another good thing to
lock down. Really doesn't make much sense.
-Daniel
> /Thomas
>
>
> >
> > Christian, what's your take?
> > -Daniel
> >
> >> /Thomas
> >>
> >> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c
> >> b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> >> index 6dc96cf66744..72b6fb17c984 100644
> >> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
> >> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> >> @@ -195,6 +195,7 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault
> >> *vmf,
> >> pfn_t pfnt;
> >> struct ttm_tt *ttm = bo->ttm;
> >> bool write = vmf->flags & FAULT_FLAG_WRITE;
> >> + struct dev_pagemap *pagemap;
> >>
> >> /* Fault should not cross bo boundary. */
> >> page_offset &= ~(fault_page_size - 1);
> >> @@ -210,6 +211,17 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault
> >> *vmf,
> >> if ((pfn & (fault_page_size - 1)) != 0)
> >> goto out_fallback;
> >>
> >> + /*
> >> + * Huge entries must be special, that is, marked as devmap
> >> + * with no backing device map range. If there is a backing
> >> + * range, don't insert a huge entry.
> >> + */
> >> + pagemap = get_dev_pagemap(pfn, NULL);
> >> + if (pagemap) {
> >> + put_dev_pagemap(pagemap);
> >> + goto out_fallback;
> >> + }
> >> +
> >> /* Check that memory is contiguous. */
> >> if (!bo->mem.bus.is_iomem) {
> >> for (i = 1; i < fault_page_size; ++i) {
> >> @@ -223,7 +235,7 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault
> >> *vmf,
> >> }
> >> }
> >>
> >> - pfnt = __pfn_to_pfn_t(pfn, PFN_DEV);
> >> + pfnt = __pfn_to_pfn_t(pfn, PFN_DEV | PFN_MAP);
> >> if (fault_page_size == (HPAGE_PMD_SIZE >> PAGE_SHIFT))
> >> ret = vmf_insert_pfn_pmd_prot(vmf, pfnt, pgprot, write);
> >> #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >> @@ -236,6 +248,21 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault
> >> *vmf,
> >> if (ret != VM_FAULT_NOPAGE)
> >> goto out_fallback;
> >>
> >> +#if 1
> >> + {
> >> + int npages;
> >> + struct page *page;
> >> +
> >> + npages = get_user_pages_fast_only(vmf->address, 1, 0,
> >> &page);
> >> + if (npages == 1) {
> >> + DRM_WARN("Fast gup succeeded. Bad.\n");
> >> + put_page(page);
> >> + } else {
> >> + DRM_INFO("Fast gup failed. Good.\n");
> >> + }
> >> + }
> >> +#endif
> >> +
> >> return VM_FAULT_NOPAGE;
> >> out_fallback:
> >> count_vm_event(THP_FAULT_FALLBACK);
> >>
> >>
> >>
> >>
> >>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Thu, Mar 11, 2021 at 10:02 AM Alexandre Desnoyers <alex(a)qtec.com> wrote:
>
> On Thu, Mar 11, 2021 at 2:49 PM Daniel Gomez <daniel(a)qtec.com> wrote:
> >
> > On Thu, 11 Mar 2021 at 10:09, Daniel Gomez <daniel(a)qtec.com> wrote:
> > >
> > > On Wed, 10 Mar 2021 at 18:06, Alex Deucher <alexdeucher(a)gmail.com> wrote:
> > > >
> > > > On Wed, Mar 10, 2021 at 11:37 AM Daniel Gomez <daniel(a)qtec.com> wrote:
> > > > >
> > > > > Disabling GFXOFF via the quirk list fixes a hardware lockup in
> > > > > Ryzen V1605B, RAVEN 0x1002:0x15DD rev 0x83.
> > > > >
> > > > > Signed-off-by: Daniel Gomez <daniel(a)qtec.com>
> > > > > ---
> > > > >
> > > > > This patch is a continuation of the work here:
> > > > > https://lkml.org/lkml/2021/2/3/122 where a hardware lockup was discussed and
> > > > > a dma_fence deadlock was provoked as a side effect. To reproduce the issue
> > > > > please refer to the above link.
> > > > >
> > > > > The hardware lockup was introduced in 5.6-rc1 for our particular revision as it
> > > > > wasn't part of the new blacklist. Before that, in kernel v5.5, this hardware was
> > > > > working fine without any hardware lock because the GFXOFF was actually disabled
> > > > > by the if condition for the CHIP_RAVEN case. So this patch adds the 'Radeon
> > > > > Vega Mobile Series [1002:15dd] (rev 83)' to the blacklist to disable the GFXOFF.
> > > > >
> > > > > But besides the fix, I'd like to ask where this revision comes from. Is it
> > > > > an ASIC revision or is it hardcoded in the VBIOS from our vendor? From what I
> > > > > can see, it comes from the ASIC and I wonder if somehow we can get an APU in the
> > > > > future, 'not blacklisted', with the same problem. Then, should this table only
> > > > > filter for the vendor and device and not the revision? Do you know if there are
> > > > > any revisions for the 1002:15dd validated, tested and functional?
> > > >
> > > > The pci revision id (RID) is used to specify the specific SKU within a
> > > > family. GFXOFF is supposed to be working on all raven variants. It
> > > > was tested and functional on all reference platforms and any OEM
> > > > platforms that launched with Linux support. There are a lot of
> > > > dependencies on sbios in the early raven variants (0x15dd), so it's
> > > > likely more of a specific platform issue, but there is not a good way
> > > > to detect this so we use the DID/SSID/RID as a proxy. The newer raven
> > > > variants (0x15d8) have much better GFXOFF support since they all
> > > > shipped with newer firmware and sbios.
> > >
> > > We took one of the first reference platform boards to design our
> > > custom board based on the V1605B and I assume it has one of the early 'unstable'
> > > raven variants with RID 0x83. Also, as OEM we are in control of the bios
> > > (provided by insyde) but I wasn't sure about the RID so, thanks for the
> > > clarification. Is there anything we can do with the bios to have the GFXOFF
> > > enabled and 'stable' for this particular revision? Otherwise we'd need to add
> > > the 0x83 RID to the table. Also, there is an extra ']' in the patch
> > > subject. Sorry
> > > for that. Would you need a new patch in case you accept it with the ']' removed?
> > >
> > > Good to hear that the newer raven versions have better GFXOFF support.
> >
> > Adding Alexandre Desnoyers to the loop, as he is responsible for the
> > electronics/hardware and BIOS, so he can provide more information about this.
>
> Hello everyone,
>
> We, Qtechnology, are the OEM of the hardware platform where we
> originally discovered the bug. Our platform is based on the AMD
> Dibbler V-1000 reference design, with the latest Insyde BIOS release
> available for the (now unsupported) Dibbler platform. We have the
> Insyde BIOS source code internally, so we can make some modifications
> as needed.
>
> The last test that Daniel and I performed was on a standard
> Dibbler PCB rev.B1 motherboard (NOT our platform), and using the
> corresponding latest AMD released BIOS "RDB1109GA". As Daniel wrote,
> the hardware lockup can be reproduced on the Dibbler, even if it has a
> different RID than our V1605B APU.
>
> We also have a Neousys Technology POC-515 embedded computer (V-1000,
> V1605B) in our office. The Neousys PC also uses Insyde BIOS. This
> computer is also locking up in the test.
> https://www.neousys-tech.com/en/product/application/rugged-embedded/poc-500…
>
>
> Digging into the BIOS source code, the only reference to GFXOFF is in
> the SMU and PSP firmware release notes, where some bug fixes have been
> mentioned for previous SMU/PSP releases. After a quick "git grep -i
> gfx | grep -i off", there seems to be no mention of GFXOFF in the
> Insyde UEFI (including AMD PI) code base. I would appreciate any
> information regarding BIOS modification needed to make the GFXOFF
> feature stable. As you (Alex Deucher) mentioned, it should be
> functional on all AMD Raven reference platforms.
>
It's handled by the firmwares carried by the sbios. I'm not sure what
versions off hand. Probably want to make sure you have the latest
ones. Do you have an AMD partner contact? It might be best to bring
this up with them.
Regarding the issues you are seeing, is this a general issue with all
workloads that use the GFX shader cores? Or just specific workloads?
If it's just compute workloads, you might try this patch. It may fix
the issue for you.
Alex
>
> Regards,
>
> Alexandre Desnoyers
>
>
> >
> > I've now done a test on the reference platform (dibbler) with the
> > latest bios available
> > and the hw lockup can also be reproduced with the same steps.
> >
> > For reference, I'm using mainline kernel 5.12-rc2.
> >
> > [ 5.938544] [drm] initializing kernel modesetting (RAVEN
> > 0x1002:0x15DD 0x1002:0x15DD 0xC1).
> > [ 5.939942] amdgpu: ATOM BIOS: 113-RAVEN-11
> >
> > As in the previous cases, the clocks go to 100% of usage when the hang occurs.
> >
> > However, when the gpu hangs, dmesg output displays the following:
> >
> > [ 1568.279847] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx
> > timeout, signaled seq=188, emitted seq=191
> > [ 1568.434084] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* Process
> > information: process Xorg pid 311 thread Xorg:cs0 pid 312
> > [ 1568.279847] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx
> > timeout, signaled seq=188, emitted seq=191
> > [ 1568.434084] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* Process
> > information: process Xorg pid 311 thread Xorg:cs0 pid 312
> > [ 1568.507000] amdgpu 0000:01:00.0: amdgpu: GPU reset begin!
> > [ 1628.491882] rcu: INFO: rcu_sched self-detected stall on CPU
> > [ 1628.491882] rcu: 3-...!: (665 ticks this GP)
> > idle=f9a/1/0x4000000000000000 softirq=188533/188533 fqs=15
> > [ 1628.491882] rcu: rcu_sched kthread timer wakeup didn't happen for
> > 58497 jiffies! g726761 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
> > [ 1628.491882] rcu: Possible timer handling issue on cpu=2
> > timer-softirq=55225
> > [ 1628.491882] rcu: rcu_sched kthread starved for 58500 jiffies!
> > g726761 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=2
> > [ 1628.491882] rcu: Unless rcu_sched kthread gets sufficient CPU
> > time, OOM is now expected behavior.
> > [ 1628.491882] rcu: RCU grace-period kthread stack dump:
> > [ 1628.491882] rcu: Stack dump where RCU GP kthread last ran:
> > [ 1808.518445] rcu: INFO: rcu_sched self-detected stall on CPU
> > [ 1808.518445] rcu: 3-...!: (2643 ticks this GP)
> > idle=f9a/1/0x4000000000000000 softirq=188533/188533 fqs=15
> > [ 1808.518445] rcu: rcu_sched kthread starved for 238526 jiffies!
> > g726761 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=2
> > [ 1808.518445] rcu: Unless rcu_sched kthread gets sufficient CPU
> > time, OOM is now expected behavior.
> > [ 1808.518445] rcu: RCU grace-period kthread stack dump:
> > [ 1808.518445] rcu: Stack dump where RCU GP kthread last ran:
> >
> > >
> > > Daniel
> > >
> > > >
> > > > Alex
> > > >
> > > >
> > > > >
> > > > > Logs:
> > > > > [ 27.708348] [drm] initializing kernel modesetting (RAVEN
> > > > > 0x1002:0x15DD 0x1002:0x15DD 0x83).
> > > > > [ 27.789156] amdgpu: ATOM BIOS: 113-RAVEN-115
> > > > >
> > > > > Thanks in advance,
> > > > > Daniel
> > > > >
> > > > > drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 2 ++
> > > > > 1 file changed, 2 insertions(+)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> > > > > index 65db88bb6cbc..319d4b99aec8 100644
> > > > > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> > > > > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> > > > > @@ -1243,6 +1243,8 @@ static const struct amdgpu_gfxoff_quirk amdgpu_gfxoff_quirk_list[] = {
> > > > > { 0x1002, 0x15dd, 0x103c, 0x83e7, 0xd3 },
> > > > > /* GFXOFF is unstable on C6 parts with a VBIOS 113-RAVEN-114 */
> > > > > { 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc6 },
> > > > > + /* GFXOFF provokes a hw lockup on 83 parts with a VBIOS 113-RAVEN-115 */
> > > > > + { 0x1002, 0x15dd, 0x1002, 0x15dd, 0x83 },
> > > > > { 0, 0, 0, 0, 0 },
> > > > > };
> > > > >
> > > > > --
> > > > > 2.30.1
> > > > >
> > > > > _______________________________________________
> > > > > dri-devel mailing list
> > > > > dri-devel(a)lists.freedesktop.org
> > > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
Am 01.03.21 um 10:21 schrieb Thomas Hellström (Intel):
>
> On 3/1/21 10:05 AM, Daniel Vetter wrote:
>> On Mon, Mar 01, 2021 at 09:39:53AM +0100, Thomas Hellström (Intel)
>> wrote:
>>> Hi,
>>>
>>> On 3/1/21 9:28 AM, Daniel Vetter wrote:
>>>> On Sat, Feb 27, 2021 at 9:06 AM Thomas Hellström (Intel)
>>>> <thomas_os(a)shipmail.org> wrote:
>>>>> On 2/26/21 2:28 PM, Daniel Vetter wrote:
>>>>>> So I think it stops gup. But I haven't verified at all. Would be
>>>>>> good
>>>>>> if Christian can check this with some direct io to a buffer in
>>>>>> system
>>>>>> memory.
>>>>> Hmm,
>>>>>
>>>>> Docs (again vm_normal_page() say)
>>>>>
>>>>> * VM_MIXEDMAP mappings can likewise contain memory with or
>>>>> without "struct
>>>>> * page" backing, however the difference is that _all_ pages
>>>>> with a struct
>>>>> * page (that is, those where pfn_valid is true) are refcounted
>>>>> and
>>>>> considered
>>>>> * normal pages by the VM. The disadvantage is that pages are
>>>>> refcounted
>>>>> * (which can be slower and simply not an option for some PFNMAP
>>>>> users). The
>>>>> * advantage is that we don't have to follow the strict
>>>>> linearity rule of
>>>>> * PFNMAP mappings in order to support COWable mappings.
>>>>>
>>>>> but it's true __vm_insert_mixed() ends up in the insert_pfn()
>>>>> path, so
>>>>> the above isn't really true, which makes me wonder if and in that
>>>>> case
>>>>> why there could any longer ever be a significant performance
>>>>> difference
>>>>> between MIXEDMAP and PFNMAP.
>>>> Yeah it's definitely confusing. I guess I'll hack up a patch and see
>>>> what sticks.
>>>>
>>>>> BTW regarding the TTM hugeptes, I don't think we ever landed that
>>>>> devmap
>>>>> hack, so they are (for the non-gup case) relying on
>>>>> vma_is_special_huge(). For the gup case, I think the bug is still
>>>>> there.
>>>> Maybe there's another devmap hack, but the ttm_vm_insert functions do
>>>> use PFN_DEV and all that. And I think that stops gup_fast from trying
>>>> to find the underlying page.
>>>> -Daniel
>>> Hmm perhaps it might, but I don't think so. The fix I tried out was
>>> to set
>>>
>>> PFN_DEV | PFN_MAP for huge PTEs which causes pfn_devmap() to be
>>> true, and
>>> then
>>>
>>> follow_devmap_pmd()->get_dev_pagemap() which returns NULL and
>>> gup_fast()
>>> backs off,
>>>
>>> in the end that would mean setting in stone that "if there is a huge
>>> devmap
>>> page table entry for which we haven't registered any devmap struct
>>> pages
>>> (get_dev_pagemap returns NULL), we should treat that as a "special"
>>> huge
>>> page table entry".
>>>
>>> From what I can tell, all code calling get_dev_pagemap() already
>>> does that,
>>> it's just a question of getting it accepted and formalizing it.
>> Oh I thought that's already how it works, since I didn't spot anything
>> else that would block gup_fast from falling over. I guess we'd really
>> need some testcases to make sure direct i/o (that's the easiest to test)
>> fails like we expect.
>
> Yeah, IIRC the "| PFN_MAP" is the missing piece for TTM huge ptes.
> Otherwise pmd_devmap() will not return true and since there is no
> pmd_special() things break.
Is that maybe the issue we have seen with amdgpu and huge pages?

Apart from that I'm lost, guys; the devmap and gup stuff is not
something I have good knowledge of beyond a one-mile-high view.

Christian.
>
> /Thomas
>
>
>
>> -Daniel
On Mon, Mar 01, 2021 at 09:39:53AM +0100, Thomas Hellström (Intel) wrote:
> Hi,
>
> On 3/1/21 9:28 AM, Daniel Vetter wrote:
> > On Sat, Feb 27, 2021 at 9:06 AM Thomas Hellström (Intel)
> > <thomas_os(a)shipmail.org> wrote:
> > > On 2/26/21 2:28 PM, Daniel Vetter wrote:
> > > > So I think it stops gup. But I haven't verified at all. Would be good
> > > > if Christian can check this with some direct io to a buffer in system
> > > > memory.
> > > Hmm,
> > >
> > > Docs (again vm_normal_page() say)
> > >
> > > * VM_MIXEDMAP mappings can likewise contain memory with or without "struct
> > > * page" backing, however the difference is that _all_ pages with a struct
> > > * page (that is, those where pfn_valid is true) are refcounted and considered
> > > * normal pages by the VM. The disadvantage is that pages are refcounted
> > > * (which can be slower and simply not an option for some PFNMAP users). The
> > > * advantage is that we don't have to follow the strict linearity rule of
> > > * PFNMAP mappings in order to support COWable mappings.
> > >
> > > but it's true __vm_insert_mixed() ends up in the insert_pfn() path, so
> > > the above isn't really true, which makes me wonder if and in that case
> > > why there could any longer ever be a significant performance difference
> > > between MIXEDMAP and PFNMAP.
> > Yeah it's definitely confusing. I guess I'll hack up a patch and see
> > what sticks.
> >
> > > BTW regarding the TTM hugeptes, I don't think we ever landed that devmap
> > > hack, so they are (for the non-gup case) relying on
> > > vma_is_special_huge(). For the gup case, I think the bug is still there.
> > Maybe there's another devmap hack, but the ttm_vm_insert functions do
> > use PFN_DEV and all that. And I think that stops gup_fast from trying
> > to find the underlying page.
> > -Daniel
>
> Hmm perhaps it might, but I don't think so. The fix I tried out was to set
> PFN_DEV | PFN_MAP for huge PTEs, which causes pfn_devmap() to be true, and
> then follow_devmap_pmd()->get_dev_pagemap() returns NULL, so gup_fast()
> backs off.
>
> In the end that would mean setting in stone that "if there is a huge devmap
> page table entry for which we haven't registered any devmap struct pages
> (get_dev_pagemap returns NULL), we should treat that as a "special" huge
> page table entry".
>
> From what I can tell, all code calling get_dev_pagemap() already does that,
> it's just a question of getting it accepted and formalizing it.
Oh I thought that's already how it works, since I didn't spot anything
else that would block gup_fast from falling over. I guess we'd really
need some testcases to make sure direct i/o (that's the easiest to test)
fails like we expect.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Sat, Feb 27, 2021 at 9:06 AM Thomas Hellström (Intel)
<thomas_os(a)shipmail.org> wrote:
> On 2/26/21 2:28 PM, Daniel Vetter wrote:
> > So I think it stops gup. But I haven't verified at all. Would be good
> > if Christian can check this with some direct io to a buffer in system
> > memory.
>
> Hmm,
>
> Docs (again vm_normal_page() say)
>
> * VM_MIXEDMAP mappings can likewise contain memory with or without "struct
> * page" backing, however the difference is that _all_ pages with a struct
> * page (that is, those where pfn_valid is true) are refcounted and considered
> * normal pages by the VM. The disadvantage is that pages are refcounted
> * (which can be slower and simply not an option for some PFNMAP users). The
> * advantage is that we don't have to follow the strict linearity rule of
> * PFNMAP mappings in order to support COWable mappings.
>
> but it's true __vm_insert_mixed() ends up in the insert_pfn() path, so
> the above isn't really true, which makes me wonder if and in that case
> why there could any longer ever be a significant performance difference
> between MIXEDMAP and PFNMAP.
Yeah it's definitely confusing. I guess I'll hack up a patch and see
what sticks.
> BTW regarding the TTM hugeptes, I don't think we ever landed that devmap
> hack, so they are (for the non-gup case) relying on
> vma_is_special_huge(). For the gup case, I think the bug is still there.
Maybe there's another devmap hack, but the ttm_vm_insert functions do
use PFN_DEV and all that. And I think that stops gup_fast from trying
to find the underlying page.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Fri, Feb 26, 2021 at 10:41 AM Thomas Hellström (Intel)
<thomas_os(a)shipmail.org> wrote:
>
>
> On 2/25/21 4:49 PM, Daniel Vetter wrote:
> > On Thu, Feb 25, 2021 at 11:44 AM Daniel Vetter <daniel(a)ffwll.ch> wrote:
> >> On Thu, Feb 25, 2021 at 11:28:31AM +0100, Christian König wrote:
> >>> Am 24.02.21 um 10:31 schrieb Daniel Vetter:
> >>>> On Wed, Feb 24, 2021 at 10:16 AM Thomas Hellström (Intel)
> >>>> <thomas_os(a)shipmail.org> wrote:
> >>>>> On 2/24/21 9:45 AM, Daniel Vetter wrote:
> >>>>>> On Wed, Feb 24, 2021 at 8:46 AM Thomas Hellström (Intel)
> >>>>>> <thomas_os(a)shipmail.org> wrote:
> >>>>>>> On 2/23/21 11:59 AM, Daniel Vetter wrote:
> >>>>>>>> tldr; DMA buffers aren't normal memory, expecting that you can use
> >>>>>>>> them like that (like calling get_user_pages works, or that they're
> >>>>>>>> accounting like any other normal memory) cannot be guaranteed.
> >>>>>>>>
> >>>>>>>> Since some userspace only runs on integrated devices, where all
> >>>>>>>> buffers are actually all resident system memory, there's a huge
> >>>>>>>> temptation to assume that a struct page is always present and usable
> >>>>>>>> like for any more pagecache backed mmap. This has the potential to
> >>>>>>>> result in a uapi nightmare.
> >>>>>>>>
> >>>>>>>> To stop this gap require that DMA buffer mmaps are VM_PFNMAP, which
> >>>>>>>> blocks get_user_pages and all the other struct page based
> >>>>>>>> infrastructure for everyone. In spirit this is the uapi counterpart to
> >>>>>>>> the kernel-internal CONFIG_DMABUF_DEBUG.
> >>>>>>>>
> >>>>>>>> Motivated by a recent patch which wanted to switch the system dma-buf
> >>>>>>>> heap to vm_insert_page instead of vm_insert_pfn.
> >>>>>>>>
> >>>>>>>> v2:
> >>>>>>>>
> >>>>>>>> Jason brought up that we also want to guarantee that all ptes have the
> >>>>>>>> pte_special flag set, to catch fast get_user_pages (on architectures
> >>>>>>>> that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
> >>>>>>>> still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
> >>>>>>>>
> >>>>>>>> From auditing the various functions to insert pfn pte entries
> >>>>>>>> (vm_insert_pfn_prot, remap_pfn_range and all its callers like
> >>>>>>>> dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so
> >>>>>>>> this should be the correct flag to check for.
> >>>>>>>>
> >>>>>>> If we require VM_PFNMAP, for ordinary page mappings, we also need to
> >>>>>>> disallow COW mappings, since it will not work on architectures that
> >>>>>>> don't have CONFIG_ARCH_HAS_PTE_SPECIAL, (see the docs for vm_normal_page()).
> >>>>>> Hm I figured everyone just uses MAP_SHARED for buffer objects since
> >>>>>> COW really makes absolutely no sense. How would we enforce this?
> >>>>> Perhaps returning -EINVAL on is_cow_mapping() at mmap time. Either that
> >>>>> or allowing MIXEDMAP.
> >>>>>
> >>>>>>> Also worth noting is the comment in ttm_bo_mmap_vma_setup() with
> >>>>>>> possible performance implications with x86 + PAT + VM_PFNMAP + normal
> >>>>>>> pages. That's a very old comment, though, and might not be valid anymore.
> >>>>>> I think that's why ttm has a page cache for these, because it indeed
> >>>>>> sucks. The PAT changes on pages are rather expensive.
> >>>>> IIRC the page cache was implemented because of the slowness of the
> >>>>> caching mode transition itself, more specifically the wbinvd() call +
> >>>>> global TLB flush.
> >>> Yes, exactly that. The global TLB flush is what really breaks our neck here
> >>> from a performance perspective.
> >>>
> >>>>>> There is still an issue for iomem mappings, because the PAT validation
> >>>>>> does a linear walk of the resource tree (lol) for every vm_insert_pfn.
> >>>>>> But for i915 at least this is fixed by using the io_mapping
> >>>>>> infrastructure, which does the PAT reservation only once when you set
> >>>>>> up the mapping area at driver load.
> >>>>> Yes, I guess that was the issue that the comment describes, but the
> >>>>> issue wasn't there with vm_insert_mixed() + VM_MIXEDMAP.
> >>>>>
> >>>>>> Also TTM uses VM_PFNMAP right now for everything, so it can't be a
> >>>>>> problem that hurts much :-)
> >>>>> Hmm, both 5.11 and drm-tip appears to still use MIXEDMAP?
> >>>>>
> >>>>> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/ttm/ttm_bo_v…
> >>>> Uh that's bad, because mixed maps pointing at struct page wont stop
> >>>> gup. At least afaik.
> >>> Hui? I'm pretty sure MIXEDMAP stops gup as well. Otherwise we would have
> >>> already seen tons of problems with the page cache.
> >> On any architecture which has CONFIG_ARCH_HAS_PTE_SPECIAL vm_insert_mixed
> >> boils down to vm_insert_pfn wrt gup. And special pte stops gup fast path.
> >>
> >> But if you don't have VM_IO or VM_PFNMAP set, then I'm not seeing how
> >> you're stopping gup slow path. See check_vma_flags() in mm/gup.c.
> >>
> >> Also if you don't have CONFIG_ARCH_HAS_PTE_SPECIAL then I don't think
> >> vm_insert_mixed even works on iomem pfns. There's the devmap exception,
> >> but we're not devmap. Worse ttm abuses some accidental codepath to smuggle
> >> in hugepte support by intentionally not being devmap.
> >>
> >> So I'm really not sure this works as we think it should. Maybe good to do
> >> a quick test program on amdgpu with a buffer in system memory only and try
> >> to do direct io into it. If it works, you have a problem, and a bad one.
> > That's probably impossible, since a quick git grep shows that pretty
> > much anything reasonable has special ptes: arc, arm, arm64, powerpc,
> > riscv, s390, sh, sparc, x86. I don't think you'll have a platform
> > where you can plug an amdgpu in and actually exercise the bug :-)
>
> Hm. AFAIK _insert_mixed() doesn't set PTE_SPECIAL on system pages, so I
> don't see what should be stopping gup to those?
If you have an arch with pte special we use insert_pfn(), which afaict
will use pte_mkspecial for the !devmap case. And ttm isn't devmap
(otherwise our hugepte abuse of devmap hugeptes would go rather
wrong).
So I think it stops gup. But I haven't verified at all. Would be good
if Christian can check this with some direct io to a buffer in system
memory.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch