Device exclusive page table entries are used to prevent CPU access to a page whilst it is being accessed from a device. Typically this is used to implement atomic operations when the underlying bus does not support atomic access. When a CPU thread encounters a device exclusive entry it locks the page and restores the original entry after calling mmu notifiers to signal drivers that exclusive access is no longer available.
The device exclusive entry holds a reference to the page making it safe to access the struct page whilst the entry is present. However the fault handling code does not hold the PTL when taking the page lock. This means if there are multiple threads faulting concurrently on the device exclusive entry one will remove the entry whilst others will wait on the page lock without holding a reference.
This can lead to threads locking or waiting on a page with a zero refcount. Whilst mmap_lock prevents the pages getting freed via munmap() they may still be freed by a migration. This leads to warnings such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked when the refcount drops to zero. Note that during removal of the device exclusive entry the PTE is currently re-checked under the PTL so no further bad page accesses occur once it is locked.
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Cc: stable@vger.kernel.org
---
 mm/memory.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 8c8420934d60..b499bd283d8e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3623,8 +3623,19 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
+	/*
+	 * We need a page reference to lock the page because we don't
+	 * hold the PTL so a racing thread can remove the
+	 * device-exclusive entry and unmap the page. If the page is
+	 * free the entry must have been removed already.
+	 */
+	if (!get_page_unless_zero(vmf->page))
+		return 0;
+
+	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+		put_page(vmf->page);
 		return VM_FAULT_RETRY;
+	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
 				vma->vm_mm, vmf->address & PAGE_MASK,
 				(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3637,6 +3648,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	put_page(vmf->page);
 
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;
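The fix hinges on get_page_unless_zero(), the kernel's "take a reference only if the object is not already free" primitive, which is effectively atomic_inc_not_zero() on the page refcount. A minimal userspace C sketch of the same idea; the fake_* names are illustrative, not kernel APIs:

	#include <stdatomic.h>
	#include <stdbool.h>

	struct fake_page {
		atomic_int refcount;		/* 0 means the page is free */
	};

	/* Models atomic_inc_not_zero(): succeed only while refcount > 0. */
	static bool fake_get_page_unless_zero(struct fake_page *p)
	{
		int old = atomic_load(&p->refcount);

		while (old != 0) {
			/* On failure the CAS reloads 'old' with the current value. */
			if (atomic_compare_exchange_weak(&p->refcount, &old,
							 old + 1))
				return true;	/* we now hold a reference */
		}
		return false;	/* already freed: the entry must be gone too */
	}

A thread that sees false here knows some other thread has already removed the device exclusive entry, so it can simply return.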
On 3/27/23 19:14, Alistair Popple wrote:
Device exclusive page table entries are used to prevent CPU access to a page whilst it is being accessed from a device. Typically this is used to implement atomic operations when the underlying bus does not support atomic access. When a CPU thread encounters a device exclusive entry it locks the page and restores the original entry after calling mmu notifiers to signal drivers that exclusive access is no longer available.
The device exclusive entry holds a reference to the page making it safe to access the struct page whilst the entry is present. However the fault handling code does not hold the PTL when taking the page lock. This means if there are multiple threads faulting concurrently on the device exclusive entry one will remove the entry whilst others will wait on the page lock without holding a reference.
This can lead to threads locking or waiting on a page with a zero refcount. Whilst mmap_lock prevents the pages getting freed via munmap() they may still be freed by a migration. This leads to
An important point! So I'm glad that you wrote it here clearly.
warnings such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked when the refcount drops to zero. Note that during removal of the device exclusive entry the PTE is currently re-checked under the PTL so no further bad page accesses occur once it is locked.
Maybe change that last sentence to something like this:
"Fix this by taking a page reference before starting to remove a device exclusive pte. This is done safely in a lock-free way by first getting a reference via get_page_unless_zero(), and then re-checking after acquiring the PTL, that the page is the correct one."
?
...well, maybe that's not all that much help. But it does at least provide the traditional description of what the patch *does*, at the end of the commit description. But please treat this as just an optional suggestion.
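The suggested wording describes a classic two-step shape: a lock-free speculative grab, then re-validation under the lock, backing out if another thread won the race. A minimal pthreads sketch of that shape, assuming a plain mutex stands in for the PTL (fake_pte and remove_entry_once are illustrative names, not kernel code):

	#include <pthread.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct fake_pte {
		pthread_mutex_t ptl;	/* stands in for the page table lock */
		void *entry;		/* non-NULL while the exclusive entry exists */
	};

	/*
	 * Called after the speculative reference has been taken; removes
	 * the entry exactly once even if several threads race here.
	 */
	static bool remove_entry_once(struct fake_pte *pte, void *expected)
	{
		bool removed = false;

		pthread_mutex_lock(&pte->ptl);
		if (pte->entry == expected) {	/* re-check under the lock */
			pte->entry = NULL;
			removed = true;
		}
		pthread_mutex_unlock(&pte->ptl);
		return removed;
	}

Losers of the race see a changed entry, drop their reference and retry the fault.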
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Cc: stable@vger.kernel.org

 mm/memory.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)
On the patch process, I see that this applies to linux-stable's 6.1.y branch. I'd suggest two things:
1) Normally, what I've seen done is to post against either the current top of tree linux.git, or else against one of the mm-stable branches. And then after it's accepted, create a version for -stable.
2) Either indicate in the cover letter (or after the ---) which branch or commit this applies to, or let git format-patch help by passing in the base commit via --base. That will save "some people" (people like me) from having to guess, if they want to apply the patch locally and poke around at it.
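For example, assuming the series had been written on top of v6.3-rc4 (an illustrative base, not necessarily the one used here), this records a base-commit trailer in the generated patch:

	git format-patch --base=v6.3-rc4 -1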
Anyway, all of the above are just little documentation and process suggestions, but the patch itself looks great, so please feel free to add:
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
thanks,
On Mon, 27 Mar 2023 23:25:49 -0700 John Hubbard <jhubbard@nvidia.com> wrote:
On the patch process, I see that this applies to linux-stable's 6.1.y branch. I'd suggest two things:
1) Normally, what I've seen done is to post against either the current top of tree linux.git, or else against one of the mm-stable branches. And then after it's accepted, create a version for -stable.
Yup. I had to jiggle the patch a bit because mmu_notifier_range_init_owner()'s arguments have changed. Once this hits mainline, the -stable maintainers will probably ask for a version which suits the relevant kernel version(s).
John Hubbard <jhubbard@nvidia.com> writes:
warnings such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked when the refcount drops to zero. Note that during removal of the device exclusive entry the PTE is currently re-checked under the PTL so no further bad page accesses occur once it is locked.
Maybe change that last sentence to something like this:
"Fix this by taking a page reference before starting to remove a device exclusive pte. This is done safely in a lock-free way by first getting a reference via get_page_unless_zero(), and then re-checking after acquiring the PTL, that the page is the correct one."
?
...well, maybe that's not all that much help. But it does at least provide the traditional description of what the patch *does*, at the end of the commit description. But please treat this as just an optional suggestion.
My wording was probably a little awkward. The intent was to point out that the existing code subsequent to taking the page lock was already correct/safe. I figured the patch itself does a pretty good job of describing the actual fix, so am inclined to leave it.
Andrew Morton <akpm@linux-foundation.org> writes:
On Mon, 27 Mar 2023 23:25:49 -0700 John Hubbard <jhubbard@nvidia.com> wrote:
On the patch process, I see that this applies to linux-stable's 6.1.y branch. I'd suggest two things:
1) Normally, what I've seen done is to post against either the current top of tree linux.git, or else against one of the mm-stable branches. And then after it's accepted, create a version for -stable.
Yup. I had to jiggle the patch a bit because mmu_notifier_range_init_owner()'s arguments have changed. Once this hits mainline, the -stable maintainers will probably ask for a version which suits the relevant kernel version(s).
Thanks Andrew. That's my bad, I was developing on top of v6.1 and neglected to rebase. Happy to provide versions for -stable as required.
On 3/27/23 19:14, Alistair Popple wrote:
Device exclusive page table entries are used to prevent CPU access to a page whilst it is being accessed from a device. Typically this is used to implement atomic operations when the underlying bus does not support atomic access. When a CPU thread encounters a device exclusive entry it locks the page and restores the original entry after calling mmu notifiers to signal drivers that exclusive access is no longer available.
The device exclusive entry holds a reference to the page making it safe to access the struct page whilst the entry is present. However the fault handling code does not hold the PTL when taking the page lock. This means if there are multiple threads faulting concurrently on the device exclusive entry one will remove the entry whilst others will wait on the page lock without holding a reference.
This can lead to threads locking or waiting on a page with a zero refcount. Whilst mmap_lock prevents the pages getting freed via munmap() they may still be freed by a migration. This leads to warnings such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked when the refcount drops to zero. Note that during removal of the device exclusive entry the PTE is currently re-checked under the PTL so no further bad page accesses occur once it is locked.
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Cc: stable@vger.kernel.org
Thanks for finding this. You can add:

Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
On Tue, Mar 28, 2023 at 01:14:34PM +1100, Alistair Popple wrote:
+++ b/mm/memory.c
@@ -3623,8 +3623,19 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
+	/*
+	 * We need a page reference to lock the page because we don't
+	 * hold the PTL so a racing thread can remove the
+	 * device-exclusive entry and unmap the page. If the page is
+	 * free the entry must have been removed already.
+	 */
+	if (!get_page_unless_zero(vmf->page))
+		return 0;
From a folio point of view: what the hell are you doing here? Tail pages don't have individual refcounts; all the refcounts are actually taken on the folio. So this should be:
	if (!folio_try_get(folio))
		return 0;
(you can fix up the comment yourself)
+	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+		put_page(vmf->page);

		folio_put(folio);

+		return VM_FAULT_RETRY;
+	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
 				vma->vm_mm, vmf->address & PAGE_MASK,
 				(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3637,6 +3648,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)

 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	put_page(vmf->page);

	folio_put(folio);
There, I just saved you 3 calls to compound_head(), saving roughly 150 bytes of kernel text.
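The saving comes from tail-page resolution: every page-based helper must first map a possible tail page to its head page, whereas folio helpers start from something that can never be a tail. A trimmed-down sketch of what compound_head() does; the real definition lives in include/linux/page-flags.h and this struct page is reduced to the one relevant field:

	struct page {
		unsigned long compound_head;	/* low bit set: this is a tail page */
		/* ... many other fields ... */
	};

	static inline struct page *compound_head(struct page *page)
	{
		unsigned long head = page->compound_head;	/* READ_ONCE() in the kernel */

		if (head & 1)	/* tail: rest of the word points at the head */
			return (struct page *)(head - 1);
		return page;
	}

put_page() pays this lookup to find the folio; folio_put() starts from the folio, so the lookup disappears.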
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;
--
2.39.2
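For reference, the folio_try_get() suggested above is the folio analogue of get_page_unless_zero(): it takes a reference only if the folio refcount is non-zero. Roughly, per include/linux/page_ref.h:

	static inline bool folio_try_get(struct folio *folio)
	{
		return folio_ref_add_unless(folio, 1, 0);
	}

folio_ref_add_unless() is in turn a wrapper around atomic_add_unless() on the folio's refcount, so the semantics match the page version exactly, minus the compound_head() call.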
On 3/28/23 20:16, Matthew Wilcox wrote: ...
+	if (!get_page_unless_zero(vmf->page))
+		return 0;
From a folio point of view: what the hell are you doing here? Tail pages don't have individual refcounts; all the refcounts are actually taken on the folio.
ohh, and I really should have caught that too. I plead spending too much time recently in a somewhat more driver-centric mindset, and failing to mentally shift gears properly for this case.
Sorry for missing that!
thanks,
John Hubbard <jhubbard@nvidia.com> writes:
On 3/28/23 20:16, Matthew Wilcox wrote: ...
+	if (!get_page_unless_zero(vmf->page))
+		return 0;
From a folio point of view: what the hell are you doing here? Tail pages don't have individual refcounts; all the refcounts are actually taken on the folio.
I had stuck with using the page because none of this stuff supports compound pages yet, so we shouldn't see a tail page here. But point taken, I admit I need to find some time to get a deeper internalised understanding of folios than just s/page/folio.
ohh, and I really should have caught that too. I plead spending too much time recently in a somewhat more driver-centric mindset, and failing to mentally shift gears properly for this case.
Sorry for missing that!
thanks,
Alistair Popple <apopple@nvidia.com> writes:
John Hubbard <jhubbard@nvidia.com> writes:
On 3/28/23 20:16, Matthew Wilcox wrote: ...
+	if (!get_page_unless_zero(vmf->page))
+		return 0;
From a folio point of view: what the hell are you doing here? Tail pages don't have individual refcounts; all the refcounts are actually taken on the folio.
I had stuck with using the page because none of this stuff supports compound pages yet, so we shouldn't see a tail page here. But point taken, I admit I need to find some time to get a deeper internalised understanding of folios than just s/page/folio.
And looking at this while updating it made the mixed usage of page/folio look really weird/wrong, so thanks for pointing that out. Will send an update.