We received a report that the copy-on-write issue reported by Jann Horn in https://bugs.chromium.org/p/project-zero/issues/detail?id=2045 is still reproducible on 4.14 and 4.19 kernels (the first issue in that report, with the reproducer in vmsplice.c). I confirmed this, and also confirmed that the issue is not reproducible on the 5.10 kernel. I tracked the fix down to the following patch, introduced in 5.9, which changes the do_wp_page() logic:
09854ba94c6a ("mm: do_wp_page() simplification")
I backported this patch (#2 in the series) along with two prerequisite patches (#1 and #4) that keep the backports clean, and two follow-up fixes to the main patch (#3 and #5). I had to skip the following fix:
feb889fb40fa ("mm: don't put pinned pages into the swap cache")
because it uses page_maybe_dma_pinned(), which does not exist in earlier kernels. Because pin_user_pages() does not exist there either, I *think* we can safely skip this fix on older kernels, but I would appreciate it if someone could confirm that claim.
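For context, here is a simplified sketch of the helper the skipped fix depends on (my condensation, not the exact 5.9 code, which also handles compound-page pin counts):

/*
 * Simplified sketch of page_maybe_dma_pinned() from 5.9+ <linux/mm.h>.
 * pin_user_pages() adds GUP_PIN_COUNTING_BIAS (1024) to the refcount
 * per pin, so a large refcount hints the page may be DMA-pinned.
 * 4.14/4.19 have no pin_user_pages(), so no page can ever carry this
 * bias there, which is why skipping the fix looks safe to me.
 */
static inline bool page_maybe_dma_pinned(struct page *page)
{
        return page_ref_count(compound_head(page)) >= GUP_PIN_COUNTING_BIAS;
}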
The patchset cleanly applies over: stable linux-4.19.y, tag: v4.19.184
Note: 4.14 and 4.19 backports are very similar, so while I backported only to these two versions I think backports for other versions can be done easily.
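For reviewers who want to retest, this is a minimal sketch of the reproducer pattern (my own condensation, not Jann Horn's vmsplice.c verbatim): the child takes a GUP reference on a CoW-shared page via vmsplice() and exits, and on an affected kernel the parent's later write reuses the still-referenced page instead of copying, so the pipe observes data written after the fork.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        int pipefd[2];
        char buf[7] = "";

        pipe(pipefd);
        memset(page, 'A', 4096);

        if (fork() == 0) {
                struct iovec iov = { .iov_base = page, .iov_len = 4096 };
                vmsplice(pipefd[1], &iov, 1, 0); /* pipe now references the page */
                _exit(0);                        /* parent's mapcount drops to 1 */
        }
        wait(NULL);

        memcpy(page, "secret", 6);  /* vulnerable kernel: reuses the referenced page */
        read(pipefd[0], buf, 6);
        printf("%s\n", buf);        /* "secret" if vulnerable, "AAAAAA" if fixed */
        return 0;
}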
Kirill Tkhai (1):
  mm: reuse only-pte-mapped KSM page in do_wp_page()

Linus Torvalds (2):
  mm: do_wp_page() simplification
  mm: fix misplaced unlock_page in do_wp_page()

Nadav Amit (1):
  mm/userfaultfd: fix memory corruption due to writeprotect

Shaohua Li (1):
  userfaultfd: wp: add helper for writeprotect check

 include/linux/ksm.h           |  7 ++++
 include/linux/userfaultfd_k.h | 10 ++++++
 mm/ksm.c                      | 30 ++++++++++++++++--
 mm/memory.c                   | 60 ++++++++++++++++-------------------
 4 files changed, 73 insertions(+), 34 deletions(-)
From: Kirill Tkhai <ktkhai@virtuozzo.com>
Add an optimization for KSM pages almost in the same way that we have for ordinary anonymous pages. If there is a write fault in a page which is mapped by only a single pte and is not in the swap cache, the page may be reused without copying its content.
[ Note that we do not consider PageSwapCache() pages, at least for now, since we don't want to complicate __get_ksm_page(), which has a nice optimization based on this (for the migration case). Currently it spins on PageSwapCache() pages, waiting for their counters to be unfrozen (i.e., for migration to finish). But we don't want it to also spin on swap cache pages that we are trying to reuse, since the probability of reusing them is not very high. So, for now, we do not consider PageSwapCache() pages at all. ]
So in reuse_ksm_page() we check for 1) PageSwapCache() and 2) page_stable_node(), to skip a page which KSM is currently trying to link to the stable tree. Then we do page_ref_freeze() to prohibit KSM from merging one more page into the page we are reusing. After that, nobody can refer to the page being reused: KSM skips !PageSwapCache() pages with zero refcount, and the protection against all other participants is the same as for reused ordinary anon pages: pte lock, page lock and mmap_sem.
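For readers unfamiliar with the freeze protocol, this is roughly what the page_ref_freeze()/page_ref_unfreeze() pair from include/linux/page_ref.h does (simplified; the real unfreeze uses a release-ordered store):

/*
 * Freezing succeeds only if we hold the sole reference; the refcount
 * is then 0, so get_ksm_page()'s get_page_unless_zero() cannot grab
 * the page while we switch its anon_vma and index.
 */
static inline int page_ref_freeze(struct page *page, int count)
{
        return likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
}

static inline void page_ref_unfreeze(struct page *page, int count)
{
        VM_BUG_ON_PAGE(page_count(page) != 0, page);
        atomic_set(&page->_refcount, count);
}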
[akpm@linux-foundation.org: replace BUG_ON()s with WARN_ON()s]
Link: http://lkml.kernel.org/r/154471491016.31352.1168978849911555609.stgit@localh...
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 include/linux/ksm.h |  7 +++++++
 mm/ksm.c            | 30 ++++++++++++++++++++++++++++--
 mm/memory.c         | 16 ++++++++++++++--
 3 files changed, 49 insertions(+), 4 deletions(-)
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 161e8164abcf..e48b1e453ff5 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -53,6 +53,8 @@ struct page *ksm_might_need_to_copy(struct page *page,
 
 void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc);
 void ksm_migrate_page(struct page *newpage, struct page *oldpage);
+bool reuse_ksm_page(struct page *page,
+                        struct vm_area_struct *vma, unsigned long address);
 
 #else  /* !CONFIG_KSM */
 
@@ -86,6 +88,11 @@ static inline void rmap_walk_ksm(struct page *page,
 static inline void ksm_migrate_page(struct page *newpage, struct page *oldpage)
 {
 }
+static inline bool reuse_ksm_page(struct page *page,
+                        struct vm_area_struct *vma, unsigned long address)
+{
+        return false;
+}
 #endif /* CONFIG_MMU */
 #endif /* !CONFIG_KSM */
diff --git a/mm/ksm.c b/mm/ksm.c
index d021bcf94c41..c4e95ca65d62 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -705,8 +705,9 @@ static struct page *get_ksm_page(struct stable_node *stable_node, bool lock_it)
 	 * case this node is no longer referenced, and should be freed;
 	 * however, it might mean that the page is under page_ref_freeze().
 	 * The __remove_mapping() case is easy, again the node is now stale;
-	 * but if page is swapcache in migrate_page_move_mapping(), it might
-	 * still be our page, in which case it's essential to keep the node.
+	 * the same is in reuse_ksm_page() case; but if page is swapcache
+	 * in migrate_page_move_mapping(), it might still be our page,
+	 * in which case it's essential to keep the node.
 	 */
 	while (!get_page_unless_zero(page)) {
 		/*
@@ -2648,6 +2649,31 @@ void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
 		goto again;
 }
 
+bool reuse_ksm_page(struct page *page,
+		    struct vm_area_struct *vma,
+		    unsigned long address)
+{
+#ifdef CONFIG_DEBUG_VM
+	if (WARN_ON(is_zero_pfn(page_to_pfn(page))) ||
+			WARN_ON(!page_mapped(page)) ||
+			WARN_ON(!PageLocked(page))) {
+		dump_page(page, "reuse_ksm_page");
+		return false;
+	}
+#endif
+
+	if (PageSwapCache(page) || !page_stable_node(page))
+		return false;
+	/* Prohibit parallel get_ksm_page() */
+	if (!page_ref_freeze(page, 1))
+		return false;
+
+	page_move_anon_rmap(page, vma);
+	page->index = linear_page_index(vma, address);
+	page_ref_unfreeze(page, 1);
+
+	return true;
+}
 #ifdef CONFIG_MIGRATION
 void ksm_migrate_page(struct page *newpage, struct page *oldpage)
 {
diff --git a/mm/memory.c b/mm/memory.c
index c1a05c2484b0..3874acce1472 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2846,8 +2846,11 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 	 * Take out anonymous pages first, anonymous shared vmas are
 	 * not dirty accountable.
 	 */
-	if (PageAnon(vmf->page) && !PageKsm(vmf->page)) {
+	if (PageAnon(vmf->page)) {
 		int total_map_swapcount;
+		if (PageKsm(vmf->page) && (PageSwapCache(vmf->page) ||
+					   page_count(vmf->page) != 1))
+			goto copy;
 		if (!trylock_page(vmf->page)) {
 			get_page(vmf->page);
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2862,6 +2865,15 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 			}
 			put_page(vmf->page);
 		}
+		if (PageKsm(vmf->page)) {
+			bool reused = reuse_ksm_page(vmf->page, vmf->vma,
+						     vmf->address);
+			unlock_page(vmf->page);
+			if (!reused)
+				goto copy;
+			wp_page_reuse(vmf);
+			return VM_FAULT_WRITE;
+		}
 		if (reuse_swap_page(vmf->page, &total_map_swapcount)) {
 			if (total_map_swapcount == 1) {
 				/*
@@ -2882,7 +2894,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 					(VM_WRITE|VM_SHARED))) {
 		return wp_page_shared(vmf);
 	}
-
+copy:
 	/*
 	 * Ok, we need to copy. Oh, well..
 	 */
From: Linus Torvalds <torvalds@linux-foundation.org>
How about we just make sure we're the only possible valid user of the page before we bother to reuse it?
Simplify, simplify, simplify.
And get rid of the nasty serialization on the page lock at the same time.
[peterx: add subject prefix]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 mm/memory.c | 58 ++++++++++++++++-------------------------------------
 1 file changed, 17 insertions(+), 41 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c index 3874acce1472..d95a4573a273 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2847,49 +2847,25 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) * not dirty accountable. */ if (PageAnon(vmf->page)) { - int total_map_swapcount; - if (PageKsm(vmf->page) && (PageSwapCache(vmf->page) || - page_count(vmf->page) != 1)) + struct page *page = vmf->page; + + /* PageKsm() doesn't necessarily raise the page refcount */ + if (PageKsm(page) || page_count(page) != 1) + goto copy; + if (!trylock_page(page)) + goto copy; + if (PageKsm(page) || page_mapcount(page) != 1 || page_count(page) != 1) { + unlock_page(page); goto copy; - if (!trylock_page(vmf->page)) { - get_page(vmf->page); - pte_unmap_unlock(vmf->pte, vmf->ptl); - lock_page(vmf->page); - vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, - vmf->address, &vmf->ptl); - if (!pte_same(*vmf->pte, vmf->orig_pte)) { - unlock_page(vmf->page); - pte_unmap_unlock(vmf->pte, vmf->ptl); - put_page(vmf->page); - return 0; - } - put_page(vmf->page); - } - if (PageKsm(vmf->page)) { - bool reused = reuse_ksm_page(vmf->page, vmf->vma, - vmf->address); - unlock_page(vmf->page); - if (!reused) - goto copy; - wp_page_reuse(vmf); - return VM_FAULT_WRITE; - } - if (reuse_swap_page(vmf->page, &total_map_swapcount)) { - if (total_map_swapcount == 1) { - /* - * The page is all ours. Move it to - * our anon_vma so the rmap code will - * not search our parent or siblings. - * Protected against the rmap code by - * the page lock. - */ - page_move_anon_rmap(vmf->page, vma); - } - unlock_page(vmf->page); - wp_page_reuse(vmf); - return VM_FAULT_WRITE; } - unlock_page(vmf->page); + /* + * Ok, we've got the only map reference, and the only + * page count reference, and the page is locked, + * it's dark out, and we're wearing sunglasses. Hit it. + */ + wp_page_reuse(vmf); + unlock_page(page); + return VM_FAULT_WRITE; } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) == (VM_WRITE|VM_SHARED))) { return wp_page_shared(vmf);
From: Linus Torvalds <torvalds@linux-foundation.org>
Commit 09854ba94c6a ("mm: do_wp_page() simplification") reorganized all the code around the page re-use vs copy, but in the process also moved the final unlock_page() around to after the wp_page_reuse() call.
That normally doesn't matter - but it means that the unlock_page() is now done after releasing the page table lock. Again, not a big deal, you'd think.
But it turns out that it's very wrong indeed, because once we've released the page table lock, we've basically lost our only reference to the page - the page tables - and it could now be free'd at any time. We do hold the mmap_sem, so no actual unmap() can happen, but madvise can come in and a MADV_DONTNEED will zap the page range - and free the page.
So now the page may be free'd just as we're unlocking it, which in turn will usually trigger a "Bad page state" error in the freeing path. To make matters more confusing, by the time the debug code prints out the page state, the unlock has typically completed and everything looks fine again.
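Schematically, the race looks like this (a hypothetical interleaving in the style of the diagrams later in this series, not a trace from the report):

	cpu0					cpu1
	----					----
	do_wp_page()
	  wp_page_reuse()
	    pte_unmap_unlock()
	    [ page table lock dropped ]
						madvise(MADV_DONTNEED)
						  zap_page_range()
						  [ page freed ]
	  unlock_page()
	  [ modifies a freed page -> "Bad page state" ]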
This all doesn't happen in any normal situations, but it does trigger with the dirtyc0w_child LTP test. And it seems to trigger much more easily (but not exclusively) on s390 than elsewhere, probably because s390 doesn't do the "batch pages up for freeing after the TLB flush" that gives the unlock_page() more time to complete and makes the race harder to hit.
Fixes: 09854ba94c6a ("mm: do_wp_page() simplification")
Link: https://lore.kernel.org/lkml/a46e9bbef2ed4e17778f5615e818526ef848d791.camel@...
Link: https://lore.kernel.org/linux-mm/c41149a8-211e-390b-af1d-d5eee690fecb@linux....
Reported-by: Qian Cai <cai@redhat.com>
Reported-by: Alex Shi <alex.shi@linux.alibaba.com>
Bisected-and-analyzed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c index d95a4573a273..656d90a75cf8 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2863,8 +2863,8 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) * page count reference, and the page is locked, * it's dark out, and we're wearing sunglasses. Hit it. */ - wp_page_reuse(vmf); unlock_page(page); + wp_page_reuse(vmf); return VM_FAULT_WRITE; } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) == (VM_WRITE|VM_SHARED))) {
From: Shaohua Li <shli@fb.com>
Patch series "userfaultfd: write protection support", v6.
Overview
========
The uffd-wp work was initiated by Shaohua Li [1] and later continued by Andrea [2]. This series is based upon Andrea's latest userfaultfd tree, and it is a continuation of work from both Shaohua and Andrea. Many of the follow-up ideas come from Andrea too.
Besides the old MISSING register mode of userfaultfd, the new uffd-wp support provides an alternative register mode called UFFDIO_REGISTER_MODE_WP that can be used to listen not only for missing page faults but also for write-protection page faults; the two modes can even be registered together. At the same time, the new feature provides a new userfaultfd ioctl called UFFDIO_WRITEPROTECT, which allows userspace to write-protect a range of memory or fix up the write permission of faulted pages.
Please refer to the document patch "userfaultfd: wp: UFFDIO_REGISTER_MODE_WP documentation update" for more information on the new interface and what it can do.
The major workflow of an uffd-wp program should be (a minimal sketch of the ioctl sequence follows the list):
1. Register a memory region with WP mode using UFFDIO_REGISTER_MODE_WP
2. Write protect part of the whole registered region using UFFDIO_WRITEPROTECT, passing in UFFDIO_WRITEPROTECT_MODE_WP to show that we want to write protect the range.
3. Start a working thread that modifies the protected pages, meanwhile listening to UFFD messages.
4. When a write is detected on the protected range, a page fault happens; a UFFD message will be generated and reported to the page-fault-handling thread.
5. The page fault handler thread resolves the page fault using the new UFFDIO_WRITEPROTECT ioctl, this time passing in !UFFDIO_WRITEPROTECT_MODE_WP instead, showing that we want to restore the write permission. Before this operation, the fault handler thread can do anything it wants, e.g., dump the page to persistent storage.
6. The worker thread will continue running with the correctly applied write permission from step 5.
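To make steps 1, 2 and 5 concrete, here is a minimal single-threaded sketch against the uAPI this series adds (error handling elided; a real program would read uffd_msg events from the descriptor in a separate thread, as in steps 3-4):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

int main(void)
{
        long psz = sysconf(_SC_PAGESIZE);
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        char *area = mmap(NULL, psz, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);

        /* step 1: register the region in WP mode */
        struct uffdio_register reg = {
                .range = { .start = (unsigned long)area, .len = psz },
                .mode  = UFFDIO_REGISTER_MODE_WP,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        /* step 2: write-protect the range */
        struct uffdio_writeprotect wp = {
                .range = { .start = (unsigned long)area, .len = psz },
                .mode  = UFFDIO_WRITEPROTECT_MODE_WP,
        };
        ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);

        /* step 5: resolve by dropping the protection again
         * (mode without UFFDIO_WRITEPROTECT_MODE_WP) */
        wp.mode = 0;
        ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);

        area[0] = 1;    /* step 6: the write now proceeds */
        return 0;
}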
There are already two projects based on this new userfaultfd feature.

QEMU Live Snapshot: The project provides a way to allow the QEMU hypervisor to take snapshots of VMs without stopping the VM [3].
LLNL umap library: The project provides a mmap-like interface and "allow to have an application specific buffer of pages cached from a large file, i.e. out-of-core execution using memory map" [4][5].
Before posting the patchset, this series was smoke tested against QEMU live snapshot and the LLNL umap library (by doing parallel quicksort using 128 sorting threads + 80 uffd servicing threads). My sincere thanks to Marty Mcfadden and Denis Plotnikov for the help along the way.
TODO
====

- hugetlbfs/shmem support
- performance
- more architectures
- cooperate with mprotect()-allowed processes (???)
- ...
References
==========

[1] https://lwn.net/Articles/666187/
[2] https://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git/log/?h=userfau...
[3] https://github.com/denis-plotnikov/qemu/commits/background-snapshot-kvm
[4] https://github.com/LLNL/umap
[5] https://llnl-umap.readthedocs.io/en/develop/
[6] https://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git/commit/?h=user...
[7] https://lkml.org/lkml/2018/11/21/370
[8] https://lkml.org/lkml/2018/12/30/64
This patch (of 19):
Add helper for writeprotect check. Will use it later.
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Bobby Powers <bobbypowers@gmail.com>
Cc: Brian Geffon <bgeffon@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Denis Plotnikov <dplotnikov@virtuozzo.com>
Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com>
Cc: Martin Cracauer <cracauer@cons.org>
Cc: Marty McFadden <mcfadden8@llnl.gov>
Cc: Maya Gokhale <gokhale2@llnl.gov>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Link: http://lkml.kernel.org/r/20200220163112.11409-2-peterx@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 include/linux/userfaultfd_k.h | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 37c9eba75c98..38f748e7186e 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -50,6 +50,11 @@ static inline bool userfaultfd_missing(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_UFFD_MISSING;
 }
 
+static inline bool userfaultfd_wp(struct vm_area_struct *vma)
+{
+	return vma->vm_flags & VM_UFFD_WP;
+}
+
 static inline bool userfaultfd_armed(struct vm_area_struct *vma)
 {
 	return vma->vm_flags & (VM_UFFD_MISSING | VM_UFFD_WP);
@@ -94,6 +99,11 @@ static inline bool userfaultfd_missing(struct vm_area_struct *vma)
 	return false;
 }
 
+static inline bool userfaultfd_wp(struct vm_area_struct *vma)
+{
+	return false;
+}
+
 static inline bool userfaultfd_armed(struct vm_area_struct *vma)
 {
 	return false;
From: Nadav Amit <namit@vmware.com>
Userfaultfd self-test fails occasionally, indicating a memory corruption.
Analyzing this problem indicates that there is a real bug since mmap_lock is only taken for read in mwriteprotect_range() and defers flushes, and since there is insufficient consideration of concurrent deferred TLB flushes in wp_page_copy(). Although the PTE is flushed from the TLBs in wp_page_copy(), this flush takes place after the copy has already been performed, and therefore changes of the page are possible between the time of the copy and the time in which the PTE is flushed.
To make matters worse, memory-unprotection using userfaultfd also poses a problem. Although memory unprotection is logically a promotion of PTE permissions, and therefore should not require a TLB flush, the current userfaultfd code might actually cause a demotion of the architectural PTE permission: when userfaultfd_writeprotect() unprotects a memory region, it unintentionally *clears* the RW-bit if it was already set. Note that unprotecting a PTE that is not write-protected is a valid use-case: the userfaultfd monitor might ask to unprotect a region that holds both write-protected and write-unprotected PTEs.
The scenario that happens in selftests/vm/userfaultfd is as follows:
	cpu0				cpu1			cpu2
	----				----			----
							[ Writable PTE
							  cached in TLB ]
	userfaultfd_writeprotect()
	[ write-*unprotect* ]
	mwriteprotect_range()
	mmap_read_lock()
	change_protection()

	change_protection_range()
	...
	change_pte_range()
	[ *clear* "write"-bit ]
	[ defer TLB flushes ]
				[ page-fault ]
				...
				wp_page_copy()
				 cow_user_page()
				  [ copy page ]
							[ write to old
							  page ]
				...
				 set_pte_at_notify()
A similar scenario can happen:
	cpu0			cpu1			cpu2			cpu3
	----			----			----			----
						[ Writable PTE
						  cached in TLB ]
	userfaultfd_writeprotect()
	[ write-protect ]
	[ deferred TLB flush ]
				userfaultfd_writeprotect()
				[ write-unprotect ]
				[ deferred TLB flush ]
									[ page-fault ]
									wp_page_copy()
									 cow_user_page()
									 [ copy page ]
									 ...
									 [ write to page ]
									 set_pte_at_notify()
This race exists since commit 292924b26024 ("userfaultfd: wp: apply _PAGE_UFFD_WP bit"). Yet, as Yu Zhao pointed out, these races became apparent since commit 09854ba94c6a ("mm: do_wp_page() simplification"), which made wp_page_copy() more likely to take place, specifically if page_count(page) > 1 during unprotection.
To resolve the aforementioned races, check whether there are pending flushes on uffd-write-protected VMAs, and if there are, perform a flush before doing the COW.
Further optimizations will follow to avoid unnecessary PTE write-protection and TLB flushes during uffd-write-unprotect.
Link: https://lkml.kernel.org/r/20210304095423.3825684-1-namit@vmware.com
Fixes: 09854ba94c6a ("mm: do_wp_page() simplification")
Signed-off-by: Nadav Amit <namit@vmware.com>
Suggested-by: Yu Zhao <yuzhao@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Peter Xu <peterx@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: <stable@vger.kernel.org>	[5.9+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 mm/memory.c | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c index 656d90a75cf8..fe6e92de9bec 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2825,6 +2825,14 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma;
+ /* + * Userfaultfd write-protect can defer flushes. Ensure the TLB + * is flushed in this case before copying. + */ + if (unlikely(userfaultfd_wp(vmf->vma) && + mm_tlb_flush_pending(vmf->vma->vm_mm))) + flush_tlb_page(vmf->vma, vmf->address); + vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte); if (!vmf->page) { /*