The patch below does not apply to the 4.19-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to <stable@vger.kernel.org>.
Possible dependencies:
fd35ca3d12cc ("mm/migrate_device.c: copy pte dirty bit to page") a3589e1d5fe3 ("mm/migrate_device.c: add missing flush_cache_page()") 6c287605fd56 ("mm: remember exclusively mapped anonymous pages with PG_anon_exclusive") 6c54dc6c7437 ("mm/rmap: use page_move_anon_rmap() when reusing a mapped PageAnon() page exclusively") 28c5209dfd5f ("mm/rmap: pass rmap flags to hugepage_add_anon_rmap()") f1e2db12e45b ("mm/rmap: remove do_page_add_anon_rmap()") 14f9135d5470 ("mm/rmap: convert RMAP flags to a proper distinct rmap_t type") fb3d824d1a46 ("mm/rmap: split page_dup_rmap() into page_dup_file_rmap() and page_try_dup_anon_rmap()") b51ad4f8679e ("mm/memory: slightly simplify copy_present_pte()") 623a1ddfeb23 ("mm/hugetlb: take src_mm->write_protect_seq in copy_hugetlb_page_range()") 3bff7e3f1f16 ("mm/huge_memory: streamline COW logic in do_huge_pmd_wp_page()") c145e0b47c77 ("mm: streamline COW logic in do_swap_page()") 84d60fdd3733 ("mm: slightly clarify KSM logic in do_swap_page()") 53a05ad9f21d ("mm: optimize do_wp_page() for exclusive pages in the swapcache") 9030fb0bb9d6 ("Merge tag 'folio-5.18c' of git://git.infradead.org/users/willy/pagecache")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From fd35ca3d12cc9922d7d9a35f934e72132dbc4853 Mon Sep 17 00:00:00 2001
From: Alistair Popple <apopple@nvidia.com>
Date: Fri, 2 Sep 2022 10:35:53 +1000
Subject: [PATCH] mm/migrate_device.c: copy pte dirty bit to page
migrate_vma_setup() has a fast path in migrate_vma_collect_pmd() that installs migration entries directly if it can lock the migrating page. When removing a dirty pte, the dirty bit is supposed to be carried over to the underlying page to prevent it from being lost.
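For illustration, the carry-over pattern looks roughly like this (a minimal sketch, not the literal kernel code; the helper name clear_pte_keep_dirty is made up for this example):

	#include <linux/mm.h>
	#include <linux/pgtable.h>

	/*
	 * Sketch of the dirty-bit carry-over: ptep_get_and_clear() returns
	 * the old pte value, so its dirty bit can still be examined after
	 * the mapping is torn down and transferred to the backing page.
	 */
	static void clear_pte_keep_dirty(struct mm_struct *mm, unsigned long addr,
					 pte_t *ptep, struct page *page)
	{
		pte_t pte = ptep_get_and_clear(mm, addr, ptep);

		if (pte_dirty(pte))
			folio_mark_dirty(page_folio(page));
	}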
Currently migrate_vma_*() can only be used for private anonymous mappings. That means loss of the dirty bit usually doesn't result in data loss, because these pages are typically not file-backed. However, pages may be backed by swap storage, which can result in data loss if an attempt is made to migrate a dirty page that doesn't yet have the PageDirty flag set.

In this case migration will fail due to unexpected references, but the dirty pte bit will be lost. If the page is subsequently reclaimed, data won't be written back to swap storage because the page is considered uptodate, resulting in data loss if the page is later accessed.
Prevent this by copying the dirty bit to the page when removing the pte to match what try_to_migrate_one() does.
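For comparison, the corresponding sequence in try_to_migrate_one() (mm/rmap.c) reads roughly as follows (trimmed excerpt):

	/* Nuke the page table entry. */
	pteval = ptep_clear_flush(vma, address, pvmw.pte);

	/* Set the dirty flag on the folio now the pte is gone. */
	if (pte_dirty(pteval))
		folio_mark_dirty(folio);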
Link: https://lkml.kernel.org/r/dd48e4882ce859c295c1a77612f66d198b0403f9.166207852...
Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Reported-by: "Huang, Ying" <ying.huang@intel.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Alex Sierra <alex.sierra@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: huang ying <huang.ying.caritas@gmail.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 4cc849c3b54c..dbf6c7a7a7c9 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -7,6 +7,7 @@
 #include <linux/export.h>
 #include <linux/memremap.h>
 #include <linux/migrate.h>
+#include <linux/mm.h>
 #include <linux/mm_inline.h>
 #include <linux/mmu_notifier.h>
 #include <linux/oom.h>
@@ -196,7 +197,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			flush_cache_page(vma, addr, pte_pfn(*ptep));
 			anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
 			if (anon_exclusive) {
-				ptep_clear_flush(vma, addr, ptep);
+				pte = ptep_clear_flush(vma, addr, ptep);

 				if (page_try_share_anon_rmap(page)) {
 					set_pte_at(mm, addr, ptep, pte);
@@ -206,11 +207,15 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 					goto next;
 				}
 			} else {
-				ptep_get_and_clear(mm, addr, ptep);
+				pte = ptep_get_and_clear(mm, addr, ptep);
 			}

 			migrate->cpages++;

+			/* Set the dirty flag on the folio now the pte is gone. */
+			if (pte_dirty(pte))
+				folio_mark_dirty(page_folio(page));
+
 			/* Setup special migration page table entry */
 			if (mpfn & MIGRATE_PFN_WRITE)
 				entry = make_writable_migration_entry(