The quilt patch titled
     Subject: mm/migrate: fix wrongly apply write bit after mkdirty on sparc64
has been removed from the -mm tree.  Its filename was
     mm-migrate-fix-wrongly-apply-write-bit-after-mkdirty-on-sparc64.patch
This patch was dropped because it was merged into the mm-hotfixes-stable branch of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Peter Xu <peterx@redhat.com>
Subject: mm/migrate: fix wrongly apply write bit after mkdirty on sparc64
Date: Thu, 16 Feb 2023 10:30:59 -0500
Nick Bowler reported another sparc64 breakage after the young/dirty bit persistence work for page migration (per "Link:" below). That's after a similar report [1].
It turns out page migration was overlooked, and it wasn't failing before because page migration was not enabled in the initial report's test environment.
David proposed another way [2] to fix this from the sparc64 side, but that patch didn't land for some reason. Nor have I checked whether any other arch has similar issues.
Let's fix it for now simply by moving the write bit handling to after the dirty bit, like what we did before.
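For context on why the ordering matters: on sparc64 the dirty helpers (pte_mkdirty()/pmd_mkdirty()) also set the hardware write bit, so a write-bit decision made before mkdirty can be silently overridden. The following stand-alone user-space sketch is not kernel code; the flag bits and sketch_* helpers are made up purely for illustration. It only models why the write bit has to be handled (and, for read-only entries, dropped again with a wrprotect) after the dirty bit:

#include <stdio.h>
#include <stdbool.h>

typedef unsigned long pte_t;

#define PTE_WRITE 0x1UL
#define PTE_DIRTY 0x2UL

/* Models sparc64-like behaviour: making a pte dirty also makes it writable. */
static pte_t sketch_mkdirty(pte_t pte)   { return pte | PTE_DIRTY | PTE_WRITE; }
static pte_t sketch_mkwrite(pte_t pte)   { return pte | PTE_WRITE; }
static pte_t sketch_wrprotect(pte_t pte) { return pte & ~PTE_WRITE; }

/* Old ordering: write bit decided first, then mkdirty re-adds it. */
static pte_t restore_old_order(bool writable, bool dirty)
{
	pte_t pte = 0;

	if (writable)
		pte = sketch_mkwrite(pte);
	if (dirty)
		pte = sketch_mkdirty(pte);
	return pte;
}

/* New ordering: dirty first, write bit handled last, wrprotect otherwise. */
static pte_t restore_new_order(bool writable, bool dirty)
{
	pte_t pte = 0;

	if (dirty)
		pte = sketch_mkdirty(pte);
	if (writable)
		pte = sketch_mkwrite(pte);
	else
		pte = sketch_wrprotect(pte);
	return pte;
}

int main(void)
{
	/* A read-only but dirty migration entry: the old order leaks the write bit. */
	printf("old order: write bit = %d\n", !!(restore_old_order(false, true) & PTE_WRITE));
	printf("new order: write bit = %d\n", !!(restore_new_order(false, true) & PTE_WRITE));
	return 0;
}

Running the sketch, the old ordering reports the write bit set for a read-only dirty entry, while the new ordering keeps it clear, which is exactly the change the patch below makes in both the PMD and PTE migration paths.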
Note: this is based on mm-unstable, because the breakage has been present since 6.1 and we're at a very late stage of 6.2 (-rc8), so I assume this specific fix should target 6.3.
[1] https://lore.kernel.org/all/20221021160603.GA23307@u164.east.ru/
[2] https://lore.kernel.org/all/20221212130213.136267-1-david@redhat.com/
Link: https://lkml.kernel.org/r/20230216153059.256739-1-peterx@redhat.com
Fixes: 2e3468778dbe ("mm: remember young/dirty bit for page migrations")
Link: https://lore.kernel.org/all/CADyTPExpEqaJiMGoV+Z6xVgL50ZoMJg49B10LcZ=8eg19u3...
Signed-off-by: Peter Xu <peterx@redhat.com>
Reported-by: Nick Bowler <nbowler@draconx.ca>
Acked-by: David Hildenbrand <david@redhat.com>
Tested-by: Nick Bowler <nbowler@draconx.ca>
Cc: <regressions@lists.linux.dev>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
--- a/mm/huge_memory.c~mm-migrate-fix-wrongly-apply-write-bit-after-mkdirty-on-sparc64
+++ a/mm/huge_memory.c
@@ -3272,8 +3272,6 @@ void remove_migration_pmd(struct page_vm
 	pmde = mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
-	if (is_writable_migration_entry(entry))
-		pmde = maybe_pmd_mkwrite(pmde, vma);
 	if (pmd_swp_uffd_wp(*pvmw->pmd))
 		pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde));
 	if (!is_migration_entry_young(entry))
@@ -3281,6 +3279,10 @@ void remove_migration_pmd(struct page_vm
 	/* NOTE: this may contain setting soft-dirty on some archs */
 	if (PageDirty(new) && is_migration_entry_dirty(entry))
 		pmde = pmd_mkdirty(pmde);
+	if (is_writable_migration_entry(entry))
+		pmde = maybe_pmd_mkwrite(pmde, vma);
+	else
+		pmde = pmd_wrprotect(pmde);
 
 	if (PageAnon(new)) {
 		rmap_t rmap_flags = RMAP_COMPOUND;
--- a/mm/migrate.c~mm-migrate-fix-wrongly-apply-write-bit-after-mkdirty-on-sparc64
+++ a/mm/migrate.c
@@ -224,6 +224,8 @@ static bool remove_migration_pte(struct
 			pte = maybe_mkwrite(pte, vma);
 		else if (pte_swp_uffd_wp(*pvmw.pte))
 			pte = pte_mkuffd_wp(pte);
+		else
+			pte = pte_wrprotect(pte);
 
 		if (folio_test_anon(folio) && !is_readable_migration_entry(entry))
 			rmap_flags |= RMAP_EXCLUSIVE;
_
Patches currently in -mm which might be from peterx@redhat.com are
mm-uffd-fix-comment-in-handling-pte-markers.patch