The patch below does not apply to the 4.14-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to <stable@vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-4.14.y
git checkout FETCH_HEAD
git cherry-pick -x 66b2ca086210732954a7790d63d35542936fc664
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '2023052253-oppressed-blurb-418a@gregkh' --subject-prefix 'PATCH 4.14.y' HEAD^..
Possible dependencies:
66b2ca086210 ("powerpc/64s/radix: Fix soft dirty tracking") 47d99948eee4 ("powerpc/mm: Move book3s64 specifics in subdirectory mm/book3s64") fb0b0a73b223 ("powerpc: Enable kcov") e66c3209c7fd ("powerpc: Move page table dump files in a dedicated subdirectory") 7c91efce1608 ("powerpc/mm: dump block address translation on book3s/32") 0261a508c9fc ("powerpc/mm: dump segment registers on book3s/32") 32ea4c149990 ("powerpc/mm: Extend pte_fragment functionality to PPC32") a74791dd9833 ("powerpc/mm: add helpers to get/set mm.context->pte_frag") d09780f3a8d4 ("powerpc/mm: Move pgtable_t into platform headers") 994da93d1968 ("powerpc/mm: move platform specific mmu-xxx.h in platform directories") a95d133c8643 ("powerpc/mm: Move pte_fragment_alloc() to a common location") a43ccc4bc499 ("powerpc/book3s32: Remove CONFIG_BOOKE dependent code") 5b3e84fc10dd ("powerpc: change CONFIG_PPC_STD_MMU to CONFIG_PPC_BOOK3S") 68289ae935da ("powerpc: change CONFIG_PPC_STD_MMU_32 to CONFIG_PPC_BOOK3S_32") 9a8dd708d547 ("memblock: rename memblock_alloc{_nid,_try_nid} to memblock_phys_alloc*") 48e7b7695745 ("powerpc/64s/hash: Convert SLB miss handlers to C") 97026b5a5ac2 ("powerpc/mm: Split dump_pagelinuxtables flag_array table") 34eb138ed74d ("powerpc/mm: don't use _PAGE_EXEC for calling hash_preload()") c766ee72235d ("powerpc: handover page flags with a pgprot_t parameter") 56f3c1413f5c ("powerpc/mm: properly set PAGE_KERNEL flags in ioremap()")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 66b2ca086210732954a7790d63d35542936fc664 Mon Sep 17 00:00:00 2001
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Thu, 11 May 2023 21:42:24 +1000
Subject: [PATCH] powerpc/64s/radix: Fix soft dirty tracking
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
It was reported that soft dirty tracking doesn't work when using the Radix MMU.
The tracking is supposed to work by clearing the soft dirty bit for a mapping and then write protecting the PTE. If/when the page is written to, a page fault occurs and the soft dirty bit is added back via pte_mkdirty(). For example in wp_page_reuse():
  entry = maybe_mkwrite(pte_mkdirty(entry), vma);
  if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
          update_mmu_cache(vma, vmf->address, vmf->pte);
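(Editor's note, not part of the original commit message.) For reference, the tracking described above can be exercised from userspace through the documented soft-dirty interface: writing "4" to /proc/<pid>/clear_refs clears the soft-dirty bits and write protects the mappings, and bit 55 of each /proc/<pid>/pagemap entry reports the soft-dirty state. A minimal sketch along those lines, assuming only that documented layout, would be:

  /* Sketch: clear soft-dirty, write a page, then check bit 55 of the
   * corresponding pagemap entry for that page.
   */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
          long page = sysconf(_SC_PAGESIZE);
          char *buf = aligned_alloc(page, page);
          uint64_t entry;
          int fd;

          buf[0] = 1;     /* fault the page in */

          /* Writing "4" clears soft-dirty for all of this process's mappings. */
          fd = open("/proc/self/clear_refs", O_WRONLY);
          write(fd, "4", 1);
          close(fd);

          buf[0] = 2;     /* the write fault should re-mark the page soft-dirty */

          fd = open("/proc/self/pagemap", O_RDONLY);
          pread(fd, &entry, sizeof(entry),
                ((uintptr_t)buf / page) * sizeof(entry));
          close(fd);

          /* Expected: 1. With the bug described below, radix left it clear. */
          printf("soft-dirty: %d\n", (int)((entry >> 55) & 1));
          return 0;
  }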
Unfortunately on radix _PAGE_SOFTDIRTY is being dropped by radix__ptep_set_access_flags(), called from ptep_set_access_flags(), meaning the soft dirty bit is not set even though the page has been written to.
Fix it by adding _PAGE_SOFTDIRTY to the set of bits that are able to be changed in radix__ptep_set_access_flags().
Fixes: b0b5e9b13047 ("powerpc/mm/radix: Add radix pte #defines")
Cc: stable@vger.kernel.org # v4.7+
Reported-by: Dan Horák <dan@danny.cz>
Link: https://lore.kernel.org/r/20230511095558.56663a50f86bdc4cd97700b7@danny.cz
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230511114224.977423-1-mpe@ellerman.id.au
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 26245aaf12b8..2297aa764ecd 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1040,8 +1040,8 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
 				  pte_t entry, unsigned long address, int psize)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
-					      _PAGE_RW | _PAGE_EXEC);
+	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_SOFT_DIRTY |
+					      _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
 
 	unsigned long change = pte_val(entry) ^ pte_val(*ptep);
 	/*