The patch titled
     Subject: mm: protect kernel pgtables in apply_to_pte_range()
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-protect-kernel-pgtables-in-apply_to_pte_range.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: Alexander Gordeev <agordeev@linux.ibm.com>
Subject: mm: protect kernel pgtables in apply_to_pte_range()
Date: Tue, 8 Apr 2025 18:07:32 +0200
The lazy MMU mode can only be entered and left under the protection of the page table locks for all page tables which may be modified.  Yet, when it comes to kernel mappings, apply_to_pte_range() does not take any locks.  That does not conform to the arch_enter|leave_lazy_mmu_mode() semantics and could potentially lead to re-scheduling a process while in lazy MMU mode or to racing on kernel page table updates.
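
For illustration only (not part of the patch): a condensed, hypothetical sketch of the locking discipline being enforced for the kernel (init_mm) case.  The helper name kernel_pte_walk_sketch and its exact shape are made up and omit details such as the pte NULL check and mask tracking; spin_lock(&init_mm.page_table_lock), arch_enter/leave_lazy_mmu_mode(), pte_offset_kernel() and pte_fn_t are the real kernel interfaces involved.

static int kernel_pte_walk_sketch(pmd_t *pmd, unsigned long addr,
                                  unsigned long end, pte_fn_t fn, void *data)
{
        pte_t *pte = pte_offset_kernel(pmd, addr);
        int err = 0;

        /* Lazy MMU mode must be entered and left under the page table lock. */
        spin_lock(&init_mm.page_table_lock);
        arch_enter_lazy_mmu_mode();

        do {
                err = fn(pte++, addr, data);    /* callback may update the PTE */
                if (err)
                        break;
        } while (addr += PAGE_SIZE, addr != end);

        arch_leave_lazy_mmu_mode();
        spin_unlock(&init_mm.page_table_lock);

        return err;
}
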
Link: https://lkml.kernel.org/r/ef8f6538b83b7fc3372602f90375348f9b4f3596.174412812...
Fixes: 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy updates")
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: <stable@vger.kernel.org>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/kasan/shadow.c |    7 ++-----
 mm/memory.c       |    5 ++++-
 2 files changed, 6 insertions(+), 6 deletions(-)
--- a/mm/kasan/shadow.c~mm-protect-kernel-pgtables-in-apply_to_pte_range
+++ a/mm/kasan/shadow.c
@@ -308,14 +308,14 @@ static int kasan_populate_vmalloc_pte(pt
 	__memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
 	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
 
-	spin_lock(&init_mm.page_table_lock);
 	if (likely(pte_none(ptep_get(ptep)))) {
 		set_pte_at(&init_mm, addr, ptep, pte);
 		page = 0;
 	}
-	spin_unlock(&init_mm.page_table_lock);
+
 	if (page)
 		free_page(page);
+
 	return 0;
 }
 
@@ -401,13 +401,10 @@ static int kasan_depopulate_vmalloc_pte(
 
 	page = (unsigned long)__va(pte_pfn(ptep_get(ptep)) << PAGE_SHIFT);
 
-	spin_lock(&init_mm.page_table_lock);
-
 	if (likely(!pte_none(ptep_get(ptep)))) {
 		pte_clear(&init_mm, addr, ptep);
 		free_page(page);
 	}
-	spin_unlock(&init_mm.page_table_lock);
 
 	return 0;
 }
--- a/mm/memory.c~mm-protect-kernel-pgtables-in-apply_to_pte_range
+++ a/mm/memory.c
@@ -2926,6 +2926,7 @@ static int apply_to_pte_range(struct mm_
 		pte = pte_offset_kernel(pmd, addr);
 		if (!pte)
 			return err;
+		spin_lock(&init_mm.page_table_lock);
 	} else {
 		if (create)
 			pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
@@ -2951,7 +2952,9 @@ static int apply_to_pte_range(struct mm_
 
 	arch_leave_lazy_mmu_mode();
 
-	if (mm != &init_mm)
+	if (mm == &init_mm)
+		spin_unlock(&init_mm.page_table_lock);
+	else
 		pte_unmap_unlock(mapped_pte, ptl);
 
 	*mask |= PGTBL_PTE_MODIFIED;
_
Patches currently in -mm which might be from agordeev@linux.ibm.com are
kasan-avoid-sleepable-page-allocation-from-atomic-context.patch
mm-cleanup-apply_to_pte_range-routine.patch
mm-protect-kernel-pgtables-in-apply_to_pte_range.patch