The quilt patch titled
     Subject: mm/vmalloc: leave lazy MMU mode on PTE mapping error
has been removed from the -mm tree.  Its filename was
     mm-vmalloc-leave-lazy-mmu-mode-on-pte-mapping-error.patch
This patch was dropped because it was merged into the mm-hotfixes-stable branch of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Alexander Gordeev <agordeev@linux.ibm.com>
Subject: mm/vmalloc: leave lazy MMU mode on PTE mapping error
Date: Mon, 23 Jun 2025 09:57:21 +0200
vmap_pages_pte_range() enters the lazy MMU mode, but fails to leave it in case an error is encountered.
Link: https://lkml.kernel.org/r/20250623075721.2817094-1-agordeev@linux.ibm.com
Fixes: 2ba3e6947aed ("mm/vmalloc: track which page-table levels were modified")
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/r/202506132017.T1l1l6ME-lkp@intel.com/
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/vmalloc.c |   22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

--- a/mm/vmalloc.c~mm-vmalloc-leave-lazy-mmu-mode-on-pte-mapping-error
+++ a/mm/vmalloc.c
@@ -514,6 +514,7 @@ static int vmap_pages_pte_range(pmd_t *p
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
+	int err = 0;
 	pte_t *pte;
 
 	/*
@@ -530,12 +531,18 @@ static int vmap_pages_pte_range(pmd_t *p
 	do {
 		struct page *page = pages[*nr];
 
-		if (WARN_ON(!pte_none(ptep_get(pte))))
-			return -EBUSY;
-		if (WARN_ON(!page))
-			return -ENOMEM;
-		if (WARN_ON(!pfn_valid(page_to_pfn(page))))
-			return -EINVAL;
+		if (WARN_ON(!pte_none(ptep_get(pte)))) {
+			err = -EBUSY;
+			break;
+		}
+		if (WARN_ON(!page)) {
+			err = -ENOMEM;
+			break;
+		}
+		if (WARN_ON(!pfn_valid(page_to_pfn(page)))) {
+			err = -EINVAL;
+			break;
+		}
 
 		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
 		(*nr)++;
@@ -543,7 +550,8 @@ static int vmap_pages_pte_range(pmd_t *p
 
 	arch_leave_lazy_mmu_mode();
 	*mask |= PGTBL_PTE_MODIFIED;
-	return 0;
+
+	return err;
 }
 
 static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr,
_
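For context, the fix replaces the early returns in vmap_pages_pte_range() with a
single exit path, so arch_leave_lazy_mmu_mode() always pairs with the earlier
arch_enter_lazy_mmu_mode(), even when a PTE mapping step fails.  Below is a
minimal userspace sketch of that control-flow pattern, not kernel code: the
enter/leave helpers, the loop body and the error condition are stand-ins for
illustration only.

#include <stdio.h>

/* Stand-ins for arch_enter_lazy_mmu_mode()/arch_leave_lazy_mmu_mode(). */
static void enter_lazy_mmu(void) { printf("enter lazy MMU mode\n"); }
static void leave_lazy_mmu(void) { printf("leave lazy MMU mode\n"); }

/* "Map" n entries, simulating a failure once fail_at is reached. */
static int map_range(int n, int fail_at)
{
	int err = 0;
	int i;

	enter_lazy_mmu();

	for (i = 0; i < n; i++) {
		if (i == fail_at) {
			err = -1;	/* record the error ...        */
			break;		/* ... but do not return early */
		}
		printf("mapped entry %d\n", i);
	}

	/* Single exit: lazy MMU mode is left on success and on error. */
	leave_lazy_mmu();

	return err;
}

int main(void)
{
	/* Fails at entry 2; the "leave" message is still printed. */
	return map_range(4, 2) ? 1 : 0;
}

Whether or not the loop hits an error, control falls through to the same
teardown; that is exactly what the break statements in the patch achieve
relative to the old return statements.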
Patches currently in -mm which might be from agordeev@linux.ibm.com are