The patch titled
     Subject: kasan: refactor pcpu kasan vmalloc unpoison
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     kasan-refactor-pcpu-kasan-vmalloc-unpoison.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Subject: kasan: refactor pcpu kasan vmalloc unpoison
Date: Thu, 04 Dec 2025 19:00:04 +0000
A KASAN tag mismatch, possibly causing a kernel panic, can be observed on systems with tag-based KASAN enabled and multiple NUMA nodes. It was reported on arm64 and reproduced on x86. It can be explained in the following points (see the illustrative sketch after the list):
1. There can be more than one virtual memory chunk.
2. Chunk's base address has a tag.
3. The base address points at the first chunk and thus inherits the tag of
   the first chunk.
4. The subsequent chunks will be accessed with the tag from the first chunk.
5. Thus, the subsequent chunks need to have their tag set to match that of
   the first chunk.
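As an illustration of points 3-5, here is a minimal userspace sketch (not kernel code) of how a software tag kept in the top byte of a pointer stops matching once an address is derived from another chunk's base. TAG_SHIFT, the tag values and the helper names below are made up for this example and assume 64-bit pointers; only the mechanism mirrors the problem described above.

#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT 56	/* top byte of a 64-bit pointer, as SW_TAGS uses */

/* Embed a software tag in the top byte of an address. */
static uintptr_t set_tag(uintptr_t addr, uint8_t tag)
{
	return (addr & ~((uintptr_t)0xff << TAG_SHIFT)) |
	       ((uintptr_t)tag << TAG_SHIFT);
}

static uint8_t get_tag(uintptr_t addr)
{
	return addr >> TAG_SHIFT;
}

int main(void)
{
	/* Two vmalloc chunks, each unpoisoned with its own (made-up) tag. */
	uintptr_t chunk0_base = set_tag(0x1000, 0x2a);	/* shadow tag 0x2a */
	uint8_t chunk1_shadow_tag = 0x5d;		/* shadow tag 0x5d */

	/* Per-cpu code derives chunk 1's address from chunk 0's base... */
	uintptr_t derived = chunk0_base + 0x100000;

	/* ...so the pointer still carries chunk 0's tag: a tag mismatch. */
	if (get_tag(derived) != chunk1_shadow_tag)
		printf("tag mismatch: pointer tag 0x%02x, shadow tag 0x%02x\n",
		       get_tag(derived), chunk1_shadow_tag);
	return 0;
}

With real tag-based KASAN the mismatch is caught on access and reported (and can escalate to a panic), rather than being printed as above.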
Refactor the code by reusing __kasan_unpoison_vmalloc() in a new helper, in preparation for the actual fix.
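The shape of that refactor can be shown with userspace stand-ins rather than the kernel API (struct vm_struct_stub, unpoison_stub() and unpoison_vmap_areas_stub() are placeholder names invented here): the open-coded per-area loop moves into one helper so the follow-up fix can adjust the tagging in a single place. Only the call pattern mirrors the diff below.

struct vm_struct_stub {
	void *addr;
	unsigned long size;
};

typedef unsigned int kasan_vmalloc_flags_t;

/* Stand-in for __kasan_unpoison_vmalloc(): returns the (re)tagged address. */
static void *unpoison_stub(const void *addr, unsigned long size,
			   kasan_vmalloc_flags_t flags)
{
	(void)size;
	(void)flags;
	return (void *)addr;
}

/* Same shape as the new helper: walk the areas, store the returned address. */
static void unpoison_vmap_areas_stub(struct vm_struct_stub **vms, int nr_vms,
				     kasan_vmalloc_flags_t flags)
{
	int area;

	for (area = 0; area < nr_vms; area++)
		vms[area]->addr = unpoison_stub(vms[area]->addr,
						vms[area]->size, flags);
}

int main(void)
{
	struct vm_struct_stub a = { (void *)0x1000, 0x1000 };
	struct vm_struct_stub b = { (void *)0x2000, 0x1000 };
	struct vm_struct_stub *vms[] = { &a, &b };

	/* The caller now makes one call instead of open-coding the loop. */
	unpoison_vmap_areas_stub(vms, 2, 0);
	return 0;
}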
Link: https://lkml.kernel.org/r/eb61d93b907e262eefcaa130261a08bcb6c5ce51.176487457...
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Fixes: 1d96320f8d53 ("kasan, vmalloc: add vmalloc tagging for SW_TAGS")
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: Dmitriy Vyukov <dvyukov@google.com>
Cc: Jiayuan Chen <jiayuan.chen@linux.dev>
Cc: Kees Cook <kees@kernel.org>
Cc: Marco Elver <elver@google.com>
Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: <stable@vger.kernel.org>	[6.1+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/kasan.h |   15 +++++++++++++++
 mm/kasan/common.c     |   17 +++++++++++++++++
 mm/vmalloc.c          |    4 +---
 3 files changed, 33 insertions(+), 3 deletions(-)
--- a/include/linux/kasan.h~kasan-refactor-pcpu-kasan-vmalloc-unpoison
+++ a/include/linux/kasan.h
@@ -615,6 +615,16 @@ static __always_inline void kasan_poison
 		__kasan_poison_vmalloc(start, size);
 }
 
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
+				 kasan_vmalloc_flags_t flags);
+static __always_inline void
+kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
+			  kasan_vmalloc_flags_t flags)
+{
+	if (kasan_enabled())
+		__kasan_unpoison_vmap_areas(vms, nr_vms, flags);
+}
+
 #else /* CONFIG_KASAN_VMALLOC */
 
 static inline void kasan_populate_early_vm_area_shadow(void *start,
@@ -639,6 +649,11 @@ static inline void *kasan_unpoison_vmall
 static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
 { }
 
+static __always_inline void
+kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
+			  kasan_vmalloc_flags_t flags)
+{ }
+
 #endif /* CONFIG_KASAN_VMALLOC */
 
 #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
--- a/mm/kasan/common.c~kasan-refactor-pcpu-kasan-vmalloc-unpoison
+++ a/mm/kasan/common.c
@@ -28,6 +28,7 @@
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/bug.h>
+#include <linux/vmalloc.h>
 
 #include "kasan.h"
 #include "../slab.h"
@@ -582,3 +583,19 @@ bool __kasan_check_byte(const void *addr
 	}
 	return true;
 }
+
+#ifdef CONFIG_KASAN_VMALLOC
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
+				 kasan_vmalloc_flags_t flags)
+{
+	unsigned long size;
+	void *addr;
+	int area;
+
+	for (area = 0 ; area < nr_vms ; area++) {
+		size = vms[area]->size;
+		addr = vms[area]->addr;
+		vms[area]->addr = __kasan_unpoison_vmalloc(addr, size, flags);
+	}
+}
+#endif
--- a/mm/vmalloc.c~kasan-refactor-pcpu-kasan-vmalloc-unpoison
+++ a/mm/vmalloc.c
@@ -4872,9 +4872,7 @@ retry:
 	 * With hardware tag-based KASAN, marking is skipped for
 	 * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
 	 */
-	for (area = 0; area < nr_vms; area++)
-		vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
-				vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
+	kasan_unpoison_vmap_areas(vms, nr_vms, KASAN_VMALLOC_PROT_NORMAL);
 
 	kfree(vas);
 	return vms;
_
Patches currently in -mm which might be from maciej.wieczor-retman@intel.com are
kasan-refactor-pcpu-kasan-vmalloc-unpoison.patch
kasan-unpoison-vms-addresses-with-a-common-tag.patch