The patch titled
     Subject: mm/kasan: fix incorrect unpoisoning in vrealloc for KASAN
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: Jiayuan Chen <jiayuan.chen@linux.dev>
Subject: mm/kasan: fix incorrect unpoisoning in vrealloc for KASAN
Date: Fri, 28 Nov 2025 19:15:14 +0800
Syzkaller reported a memory out-of-bounds bug [1]. This patch fixes two issues:
1. In vrealloc, we were missing the KASAN_VMALLOC_VM_ALLOC flag when unpoisoning the extended region. This flag is required to correctly associate the allocation with KASAN's vmalloc tracking.
Note: In contrast, vzalloc (via __vmalloc_node_range_noprof) explicitly sets KASAN_VMALLOC_VM_ALLOC and calls kasan_unpoison_vmalloc() with it. vrealloc must behave consistently, especially when reusing existing vmalloc regions, to ensure KASAN can track allocations correctly.
2. When vrealloc reuses an existing vmalloc region (without allocating new pages), KASAN previously generated a new tag, which broke tag-based memory access tracking. We now add a 'reuse_tag' parameter to __kasan_unpoison_vmalloc() to preserve the original tag in such cases.
A new helper kasan_unpoison_vrealloc() is introduced to handle this reuse scenario, ensuring consistent tag behavior during reallocation.
Link: https://lkml.kernel.org/r/20251128111516.244497-1-jiayuan.chen@linux.dev
Link: https://syzkaller.appspot.com/bug?extid=997752115a851cb0cf36 [1]
Fixes: a0309faf1cb0 ("mm: vmalloc: support more granular vrealloc() sizing")
Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev>
Reported-by: syzbot+997752115a851cb0cf36@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/68e243a2.050a0220.1696c6.007d.GAE@google.com/T/
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: Dmitriy Vyukov <dvyukov@google.com>
Cc: Kees Cook <kees@kernel.org>
Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/kasan.h |   21 +++++++++++++++++++--
 mm/kasan/hw_tags.c    |    4 ++--
 mm/kasan/shadow.c     |    6 ++++--
 mm/vmalloc.c          |    4 ++--
 4 files changed, 27 insertions(+), 8 deletions(-)
--- a/include/linux/kasan.h~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
+++ a/include/linux/kasan.h
@@ -596,13 +596,23 @@ static inline void kasan_release_vmalloc
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
 void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
-			       kasan_vmalloc_flags_t flags);
+			       kasan_vmalloc_flags_t flags, bool reuse_tag);
+
+static __always_inline void *kasan_unpoison_vrealloc(const void *start,
+						     unsigned long size,
+						     kasan_vmalloc_flags_t flags)
+{
+	if (kasan_enabled())
+		return __kasan_unpoison_vmalloc(start, size, flags, true);
+	return (void *)start;
+}
+
 static __always_inline void *kasan_unpoison_vmalloc(const void *start,
 						    unsigned long size,
 						    kasan_vmalloc_flags_t flags)
 {
 	if (kasan_enabled())
-		return __kasan_unpoison_vmalloc(start, size, flags);
+		return __kasan_unpoison_vmalloc(start, size, flags, false);
 	return (void *)start;
 }
 
@@ -629,6 +639,13 @@ static inline void kasan_release_vmalloc
 					 unsigned long free_region_end,
 					 unsigned long flags) { }
 
+static inline void *kasan_unpoison_vrealloc(const void *start,
+					    unsigned long size,
+					    kasan_vmalloc_flags_t flags)
+{
+	return (void *)start;
+}
+
 static inline void *kasan_unpoison_vmalloc(const void *start,
 					   unsigned long size,
 					   kasan_vmalloc_flags_t flags)
--- a/mm/kasan/hw_tags.c~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
+++ a/mm/kasan/hw_tags.c
@@ -317,7 +317,7 @@ static void init_vmalloc_pages(const voi
 }
 
 void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
-				kasan_vmalloc_flags_t flags)
+				kasan_vmalloc_flags_t flags, bool reuse_tag)
 {
 	u8 tag;
 	unsigned long redzone_start, redzone_size;
@@ -361,7 +361,7 @@ void *__kasan_unpoison_vmalloc(const voi
 		return (void *)start;
 	}
 
-	tag = kasan_random_tag();
+	tag = reuse_tag ? get_tag(start) : kasan_random_tag();
 	start = set_tag(start, tag);
 
 	/* Unpoison and initialize memory up to size. */
--- a/mm/kasan/shadow.c~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
+++ a/mm/kasan/shadow.c
@@ -625,7 +625,7 @@ void kasan_release_vmalloc(unsigned long
 }
 
 void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
-				kasan_vmalloc_flags_t flags)
+				kasan_vmalloc_flags_t flags, bool reuse_tag)
 {
 	/*
 	 * Software KASAN modes unpoison both VM_ALLOC and non-VM_ALLOC
@@ -648,7 +648,9 @@ void *__kasan_unpoison_vmalloc(const voi
 	    !(flags & KASAN_VMALLOC_PROT_NORMAL))
 		return (void *)start;
 
-	start = set_tag(start, kasan_random_tag());
+	if (!reuse_tag)
+		start = set_tag(start, kasan_random_tag());
+
 	kasan_unpoison(start, size, false);
 	return (void *)start;
 }
--- a/mm/vmalloc.c~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
+++ a/mm/vmalloc.c
@@ -4175,8 +4175,8 @@ void *vrealloc_node_align_noprof(const v
 	 * We already have the bytes available in the allocation; use them.
 	 */
 	if (size <= alloced_size) {
-		kasan_unpoison_vmalloc(p + old_size, size - old_size,
-				       KASAN_VMALLOC_PROT_NORMAL);
+		kasan_unpoison_vrealloc(p, size,
+				KASAN_VMALLOC_PROT_NORMAL | KASAN_VMALLOC_VM_ALLOC);
 		/*
 		 * No need to zero memory here, as unused memory will have
 		 * already been zeroed at initial allocation time or during
_
Patches currently in -mm which might be from jiayuan.chen@linux.dev are
mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan.patch
mm-vmscan-skip-increasing-kswapd_failures-when-reclaim-was-boosted.patch