When the SLAB_STORE_USER debug flag is used, any metadata placed after the original kmalloc request size (orig_size) is not properly aligned on 64-bit architectures because its type is unsigned int. When both KASAN and SLAB_STORE_USER are enabled, kasan_alloc_meta is misaligned.
Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS are assumed to require 64-bit accesses to be 64-bit aligned. See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert: "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
Because not all architectures support unaligned memory accesses, ensure that all metadata (track, orig_size, kasan_{alloc,free}_meta) in a slab object is word-aligned. struct track and kasan_{alloc,free}_meta are aligned by adding __aligned(__alignof__(unsigned long)).
For orig_size, use ALIGN(sizeof(unsigned int), __alignof__(unsigned long)) to make clear that its size remains unsigned int, but it must be aligned to a word boundary. On 64-bit architectures, this reserves 8 bytes for orig_size, which is acceptable since kmalloc's original request size tracking is intended for debugging rather than production use.
Cc: stable@vger.kernel.org
Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
v1 -> v2:
- Added Andrey's Acked-by.
- Added references to HAVE_64BIT_ALIGNED_ACCESS and the commit that
  resurrected it.
- Used __alignof__() instead of sizeof(), as suggested by Pedro
  (off-list). Note: either __alignof__() or sizeof() produces exactly
  the same mm/slub.o file, so there is no functional difference.
Thanks!
 mm/kasan/kasan.h |  4 ++--
 mm/slub.c        | 16 +++++++++++-----
 2 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 129178be5e64..b86b6e9f456a 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -265,7 +265,7 @@ struct kasan_alloc_meta {
 	struct kasan_track alloc_track;
 	/* Free track is stored in kasan_free_meta. */
 	depot_stack_handle_t aux_stack[2];
-};
+} __aligned(__alignof__(unsigned long));
 
 struct qlist_node {
 	struct qlist_node *next;
@@ -289,7 +289,7 @@ struct qlist_node {
 struct kasan_free_meta {
 	struct qlist_node quarantine_link;
 	struct kasan_track free_track;
-};
+} __aligned(__alignof__(unsigned long));
 
 #endif /* CONFIG_KASAN_GENERIC */
 
diff --git a/mm/slub.c b/mm/slub.c
index a585d0ac45d4..462a39d57b3a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -344,7 +344,7 @@ struct track {
 	int cpu;		/* Was running on cpu */
 	int pid;		/* Pid context */
 	unsigned long when;	/* When did the operation occur */
-};
+} __aligned(__alignof__(unsigned long));
 
 enum track_item { TRACK_ALLOC, TRACK_FREE };
 
@@ -1196,7 +1196,7 @@ static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p)
 		off += 2 * sizeof(struct track);
 
 	if (slub_debug_orig_size(s))
-		off += sizeof(unsigned int);
+		off += ALIGN(sizeof(unsigned int), __alignof__(unsigned long));
 
 	off += kasan_metadata_size(s, false);
 
@@ -1392,7 +1392,8 @@ static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
 		off += 2 * sizeof(struct track);
 
 		if (s->flags & SLAB_KMALLOC)
-			off += sizeof(unsigned int);
+			off += ALIGN(sizeof(unsigned int),
+				     __alignof__(unsigned long));
 	}
 
 	off += kasan_metadata_size(s, false);
@@ -7820,9 +7821,14 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)
 		 */
 		size += 2 * sizeof(struct track);
 
-		/* Save the original kmalloc request size */
+		/*
+		 * Save the original kmalloc request size.
+		 * Although the request size is an unsigned int,
+		 * make sure that is aligned to word boundary.
+		 */
 		if (flags & SLAB_KMALLOC)
-			size += sizeof(unsigned int);
+			size += ALIGN(sizeof(unsigned int),
+				      __alignof__(unsigned long));
 	}
 #endif
On Mon, Oct 27, 2025 at 09:00:28PM +0900, Harry Yoo wrote:
> When the SLAB_STORE_USER debug flag is used, any metadata placed after
> the original kmalloc request size (orig_size) is not properly aligned
> on 64-bit architectures because its type is unsigned int. When both
> KASAN and SLAB_STORE_USER are enabled, kasan_alloc_meta is misaligned.
> 
> Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
> are assumed to require 64-bit accesses to be 64-bit aligned. See
> HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert: "ring-buffer:
> Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
> 
> Because not all architectures support unaligned memory accesses, ensure
> that all metadata (track, orig_size, kasan_{alloc,free}_meta) in a slab
> object are word-aligned. struct track, kasan_{alloc,free}_meta are
> aligned by adding __aligned(__alignof__(unsigned long)).
> 
> For orig_size, use ALIGN(sizeof(unsigned int), sizeof(unsigned long)) to
  ^ Uh, here I intended to say: __alignof__(unsigned long)
> make clear that its size remains unsigned int but it must be aligned to
> a word boundary. On 64-bit architectures, this reserves 8 bytes for
> orig_size, which is acceptable since kmalloc's original request size
> tracking is intended for debugging rather than production use.