From: Andrey Konovalov <andreyknvl@google.com>
[ Upstream commit a71012242837fe5e67d8c999cfc357174ed5dba0 ]
With tag-based KASAN, page_address() looks at the page flags to see whether the resulting pointer needs to have a tag set. Since we don't want to set a tag when page_address() is called on SLAB pages, we call page_kasan_tag_reset() in kasan_poison_slab(). However, in allocate_slab() page_address() is called before kasan_poison_slab(). Fix it by changing the order of the two calls.
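To make the ordering issue concrete, here is a minimal stand-alone sketch. This is not the mm/slub.c code: struct page below, the tag shift, and the model_* helpers are simplified stand-ins for the kernel's page flags, the tagged page_to_virt() used by page_address(), and page_kasan_tag_reset(); they only illustrate why page_address() must run after kasan_poison_slab().

/*
 * Stand-alone userspace model of the ordering bug. Not kernel code;
 * names and layout are illustrative assumptions only.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define KASAN_TAG_KERNEL 0xffu          /* "match-all" tag used after reset */
#define KASAN_TAG_SHIFT  56             /* tag lives in the pointer's top byte */

struct page {
	void    *virt;                  /* backing memory for this model */
	uint8_t  kasan_tag;             /* kept in page->flags in the kernel */
};

/* Model of page_address(): folds the page's stored tag into the pointer. */
static void *model_page_address(const struct page *page)
{
	uintptr_t addr = (uintptr_t)page->virt;

	addr &= ~((uintptr_t)0xff << KASAN_TAG_SHIFT);
	addr |= (uintptr_t)page->kasan_tag << KASAN_TAG_SHIFT;
	return (void *)addr;
}

/* Model of kasan_poison_slab(): resets the page tag (page_kasan_tag_reset()). */
static void model_kasan_poison_slab(struct page *page)
{
	page->kasan_tag = KASAN_TAG_KERNEL;
	/* the real hook also poisons the slab's shadow memory */
}

int main(void)
{
	struct page page = { .virt = malloc(64), .kasan_tag = 0x2a };
	void *start;

	/* Old (buggy) order: the per-page tag 0x2a leaks into 'start'. */
	start = model_page_address(&page);
	printf("page_address() before kasan_poison_slab(): %p\n", start);

	/* New order, as in this patch: reset the tag first, then take the address. */
	model_kasan_poison_slab(&page);
	start = model_page_address(&page);
	printf("page_address() after  kasan_poison_slab(): %p\n", start);

	free(page.virt);
	return 0;
}

With the old order the stale per-page tag ends up in the slab's start pointer; with the new order the tag has already been reset to the match-all value by the time the address is taken.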
[andreyknvl@google.com: fix compilation error when CONFIG_SLUB_DEBUG=n]
Link: http://lkml.kernel.org/r/ac27cc0bbaeb414ed77bcd6671a877cf3546d56e.1550066133...
Link: http://lkml.kernel.org/r/cd895d627465a3f1c712647072d17f10883be2a1.1549921721...
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgeniy Stepanov <eugenis@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Qian Cai <cai@lca.pw>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/slub.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 220d42e592ef..f14ef59c9e57 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1087,6 +1087,16 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page,
 	init_tracking(s, object);
 }
 
+static void setup_page_debug(struct kmem_cache *s, void *addr, int order)
+{
+	if (!(s->flags & SLAB_POISON))
+		return;
+
+	metadata_access_enable();
+	memset(addr, POISON_INUSE, PAGE_SIZE << order);
+	metadata_access_disable();
+}
+
 static inline int alloc_consistency_checks(struct kmem_cache *s,
 					struct page *page,
 					void *object, unsigned long addr)
@@ -1304,6 +1314,8 @@ unsigned long kmem_cache_flags(unsigned long object_size,
 #else /* !CONFIG_SLUB_DEBUG */
 static inline void setup_object_debug(struct kmem_cache *s,
 			struct page *page, void *object) {}
+static inline void setup_page_debug(struct kmem_cache *s,
+			void *addr, int order) {}
 
 static inline int alloc_debug_processing(struct kmem_cache *s,
 	struct page *page, void *object, unsigned long addr) { return 0; }
@@ -1599,12 +1611,11 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (page_is_pfmemalloc(page))
 		SetPageSlabPfmemalloc(page);
 
-	start = page_address(page);
+	kasan_poison_slab(page);
 
-	if (unlikely(s->flags & SLAB_POISON))
-		memset(start, POISON_INUSE, PAGE_SIZE << order);
+	start = page_address(page);
 
-	kasan_poison_slab(page);
+	setup_page_debug(s, start, order);
 
 	shuffle = shuffle_freelist(s, page);