On Tue, Oct 21, 2025 at 12:04 AM Vlastimil Babka <vbabka@suse.cz> wrote:
On 10/21/25 03:03, Hao Ge wrote:
From: Hao Ge <gehao@kylinos.cn>
If two competing threads enter alloc_slab_obj_exts() and one of them fails to allocate the object extension vector, it might overwrite the valid slab->obj_exts already allocated by the other thread with OBJEXTS_ALLOC_FAIL. This causes the thread that lost the race, which expects a valid pointer, to dereference a NULL pointer later on.

Update slab->obj_exts atomically using cmpxchg() so that racing threads cannot overwrite each other's slab->obj_exts.
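(Purely as an illustrative aside for readers following the race: the failing thread's plain store is a classic lost update, and the fix is a conditional publish. Below is a minimal user-space sketch using C11 atomics; obj_exts_demo and the function names are invented for the example, while the kernel code operates on slab->obj_exts with cmpxchg().)

#include <stdatomic.h>

#define OBJEXTS_ALLOC_FAIL 1UL

static _Atomic unsigned long obj_exts_demo;

/*
 * Buggy pattern: unconditionally overwrites obj_exts_demo, clobbering
 * a valid vector pointer a racing thread may have just published.
 */
static void mark_failed_buggy(void)
{
        atomic_store(&obj_exts_demo, OBJEXTS_ALLOC_FAIL);
}

/*
 * Fixed pattern: mark the failure only if nothing has been published
 * yet, leaving a concurrently installed valid pointer intact.
 */
static void mark_failed_fixed(void)
{
        unsigned long expected = 0;

        atomic_compare_exchange_strong(&obj_exts_demo, &expected,
                                       OBJEXTS_ALLOC_FAIL);
}

(The kernel's cmpxchg() plays the role that atomic_compare_exchange_strong() plays in this sketch.)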
Thanks to Vlastimil and Suren for their help with debugging.
Fixes: f7381b911640 ("slab: mark slab->obj_exts allocation failures unconditionally")
Cc: stable@vger.kernel.org
Suggested-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Hao Ge <gehao@kylinos.cn>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Thanks for the fix, Hao!
Added to slab/for-next-fixes, thanks!
v3: Per Suren's suggestion, simplified the commit message and the code comments. Thanks, Suren.
v2: Per Suren's suggestion, handle the scenario where mark_failed_objexts_alloc() wins the race and the thread that had successfully allocated the vector would otherwise lose its update. Add Suggested-by: Suren Baghdasaryan <surenb@google.com>.
 mm/slub.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 2e4340c75be2..d4403341c9df 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2054,7 +2054,7 @@ static inline void mark_objexts_empty(struct slabobj_ext *obj_exts)
 
 static inline void mark_failed_objexts_alloc(struct slab *slab)
 {
-	slab->obj_exts = OBJEXTS_ALLOC_FAIL;
+	cmpxchg(&slab->obj_exts, 0, OBJEXTS_ALLOC_FAIL);
 }
 
 static inline void handle_failed_objexts_alloc(unsigned long obj_exts,
@@ -2136,6 +2136,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
 #ifdef CONFIG_MEMCG
 	new_exts |= MEMCG_DATA_OBJEXTS;
 #endif
+retry:
 	old_exts = READ_ONCE(slab->obj_exts);
 	handle_failed_objexts_alloc(old_exts, vec, objects);
 	if (new_slab) {
@@ -2145,8 +2146,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
 		 * be simply assigned.
 		 */
 		slab->obj_exts = new_exts;
-	} else if ((old_exts & ~OBJEXTS_FLAGS_MASK) ||
-		   cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) {
+	} else if (old_exts & ~OBJEXTS_FLAGS_MASK) {
 		/*
 		 * If the slab is already in use, somebody can allocate and
 		 * assign slabobj_exts in parallel. In this case the existing
@@ -2158,6 +2158,9 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
 		else
 			kfree(vec);
 		return 0;
+	} else if (cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) {
+		/* Retry if a racing thread changed slab->obj_exts from under us. */
+		goto retry;
 	}
 
 	if (allow_spin)
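(Again as an illustrative aside, not the kernel code: the publishing side above follows the standard cmpxchg() retry loop. Here is a self-contained C11 sketch of that pattern; FLAGS_MASK, publish_exts() and obj_exts_demo are invented for the example.)

#include <stdatomic.h>
#include <stdbool.h>

#define FLAGS_MASK 3UL  /* low bits reserved for flags in this sketch */

static _Atomic unsigned long obj_exts_demo;

/*
 * Try to install new_exts. Returns true if we published our value,
 * false if a racing thread already installed a real vector and the
 * caller should discard its own allocation.
 */
static bool publish_exts(unsigned long new_exts)
{
        unsigned long old_exts;

retry:
        old_exts = atomic_load(&obj_exts_demo);
        if (old_exts & ~FLAGS_MASK)
                return false;   /* a racer already installed a vector */
        if (atomic_compare_exchange_strong(&obj_exts_demo, &old_exts,
                                           new_exts))
                return true;    /* our value is now published */
        /* A racer changed obj_exts_demo from under us; re-evaluate. */
        goto retry;
}

(The point of retrying rather than bailing out is that neither the "mark failed" writer nor the "publish vector" writer can ever silently lose the other's update.)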