At 2025-08-18 11:43:14, "Harry Yoo" harry.yoo@oracle.com wrote:
On Mon, Aug 18, 2025 at 10:33:51AM +0800, yangshiguang wrote:
At 2025-08-18 10:22:36, "Harry Yoo" harry.yoo@oracle.com wrote:
On Mon, Aug 18, 2025 at 10:07:40AM +0800, yangshiguang wrote:
At 2025-08-16 18:46:12, "Harry Yoo" harry.yoo@oracle.com wrote:
On Sat, Aug 16, 2025 at 06:05:15PM +0800, yangshiguang wrote:
At 2025-08-16 16:25:25, "Harry Yoo" harry.yoo@oracle.com wrote:
>On Thu, Aug 14, 2025 at 07:16:42PM +0800, yangshiguang1011@163.com wrote:
>> From: yangshiguang yangshiguang@xiaomi.com
>>
>> set_track_prepare() can incur lock recursion.
>> The issue is that it is called from hrtimer_start_range_ns() while
>> holding the per_cpu(hrtimer_bases)[n].lock, but when
>> CONFIG_DEBUG_OBJECTS_TIMERS is enabled it may wake up kswapd in
>> set_track_prepare() and try to take the per_cpu(hrtimer_bases)[n].lock
>> again.
>>
>> So avoid waking up kswapd. The oops looks something like:
>
>Hi yangshiguang,
>
>In the next revision, could you please elaborate the commit message
>to reflect how this change avoids waking up kswapd?
>
Of course. Thanks for the reminder.
>> BUG: spinlock recursion on CPU#3, swapper/3/0
>>  lock: 0xffffff8a4bf29c80, .magic: dead4ead, .owner: swapper/3/0, .owner_cpu: 3
>> Hardware name: Qualcomm Technologies, Inc. Popsicle based on SM8850 (DT)
>> Call trace:
>>  spin_bug+0x0
>>  _raw_spin_lock_irqsave+0x80
>>  hrtimer_try_to_cancel+0x94
>>  task_contending+0x10c
>>  enqueue_dl_entity+0x2a4
>>  dl_server_start+0x74
>>  enqueue_task_fair+0x568
>>  enqueue_task+0xac
>>  do_activate_task+0x14c
>>  ttwu_do_activate+0xcc
>>  try_to_wake_up+0x6c8
>>  default_wake_function+0x20
>>  autoremove_wake_function+0x1c
>>  __wake_up+0xac
>>  wakeup_kswapd+0x19c
>>  wake_all_kswapds+0x78
>>  __alloc_pages_slowpath+0x1ac
>>  __alloc_pages_noprof+0x298
>>  stack_depot_save_flags+0x6b0
>>  stack_depot_save+0x14
>>  set_track_prepare+0x5c
>>  ___slab_alloc+0xccc
>>  __kmalloc_cache_noprof+0x470
>>  __set_page_owner+0x2bc
>>  post_alloc_hook[jt]+0x1b8
>>  prep_new_page+0x28
>>  get_page_from_freelist+0x1edc
>>  __alloc_pages_noprof+0x13c
>>  alloc_slab_page+0x244
>>  allocate_slab+0x7c
>>  ___slab_alloc+0x8e8
>>  kmem_cache_alloc_noprof+0x450
>>  debug_objects_fill_pool+0x22c
>>  debug_object_activate+0x40
>>  enqueue_hrtimer[jt]+0xdc
>>  hrtimer_start_range_ns+0x5f8
>>  ...
>>
>> Signed-off-by: yangshiguang yangshiguang@xiaomi.com
>> Fixes: 5cf909c553e9 ("mm/slub: use stackdepot to save stack trace in objects")
>> ---
>> v1 -> v2:
>>   propagate gfp flags to set_track_prepare()
>>
>> [1] https://urldefense.com/v3/__https://lore.kernel.org/all/20250801065121.87679...
>> ---
>>  mm/slub.c | 21 +++++++++++----------
>>  1 file changed, 11 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 30003763d224..dba905bf1e03 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -962,19 +962,20 @@ static struct track *get_track(struct kmem_cache *s, void *object,
>>  }
>>
>>  #ifdef CONFIG_STACKDEPOT
>> -static noinline depot_stack_handle_t set_track_prepare(void)
>> +static noinline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags)
>>  {
>>  	depot_stack_handle_t handle;
>>  	unsigned long entries[TRACK_ADDRS_COUNT];
>>  	unsigned int nr_entries;
>> +	gfp_flags &= GFP_NOWAIT;
>
>Is there any reason to downgrade it to GFP_NOWAIT when the gfp flag allows
>direct reclamation?
>
Hi Harry,
The original allocation is GFP_NOWAIT. So I think it's better not to increase the allocation cost here.
I don't think the allocation cost is important here, because collecting a stack trace for each alloc/free is quite slow anyway. And we don't really care about performance in debug caches (it isn't designed to be performant).
I think it was GFP_NOWAIT because it was considered safe without regard to the GFP flags passed, rather than due to performance considerations.
Hi Harry,

Is that so? Then how about this:

	gfp_flags &= (GFP_NOWAIT | __GFP_DIRECT_RECLAIM);
This still clears gfp flags passed by the caller to the allocator. Why not use gfp_flags directly without clearing some flags?
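
To be clear about the caller side, here is a rough, untested sketch of the plumbing I have in mind, based on what your v2 already does: set_track() grows a gfp_t parameter and simply forwards the allocation's flags, with the exact call-site changes possibly differing.

static __always_inline void set_track(struct kmem_cache *s, void *object,
				      enum track_item alloc, unsigned long addr,
				      gfp_t gfp_flags)
{
	/* Forward the caller's gfp flags unchanged to the stack depot path. */
	depot_stack_handle_t handle = set_track_prepare(gfp_flags);

	set_track_update(s, object, alloc, addr, handle);
}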
Hi Harry,
This introduces new problems.
call stack:
 dump_backtrace+0xfc/0x17c
 show_stack+0x18/0x28
 dump_stack_lvl+0x40/0xc0
 dump_stack+0x18/0x24
 __might_resched+0x164/0x184
 __might_sleep+0x38/0x84
 prepare_alloc_pages+0xc0/0x17c
 __alloc_pages_noprof+0x130/0x3f8
 stack_depot_save_flags+0x5a8/0x6bc
 stack_depot_save+0x14/0x24
 set_track_prepare+0x64/0x90
 ___slab_alloc+0xc14/0xc48
 __kmalloc_cache_noprof+0x398/0x568
 __kthread_create_on_node+0x8c/0x1f0
 kthread_create_on_node+0x4c/0x74
 create_worker+0xe0/0x298
 workqueue_init+0x228/0x324
 kernel_init_freeable+0x124/0x1c8
 kernel_init+0x20/0x1ac
 ret_from_fork+0x10/0x20
Ok, because preemption is disabled in ___slab_alloc(), blocking allocations are not allowed even when gfp_flags allows it. So __GFP_DIRECT_RECLAIM should be cleared.
So,
	/* Preemption is disabled in ___slab_alloc() */
	gfp_flags &= ~(__GFP_DIRECT_RECLAIM);
should work?
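
For concreteness, an untested sketch of the whole helper with that masking applied; the body otherwise follows the existing set_track_prepare():

#ifdef CONFIG_STACKDEPOT
static noinline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags)
{
	depot_stack_handle_t handle;
	unsigned long entries[TRACK_ADDRS_COUNT];
	unsigned int nr_entries;

	/* Preemption is disabled in ___slab_alloc(), so never block here. */
	gfp_flags &= ~__GFP_DIRECT_RECLAIM;

	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3);
	handle = stack_depot_save(entries, nr_entries, gfp_flags);

	return handle;
}
#endif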
I will give feedback after testing ASAP.
Of course there are other problems, so it is best to limit the gfp flags.
--
Cheers,
Harry / Hyeonggon