On 9/3/24 20:36, Liam R. Howlett wrote:
- Guenter Roeck <linux@roeck-us.net> [240903 22:38]:
On 9/3/24 19:31, Liam R. Howlett wrote:
- SeongJae Park <sj@kernel.org> [240903 21:18]:
On Tue, 3 Sep 2024 17:58:15 -0700 SeongJae Park <sj@kernel.org> wrote:
On Tue, 3 Sep 2024 20:48:53 -0400 "Liam R. Howlett" <Liam.Howlett@oracle.com> wrote:
- SeongJae Park <sj@kernel.org> [240903 20:45]:
> damon_test_three_regions_in_vmas() initializes a maple tree with
> MM_MT_FLAGS. The flags contain MT_FLAGS_LOCK_EXTERN, which means
> mt_lock of the maple tree will not be used, and therefore the maple
> tree initialization code skips initialization of the mt_lock. However,
> __link_vmas(), which adds vmas for the test to the maple tree, uses
> the mt_lock. In other words, the uninitialized spinlock is used. The
> problem becomes clear when spinlock debugging is turned on, since it
> reports a spinlock bad magic bug. Fix the issue by not using the
> mt_lock, as promised.
You can't do this, lockdep will tell you this is wrong.
Hmm, but lockdep was silent on my setup?
We need a lock and to use the lock for writes.
This code is executed by single-threaded test code. Do we still need the lock?
I'd suggest using different flags so the spinlock is used.
The reporter mentioned that simply dropping MT_FLAGS_LOCK_EXTERN from the flags causes a suspicious RCU usage message. May I ask if you have a suggestion for better flags?
That would be the lockdep complaining, so that's good.
I was actually thinking of replacing the mt_init_flags() with mt_init(), which is the same as mt_init_flags() with zero flags, like below.
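[Editor's note: the diff referenced by "like below" did not survive in this quote. Judging from the hunk shown later in the thread, it was presumably along these lines — this is a reconstruction, not the original:]

```diff
--- a/mm/damon/vaddr-test.h
+++ b/mm/damon/vaddr-test.h
@@ -77,7 +77,7 @@ static void damon_test_three_regions_in_vmas(struct kunit *test)
-	mt_init_flags(&mm.mm_mt, MM_MT_FLAGS);
+	mt_init(&mm.mm_mt);
```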
Yes. This will use the spinlock which should fix your issue, but it will use a different style of maple tree.
Perhaps use MT_FLAGS_ALLOC_RANGE to use the same type of maple tree; if you ever add threading, you will want the RCU flag as well (MT_FLAGS_USE_RCU).
I would recommend those two and just use the spinlock.
I tried that (MT_FLAGS_ALLOC_RANGE | MT_FLAGS_USE_RCU). It also triggers the suspicious RCU usage message.
I am running

  ./tools/testing/kunit/kunit.py run '*damon*' --arch x86_64 --raw

with:

  CONFIG_LOCKDEP=y
  CONFIG_DEBUG_SPINLOCK=y

and I don't have any issue with locking in the existing code. How do I recreate this issue?
I tested again, and I still see
[ 6.233483] ok 4 damon
[ 6.234190] KTAP version 1
[ 6.234263] # Subtest: damon-operations
[ 6.234335] # module: vaddr
[ 6.234384] 1..6
[ 6.235726]
[ 6.235931] =============================
[ 6.236018] WARNING: suspicious RCU usage
[ 6.236280] 6.11.0-rc6-00029-gda66250b210f-dirty #1 Tainted: G N
[ 6.236398] -----------------------------
[ 6.236474] lib/maple_tree.c:832 suspicious rcu_dereference_check() usage!
[ 6.236579]
[ 6.236579] other info that might help us debug this:
[ 6.236579]
[ 6.236738]
[ 6.236738] rcu_scheduler_active = 2, debug_locks = 1
[ 6.237039] no locks held by kunit_try_catch/208.
[ 6.237166]
[ 6.237166] stack backtrace:
[ 6.237385] CPU: 0 UID: 0 PID: 208 Comm: kunit_try_catch Tainted: G N 6.11.0-rc6-00029-gda66250b210f-dirty #1
[ 6.237629] Tainted: [N]=TEST
[ 6.237714] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 6.238065] Call Trace:
[ 6.238233]  <TASK>
[ 6.238547]  dump_stack_lvl+0x9e/0xe0
[ 6.239473]  lockdep_rcu_suspicious+0x145/0x1b0
[ 6.239621]  mas_walk+0x19f/0x1d0
[ 6.239765]  mas_find+0xb5/0x150
[ 6.239873]  __damon_va_three_regions+0x7e/0x130
[ 6.240039]  damon_test_three_regions_in_vmas+0x1ea/0x480
[ 6.240551]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[ 6.240712]  kunit_try_run_case+0x93/0x190
[ 6.240850]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[ 6.240990]  kunit_generic_run_threadfn_adapter+0x1c/0x40
[ 6.241124]  kthread+0xdd/0x110
[ 6.241256]  ? __pfx_kthread+0x10/0x10
[ 6.241368]  ret_from_fork+0x2f/0x50
[ 6.241468]  ? __pfx_kthread+0x10/0x10
[ 6.241573]  ret_from_fork_asm+0x1a/0x30
[ 6.241765]  </TASK>
[ 6.242180]
[ 6.242270] =============================
[ 6.242375] WARNING: suspicious RCU usage
[ 6.242478] 6.11.0-rc6-00029-gda66250b210f-dirty #1 Tainted: G N
[ 6.242634] -----------------------------
[ 6.242734] lib/maple_tree.c:788 suspicious rcu_dereference_check() usage!
[ 6.242955]
[ 6.242955] other info that might help us debug this:
[ 6.242955]
[ 6.243098]
[ 6.243098] rcu_scheduler_active = 2, debug_locks = 1
[ 6.243215] no locks held by kunit_try_catch/208.
[ 6.243331]
[ 6.243331] stack backtrace:
[ 6.243420] CPU: 0 UID: 0 PID: 208 Comm: kunit_try_catch Tainted: G N 6.11.0-rc6-00029-gda66250b210f-dirty #1
[ 6.243599] Tainted: [N]=TEST
[ 6.243665] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 6.243844] Call Trace:
[ 6.243907]  <TASK>
[ 6.243961]  dump_stack_lvl+0x9e/0xe0
[ 6.244032]  lockdep_rcu_suspicious+0x145/0x1b0
[ 6.244121]  mtree_range_walk+0x2b9/0x350
[ 6.244211]  mas_walk+0x107/0x1d0
[ 6.244278]  mas_find+0xb5/0x150
[ 6.244341]  __damon_va_three_regions+0x7e/0x130
[ 6.244445]  damon_test_three_regions_in_vmas+0x1ea/0x480
[ 6.244770]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[ 6.244865]  kunit_try_run_case+0x93/0x190
[ 6.244952]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[ 6.245044]  kunit_generic_run_threadfn_adapter+0x1c/0x40
[ 6.245130]  kthread+0xdd/0x110
[ 6.245191]  ? __pfx_kthread+0x10/0x10
[ 6.245262]  ret_from_fork+0x2f/0x50
[ 6.245326]  ? __pfx_kthread+0x10/0x10
[ 6.245394]  ret_from_fork_asm+0x1a/0x30
[ 6.245497]  </TASK>
[ 6.246605] # damon_test_three_regions_in_vmas: pass:1 fail:0 skip:0 total:1
[ 6.246668] ok 1 damon_test_three_regions_in_vmas
This is with
diff --git a/mm/damon/vaddr-test.h b/mm/damon/vaddr-test.h
index 83626483f82b..a339d117150f 100644
--- a/mm/damon/vaddr-test.h
+++ b/mm/damon/vaddr-test.h
@@ -77,7 +77,7 @@ static void damon_test_three_regions_in_vmas(struct kunit *test)
 		(struct vm_area_struct) {.vm_start = 307, .vm_end = 330},
 	};
 
-	mt_init_flags(&mm.mm_mt, MM_MT_FLAGS);
+	mt_init_flags(&mm.mm_mt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_USE_RCU);
 	if (__link_vmas(&mm.mm_mt, vmas, ARRAY_SIZE(vmas)))
 		kunit_skip(test, "Failed to create VMA tree");
I'll put it all together and make it available, but that will have to wait until tomorrow.
Thanks, Guenter