On Wed 2023-01-18 16:12:05, Petr Pavlu wrote:
On 1/18/23 01:04, Luis Chamberlain wrote:
On Tue, Dec 13, 2022 at 11:17:42AM +0100, Petr Mladek wrote:
On Mon 2022-12-12 21:09:19, Luis Chamberlain wrote:
- *Fixing* a kernel regression by adding a new expected API for testing
against -EBUSY seems less than ideal.
IMHO, the right solution is to fix the subsystems so that they send only one uevent.
Makes sense, but that can take time, and some folks are stuck on old kernels; porting fixes for this across subsystems may take a while to land in enterprise kernels. And then there is also systemd, which issues the requests too; at least that was reflected in commit 6e6de3dee51a ("kernel/module.c: Only return -EEXIST for modules that have finished loading"). That commit claims it was systemd issuing the requests, which I interpret as calling finit_module() directly, not calling modprobe.
The rationale for fixing the regression with a new userspace return value is fair, given that the old fix made things even worse, to the point that some kernel boots would fail. So the rationale that we *must* short-circuit parallel loads as effectively as possible seems sensible, *iff* that cannot make things worse too. But sadly I've proactively found an issue with this fix, or at least an issue that it also does not address:
./tools/testing/selftests/kmod/kmod.sh -t 0006
Tue Jan 17 23:18:13 UTC 2023
Running test: kmod_test_0006 - run #0
kmod_test_0006: OK! - loading kmod test
kmod_test_0006: FAIL, test expects SUCCESS (0) - got -EINVAL (-22)
Custom trigger configuration for: test_kmod0
Number of threads: 50
Test_case: TEST_KMOD_FS_TYPE (2)
driver: test_module
fs: xfs
Test completed
When can multiple get_fs_type() calls be issued on a system? When mounting a large number of filesystems.
This patch should not change the behavior that much. The parallel load still waits until the pending one finishes.
The difference is that it does not try to load it once again. But when the first load fails then something is broken anyway.
I could imagine this causing a regression from the user's POV when the first failure is not important for some reason. But that would mean things were working only by chance, and the problem might hit the user later anyway.
Another difference is that the parallel load also finishes immediately when it sees a going module. But that should not much affect the multiple get_fs_type() case, since those calls are all trying to load the module.
Sadly though, this issue seems to have gone unnoticed for a while now. Even reverting commit 6e6de3dee51a doesn't fix it, and I've run into issues while trying to bisect: first due to missing Kees' patch which fixes a compiler failure on older kernels [0], and now I'm seeing this while trying to build v5.1:
ld: arch/x86/boot/compressed/pgtable_64.o:(.bss+0x0): multiple definition of `__force_order'; arch/x86/boot/compressed/kaslr_64.o:(.bss+0x0): first defined here
ld: warning: arch/x86/boot/compressed/efi_thunk_64.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
ld: arch/x86/boot/compressed/head_64.o: warning: relocation in read-only section `.head.text'
ld: warning: arch/x86/boot/compressed/vmlinux has a LOAD segment with RWX permissions
ld: warning: creating DT_TEXTREL in a PIE
make[2]: *** [arch/x86/boot/compressed/Makefile:118: arch/x86/boot/compressed/vmlinux] Error 1
make[1]: *** [arch/x86/boot/Makefile:112: arch/x86/boot/compressed/vmlinux] Error 2
make: *** [arch/x86/Makefile:283: bzImage] Error 2
[0] http://lore.kernel.org/lkml/20220213182443.4037039-1-keescook@chromium.org
But we should try to bisect to see what caused the above kmod test 0006 to start failing.
It is not clear to me from your description if the observed failure of kmod_test_0006 is related to the fix in this thread.
I was not able to reproduce the problem on my system. My test was on an 8-CPU x86_64 machine running v6.2-rc4 with "defconfig + kvm_guest.config + tools/testing/selftests/kmod/config".
I can't reproduce it either. I guess that it needs some "good" timing, probably more CPUs or so.
I wonder if it races with modprobe -r, which removes the module before the test tries to load it multiple times in parallel.
Does the test pass when you add a sleep after the modprobe -r, like this:
diff --git a/tools/testing/selftests/kmod/kmod.sh b/tools/testing/selftests/kmod/kmod.sh
index 7189715d7960..8a020f90a3f6 100755
--- a/tools/testing/selftests/kmod/kmod.sh
+++ b/tools/testing/selftests/kmod/kmod.sh
@@ -322,6 +322,7 @@ kmod_defaults_fs()
 {
 	config_reset
 	modprobe -r $DEFAULT_KMOD_FS
+	sleep 1
 	config_set_fs $DEFAULT_KMOD_FS
 	config_set_test_case_fs
 }
Could you perhaps trace the test to determine where the EINVAL value comes from?
Yes, the -EINVAL error is strange. It is returned in a few locations in kernel/module/main.c as well, but none of them looks like a good candidate.
My assumption is that it is more likely returned by the module_init() callback from the loaded module. But it is just a guess and I might be wrong.
I wonder if it is caused by a delayed release of some resources when the module is removed, e.g. sysfs or so. It might theoretically cause a conflict when they still exist and the reloaded module tries to create them again.
Best Regards, Petr