Hi Mark Brown,
On Thu, Jul 10, 2025 at 07:22:49PM +0800, wang lian wrote:
Add tests for process_madvise(), focusing on verifying behavior under various conditions including valid usage and error cases.
--- a/tools/testing/selftests/mm/guard-regions.c
+++ b/tools/testing/selftests/mm/guard-regions.c
-static void handle_fatal(int c)
-{
-	if (!signal_jump_set)
-		return;
-
-	siglongjmp(signal_jmp_buf, c);
-}
I see from looking later in the patch that you're factoring this out of the guard regions test into vm_util.c so that it can be used by your new test. This is good and sensible, but it's a bit surprising, especially since your changelog only said you were adding a new test. It would be better to split this out into a separate refactoring patch that just does the code motion; as covered in submitting-patches.rst, it's better if each change does just one thing.
Thanks for the suggestion. I'll split this out into a separate patch that just moves the helper to vm_util.c, and follow up with the new test in a second patch.
+#include <linux/pidfd.h>
+#include <linux/uio.h>
Does this work without 'make headers_install' for the systems that were affected by missing headers? Lorenzo mentioned that we shouldn't depend on that for the mm tests (I'm not enthusiastic about that approach myself, but if it's what mm needs).
You're right, and I've seen build issues due to that as well. I'll drop <linux/pidfd.h> and define PIDFD_SELF locally to avoid requiring installed headers.
-	ret = read(pipe_info[0], &info, sizeof(info));
-	if (ret <= 0) {
-		waitpid(self->child_pid, NULL, 0);
-		ksft_exit_skip("Failed to read child info from pipe.\n");
-	}
If you're using the harness you should use SKIP() rather than the ksft_ APIs for reporting test results. Don't mix and match the result reporting APIs; the harness will call the ksft_ APIs appropriately for you.
Understood. I'll convert this and other cases to use SKIP() and ensure the test consistently uses the test harness macros.
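As a sketch, the conversion might look something like this (SKIP() from kselftest_harness.h takes the statement to run after recording the skip, typically `return`):

```diff
 	ret = read(pipe_info[0], &info, sizeof(info));
 	if (ret <= 0) {
 		waitpid(self->child_pid, NULL, 0);
-		ksft_exit_skip("Failed to read child info from pipe.\n");
+		/* Let the harness record and report the skip. */
+		SKIP(return, "Failed to read child info from pipe.");
 	}
```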
if (errno == EAGAIN) {
ksft_test_result_skip(
"THP is 'always', process_madvise returned EAGAIN due to an expected race with khugepaged.\n");
} else {
ksft_test_result_fail(
"process_madvise failed with unexpected errno %d in 'always' mode.\n",
errno);
}
Similarly, to fail use an ASSERT or EXPECT. Note also that when using the ksft_ API for reporting results, each test should report a consistent test name as the string; if you want to report an error message, print it separately from the test result.
I'll revise this to use ASSERT/EXPECT, and separate error output from test result strings, as you suggested.
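Concretely, the block above might become something along these lines (a sketch, not the final patch; TH_LOG() keeps the diagnostic out of the result line, and the harness reports pass/fail/skip itself):

```diff
-	if (errno == EAGAIN) {
-		ksft_test_result_skip(
-			"THP is 'always', process_madvise returned EAGAIN due to an expected race with khugepaged.\n");
-	} else {
-		ksft_test_result_fail(
-			"process_madvise failed with unexpected errno %d in 'always' mode.\n",
-			errno);
-	}
+	if (errno == EAGAIN)
+		SKIP(return,
+		     "process_madvise hit an expected race with khugepaged");
+	/* Log the detail separately, then let the harness fail the test. */
+	TH_LOG("process_madvise failed with errno %d in 'always' mode", errno);
+	ASSERT_EQ(errno, EAGAIN);
```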
- Test process_madvise() with various invalid pidfds to ensure correct
- error handling. This includes negative fds, non-pidfd fds, and pidfds for
- processes that no longer exist.
This sounds like it should be a series of small tests rather than a single omnibus test; that'd result in clearer error reporting from test frameworks, since they will say which operation failed directly rather than having to look at the logs and then match them to the source.
That makes sense. I'll break this out into multiple smaller tests so each case reports independently.
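For example, along these lines (a sketch only; the test names and expected errnos here are illustrative, not from the patch):

```c
/* Each invalid-pidfd case as its own TEST(), so the framework names the
 * failing case directly. Bodies elided. */
TEST(process_madvise_negative_fd)
{
	/* pass fd = -1; expect process_madvise() to fail with EBADF */
}

TEST(process_madvise_non_pidfd_fd)
{
	/* pass an fd from open("/dev/null", O_RDONLY); expect EBADF */
}

TEST(process_madvise_dead_process_pidfd)
{
	/* pidfd of a child that has exited and been reaped; expect ESRCH */
}
```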
- pidfd = syscall(__NR_pidfd_open, child_pid, 0);
- ASSERT_GE(pidfd, 0);
This is particularly the case given the use of ASSERTs, we'll not report any issues other than the first one we hit.
Thanks, I'll switch to EXPECT_* where appropriate to allow multiple checks per test case.
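Roughly like this (do_madvise_case() is a hypothetical helper, just to show the shape):

```c
	/* A setup failure means nothing later can work, so ASSERT_* (which
	 * aborts the test) is still right here: */
	pidfd = syscall(__NR_pidfd_open, child_pid, 0);
	ASSERT_GE(pidfd, 0);

	/* Independent checks are better as EXPECT_*: a failure is recorded
	 * but the remaining checks still run and get reported. */
	EXPECT_EQ(do_madvise_case(pidfd, MADV_DONTNEED), 0);
	EXPECT_EQ(do_madvise_case(pidfd, MADV_COLD), 0);
```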
Thanks again for the detailed review!
Best regards,
Wang Lian