Hi Mark and Muhammad,
On Fri, Oct 18, 2024 at 6:04 AM Mark Brown <broonie@kernel.org> wrote:
On Thu, Oct 17, 2024 at 12:49:40PM -0700, Jeff Xu wrote:
So it is not a problem with the macro itself, but with where it is used?
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
Take this example: it would be assert(!ret).
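For context, the macro under discussion has roughly this shape (reconstructed from the snippet Mark quotes further down; the exact definition in the mseal selftest may differ in detail):

#define FAIL_TEST_IF_FALSE(test_passed)					\
	do {								\
		if (!(test_passed)) {					\
			ksft_test_result_fail("%s: line:%d\n",		\
					      __func__, __LINE__);	\
			return;						\
		}							\
	} while (0)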
The problem is that the macro name is confusing and not terribly consistent with how the rest of the selftests work. The standard kselftest result reporting is
ksft_test_result(bool result, char *name_format, ...);
so the result of the test is a boolean flag which is passed in. This macro, on the other hand, sounds like a double negative, so you have to stop and think what the logic is, and it's not seen anywhere else, so nobody is going to be familiar with it. The main thing this is doing is burying a return statement in there, which is a bit surprising too.
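For comparison, the same check written in the conventional style would look roughly like this ("mseal_basic" is just a placeholder test name):

	ret = sys_mseal(ptr, size);
	/* boolean result in, stable test name out - no hidden return */
	ksft_test_result(!ret, "mseal_basic\n");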
Thanks for explaining the problem; naming is hard. Do you have a suggestion for a better name?
I'll also note that these macros are resulting in broken kselftest output. The name for a test has to be stable for automated systems to be able to associate test results between runs, but these print
ksft_test_result_fail("%s: line:%d\n", \
		      __func__, __LINE__); \
return; \
which includes the line number of the failure in the test name. That is an obvious problem: automated systems won't be able to tell that any two failures are related to each other, never mind relate them to the passing test. We should report why things failed, but it's better to do that with a ksft_print_msg(), ideally one that's directly readable rather than requiring someone to go into the source code and look it up.
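Sketched out, that would mean keeping the name fixed and moving the detail into the message (again with a placeholder test name):

	ret = sys_mseal(ptr, size);
	if (ret) {
		/* diagnostic only; the test name below stays stable */
		ksft_print_msg("mseal() returned %d, expected 0\n", ret);
		ksft_test_result_fail("mseal_basic\n");
		return;
	}
	ksft_test_result_pass("mseal_basic\n");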
I don't understand the issue you described. Are you saying that we are missing line numbers? We are not; here is a sample of the output, with a failure in the second test case from the end:
ok 105 test_munmap_free_multiple_ranges
not ok 106 test_munmap_free_multiple_ranges_with_split: line:2573
ok 107 test_munmap_free_multiple_ranges_with_split
# Planned tests != run tests (106 != 107)
A more standard way to write what you've got here would be to have the tests return a bool, then have a runner loop which iterates over the tests:
struct {
	char *name;
	bool (*func)(void);
} tests[];

...

for (i = 0; i < ARRAY_SIZE(tests); i++)
	ksft_test_result(tests[i].func(), tests[i].name);
then the tests can just have explicit return statements and don't need to worry about logging anything other than diagnostics.
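Fleshed out a bit, and assuming the existing tests can be reworked to return a bool (function and test names here are placeholders, and sys_mseal() is the test's existing syscall wrapper), that might look like:

#include <sys/mman.h>

#include "../kselftest.h"

static bool test_seal_basic(void)
{
	void *ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	if (ptr == MAP_FAILED)
		return false;

	return sys_mseal(ptr, 4096) == 0;
}

static const struct {
	const char *name;
	bool (*func)(void);
} tests[] = {
	{ "test_seal_basic", test_seal_basic },
	/* ... */
};

int main(void)
{
	int i;

	ksft_print_header();
	ksft_set_plan(ARRAY_SIZE(tests));

	/* one stable name per test; diagnostics go through ksft_print_msg() */
	for (i = 0; i < ARRAY_SIZE(tests); i++)
		ksft_test_result(tests[i].func(), "%s\n", tests[i].name);

	ksft_finished();
}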
Depending on how much you need to share between tests, you might also be able to use kselftest_harness.h, which fork()s each test into a separate child and allows you to just fault to fail if that's easier.
We are writing unit tests in a test framework; let's use well-established industry practices, please.
Plus, there's also the fact that we have a framework here...
Also note that you don't even need to reinvent the wheel: there is a fully-featured test harness available in tools/testing/selftests/kselftest_harness.h, with both ASSERT_xxx() and EXPECT_xxx() helpers.
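For illustration, a minimal harness-based mseal test might look like the sketch below. The sys_mseal() wrapper mirrors the one in the existing test and assumes headers that define __NR_mseal; the test body itself is just an example. Each TEST() runs in its own fork()ed child, so a crash in one case only fails that case.

#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#include "../kselftest_harness.h"

/* assumed thin wrapper, mirroring the one in the existing test */
static int sys_mseal(void *start, size_t len)
{
	return syscall(__NR_mseal, start, len, 0);
}

TEST(mseal_basic)
{
	void *ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	ASSERT_NE(MAP_FAILED, ptr);		/* abort this test on failure */
	EXPECT_EQ(0, sys_mseal(ptr, 4096));	/* record failure, keep going */
}

TEST_HARNESS_MAIN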
The EXPECT_xxx() doesn't take care of reporting though, or maybe it does?
I rather think people would've noticed if the test harness was so broken that it was unable to report failures. If it is that broken we should fix it rather than open coding something else.
In general, I agree with those comments, but I would like to rely on domain experts in test infra to recommend what to use, or what is acceptable.
In this case, I hope Muhammad, who reviewed this code in the first place, can make recommendations on a replacement for this macro.
I would imagine the need for something similar to FAIL_TEST_IF_FALSE is common in selftest writing:
1> lightweight enough that a dev can pick it up quickly and adapt existing tests, instead of rewriting everything from scratch.
2> assert-like syntax.
3> fail the current test case, but continue running the next test case.
4> take care of reporting test failures (see the sketch after this list).
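Purely as an illustration of those four points (not an existing kselftest API), an assert-like helper could sit on top of the runner loop Mark described, something like:

/*
 * Hypothetical helper: assert-like syntax, fails only the current
 * test case (the bool return feeds the runner loop, which reports a
 * stable name and moves on to the next test), and prints the reason
 * as a diagnostic instead of embedding it in the test name.
 */
#define FAIL_IF_FALSE(cond)						\
	do {								\
		if (!(cond)) {						\
			ksft_print_msg("%s: line %d: '%s' failed\n",	\
				       __func__, __LINE__, #cond);	\
			return false;					\
		}							\
	} while (0)

static bool test_seal_basic(void)
{
	int ret = sys_mseal(ptr, size);	/* ptr/size set up elsewhere */

	FAIL_IF_FALSE(!ret);
	return true;
}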
Thanks -Jeff