Hi Mark,
On Fri, Oct 18, 2024 at 2:05 PM Mark Brown <broonie@kernel.org> wrote:
On Fri, Oct 18, 2024 at 12:32:37PM -0700, Jeff Xu wrote:
On Fri, Oct 18, 2024 at 11:37 AM Mark Brown <broonie@kernel.org> wrote:
On Fri, Oct 18, 2024 at 11:06:20AM -0700, Jeff Xu wrote:
Test 106 here is called "test_munmap_free_multiple_ranges_with_split: line:2573" which automated systems aren't going to be able to associate with the passing "test_munmap_free_multiple_ranges_with_split", nor with any failures that occur on any other lines in the function.
I see. That will happen when those tests are modified and the line numbers change. I can see the reasoning for this argument, especially when those tests are flaky and get updated often.
In practice, I hope any of those kernel selftest failures get fixed immediately, or even better, that the tests are run before a developer submits a patch that affects the mm area.
That's not the entire issue - it is also a problem that the test name is not the same between passes and failures so automated systems can't associate the failures with the passes.
I failed to understand this part. Maybe you meant that the failure logging is not the same across multiple versions of the test code, i.e. by "test name" you meant the failure logging?
When a test starts failing they will see the passing test disappear and a new test appear that has never worked. This means that, for example, bisection support or a UI showing when a test started regressing won't work. The test name needs to be stable; diagnostics identifying why or where it failed should be separate prints.
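(For illustration, a rough sketch assuming the plain kselftest.h helpers the test already uses; the location goes in a separate diagnostic line and the reported name stays fixed:)

/* diagnostic is a separate print; the reported test name never changes */
if (ret != 0)
	ksft_print_msg("munmap failed at line %d: %s\n",
		       __LINE__, strerror(errno));
ksft_test_result(ret == 0,
		 "test_munmap_free_multiple_ranges_with_split\n");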
If the test hasn't been changed for a while and starts failing, then it is quite easy to run the same test on recent code changes. I think you might agree with me on this. The only thing that bisection needs to check is whether the entire test passes or fails.
I haven't used the bisect functionality, so I'm not sure how it works exactly, e.g. when it runs on an old version of the kernel, does it use the test binary from that old kernel, or the test binary provided by the developer?
Actually, prompted by the comments below about test variants, I've now run the test and see that what's in -next is also broken in that it's running a lot of the tests twice, with sealing enabled or disabled, but not including this in the reported test name, resulting in most of the tests reporting like this:
ok 11 test_seal_mprotect
ok 12 test_seal_mprotect
which is also going to confuse automated systems; they have a hard time working out which instance is which (generally the test numbers get ignored between runs as they're not at all stable). The test names need to roll in the parameterisation:
ok 11 test_seal_mprotect seal=true
ok 12 test_seal_mprotect seal=false
(or something, the specific format doesn't matter so long as the names are both stable and distinct).
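(As a rough sketch, assuming the ksft_test_result() helper from kselftest.h, with "seal" standing in for whatever flag the test is parameterised on:)

/* roll the parameter into the reported name: stable and distinct */
ksft_test_result(ret == 0, "test_seal_mprotect seal=%s\n",
		 seal ? "true" : "false");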
Yes, I agree that this is a limitation of this macro.
Having the line number does help a developer go to the error directly, and I'm not against filling in the "action" field, but you might also agree that finding unique text for each error would require a decent amount of time, especially for large tests such as mseal_test.
In these situations, if it's a typical Unix API function setting errno that failed, I tend to end up writing diagnostics like:
ksft_perror("open()")
possibly with some of the arguments included as well, or something equivalently basic for other kinds of error. This is fairly mindless so quick and easy to do and more robust against line number slips if you're not looking at exactly the same version of the code, sometimes it's even enough you don't even need to look at the test to understand why it's upset.
I understand what you are saying, but personally I still think line numbers are a faster and more direct way to locate a failure.
Honestly this just sounds and looks like kselftest_harness.h; its ASSERT_ and EXPECT_ macros sound exactly like what you're looking for for asserts. The main gotchas with it are that it's not particularly
OK, I didn't know that ASSERT_ and EXPECT_ were part of the test fixture.
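(For my own reference, a minimal sketch of those asserts, assuming kselftest_harness.h; the test name here is made up:)

#include <sys/mman.h>
#include "../kselftest_harness.h"

TEST(mmap_basic)
{
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	/* ASSERT_* stops this test on failure */
	ASSERT_NE(MAP_FAILED, p);
	/* EXPECT_* records the failure and continues */
	EXPECT_EQ(0, munmap(p, 4096));
}

TEST_HARNESS_MAIN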
If I switch to the test fixture, e.g. using TEST(test_name),
how do I pass the "seal" flag to it? E.g. how do I run the same test twice, first with seal=true and second with seal=false?
test_seal_mmap_shrink(false);
test_seal_mmap_shrink(true);
That looks like fixture variants to me, using those with kselftest_harness.h will also fix the problem with duplicate test names being used since it generates different names for each instance of the test. Something like:
FIXTURE(with_seal) { };
FIXTURE_VARIANT(with_seal) { bool seal; };
FIXTURE_VARIANT_ADD(with_seal, yes) { .seal = true, };
FIXTURE_VARIANT_ADD(with_seal, no) { .seal = false, };
FIXTURE_SETUP(with_seal) { }
FIXTURE_TEARDOWN(with_seal) { }
then a bunch of tests using that fixture:
TEST_F(with_seal, test_seal_mmap_shrink)
{
	if (variant->seal) {
		/* setup sealing */
	}

	...
}

TEST_F(with_seal, test_seal_mprotect)
{
	if (variant->seal) {
		/* setup sealing */
	}

	...
}
You don't need to actually set up anything in your fixture, but you do need to have setup and teardown functions so the framework can emit required boilerplate. The gcs-locking.c test I recently added in -next is an example of a similar thing where we just need the variants; there's no actual fixture.
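(If it helps, my understanding is that the harness then reports each instance with the fixture and variant rolled into the name, so the output ends up looking something like:

ok 1 with_seal.yes.test_seal_mmap_shrink
ok 2 with_seal.no.test_seal_mmap_shrink
)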
Thanks! This is really helpful; I think the existing mseal_test can be quickly converted using this example.
(A side note: if the selftest documentation is updated to include this example, it will be much easier for future developers to follow.)
Thanks
-Jeff