On Fri, Oct 18, 2024 at 05:10:01PM -0700, Jeff Xu wrote:
On Fri, Oct 18, 2024 at 2:05 PM Mark Brown <broonie@kernel.org> wrote:
That's not the entire issue - it is also a problem that the test name is not the same between passes and failures so automated systems can't associate the failures with the passes.
I failed to understand this part. Maybe you meant that the failure logging is not the same across the multiple versions of the test code, i.e. by "test name" you meant the failure logging?
Tests are identified by the string given in the line reporting their result, that's not *really* a log message but rather a test name. The strings for a given test need to be the same between different runs of the test program for tooling to be able to see that it's the same test.
When a test starts failing they will see the passing test disappear and a new test appear that has never worked. This means that, for example, bisection support or UIs showing when a test started regressing won't work. The test name needs to be stable; diagnostics identifying why or where it failed should be separate prints.
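For example the KTAP output might look something like this (just a sketch, names invented), with the reason on a diagnostic line and the stable name on the result line:

# mseal_test: munmap() of sealed mapping unexpectedly succeeded
not ok 12 mseal.sealed.mmap_shrink
ok 13 mseal.unsealed.mmap_shrink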
If the test hasn't been changed for a while and starts failing, then
Well, you'd hope that the tests never start failing, and yet we still have tests and keep running them.
it is quite easy to run the same test on recent code changes. I think you might agree with me on this. The only thing that bisect needs to check is whether the test as a whole is failing or not.
Unfortunately we're not in a position where people can reliably assume that every test program will always work everywhere so people work on individual tests, and it's certainly useful for UIs to be able to give an overview of what specifically failed. A bunch of that is tests that just don't implement feature detection/skipping properly.
I haven't used the bisect functionality, so I'm not sure how it works exactly, e.g. when it runs on an old version of the kernel, does it use the test binary from the old kernel, or the test binary provided by the developer?
That's up to whoever is doing the testing, but I think most people run the selftests from the version of the code they're testing. Some of the subsystems aren't very enthusiastic about supporting running on older kernels.
how do I pass the "seal" flag to it? e.g. how do I run the same test twice, first with seal = true and second with seal = false.
test_seal_mmap_shrink(false);
test_seal_mmap_shrink(true);
That looks like fixture variants to me; using those with kselftest_harness.h will also fix the problem with duplicate test names being used, since it generates different names for each instance of the test. Something like:
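(A rough sketch along those lines rather than the gcs-locking hack itself; the fixture name, the include path and the mseal()-based shrink check are only illustrative assumptions.)

/*
 * Illustrative sketch only: a fixture with a "seal" variant flag.
 * The __NR_mseal fallback and the partial-munmap shrink check are
 * assumptions, not taken from the original example.
 */
#include <stdbool.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#include "../kselftest_harness.h"

#ifndef __NR_mseal
#define __NR_mseal 462
#endif

FIXTURE(mmap_shrink)
{
	void *map;
	size_t size;
};

FIXTURE_VARIANT(mmap_shrink)
{
	bool seal;
};

/* Each variant becomes a separately named instance of every TEST_F(). */
FIXTURE_VARIANT_ADD(mmap_shrink, unsealed)
{
	.seal = false,
};

FIXTURE_VARIANT_ADD(mmap_shrink, sealed)
{
	.seal = true,
};

FIXTURE_SETUP(mmap_shrink)
{
	self->size = 2 * getpagesize();
	self->map = mmap(NULL, self->size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	ASSERT_NE(MAP_FAILED, self->map);

	if (variant->seal)
		ASSERT_EQ(0, syscall(__NR_mseal, self->map, self->size, 0));
}

FIXTURE_TEARDOWN(mmap_shrink)
{
	/* Fails for the sealed variant, which is fine at exit. */
	munmap(self->map, self->size);
}

TEST_F(mmap_shrink, shrink)
{
	/* Shrink by unmapping the first page of the mapping. */
	int ret = munmap(self->map, getpagesize());

	if (variant->seal)
		EXPECT_NE(0, ret);
	else
		EXPECT_EQ(0, ret);
}

TEST_HARNESS_MAIN

The harness then reports each instance under its own stable name, e.g. mmap_shrink.unsealed.shrink and mmap_shrink.sealed.shrink.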
Thanks! This is really helpful; I think the existing mseal_test can be quickly converted using this example.
Great!
(A side note: if the selftest documentation is updated to include this example, it will be much easier for future developers to follow.)
Possibly send a patch adding that wherever you were looking? That was just a quick hack down of the gcs-locking program verifying that it had what I thought you needed.