Hi all,
for my talk at EOSS23 about object life-time issues[1], I created a loose set of tests checking some longstanding problems in the Linux kernel. I would like to improve these tests. Now I wonder where I could contribute them, because their scope seems different to me: they are not for regression testing, since I don't have a fix for most of them. Some fixes would mean rewriting the private data allocations for a whole subsystem and its drivers. The tests are rather meant to document known problems and to check whether someone has started working on them. But it seems that kselftest (and LTP too?) only accepts tests which do not fail by default. So the question is: is there another test collection project I could contribute these tests to? I'd be very happy about pointers; I started looking around, but to no avail...
Thanks and happy hacking,
Wolfram
On 8/23/23 07:45, Wolfram Sang wrote:
> Hi all,
> for my talk at EOSS23 about object life-time issues[1], I created a loose set of tests checking some longstanding problems in the Linux kernel. I would like to improve these tests. Now I wonder where I could contribute them, because their scope seems different to me: they are not for regression testing, since I don't have a fix for most of them. Some fixes would mean rewriting the private data allocations for a whole subsystem and its drivers. The tests are rather meant to document known problems and to check whether someone has started working on them. But it seems that kselftest (and LTP too?) only accepts tests which do not fail by default. So the question is: is there another test collection project I could contribute these tests to? I'd be very happy about pointers; I started looking around, but to no avail...
I don't have any good answers, but I did have a similar question a few years ago about expected build failures. At the time, I was working on a build script where I wanted to detect some unsupported situations and bail out. I had written tests to verify that the script was performing as expected, but from what I gathered, kselftests were always expected to succeed (and build).
Anyway maybe your question is less about mechanics (could you invert the result, i.e. failure is success?) and more about where to collect such tests?
Hi Joe,
thanks for your reply!
> Anyway maybe your question is less about mechanics (could you invert the result, i.e. failure is success?) and more about where to collect such tests?
It is the latter, exactly. A failure is a failure and should be marked red. But where do we put tests which we know will fail and which no one is currently working on fixing? I wouldn't mind setting up a repo for this, but let me elaborate more in my reply to Greg.
Regards,
Wolfram
On Wed, Aug 23, 2023 at 01:45:12PM +0200, Wolfram Sang wrote:
> Hi all,
> for my talk at EOSS23 about object life-time issues[1], I created a loose set of tests checking some longstanding problems in the Linux kernel. I would like to improve these tests. Now I wonder where I could contribute them, because their scope seems different to me: they are not for regression testing, since I don't have a fix for most of them. Some fixes would mean rewriting the private data allocations for a whole subsystem and its drivers. The tests are rather meant to document known problems and to check whether someone has started working on them. But it seems that kselftest (and LTP too?) only accepts tests which do not fail by default. So the question is: is there another test collection project I could contribute these tests to? I'd be very happy about pointers; I started looking around, but to no avail...
Why not just add them to the kernel tree, with ksft_test_result_skip() being the result for now while they still fail, and then when the kernel code is fixed up, change that back to the correct ksft_test_result_error() call instead?
"SKIP" is a good thing to take advantage of here.
thanks,
greg k-h
Hi Greg,
> Why not just add them to the kernel tree, with ksft_test_result_skip() being the result for now while they still fail, and then when the kernel code is fixed up, change that back to the correct ksft_test_result_error() call instead?
Well, I don't want the tests to be skipped, I want them to be run :) So, they will indicate that someone is working on the issue when they turn from red to yellow / green. I expect the issues to be all over the place and I don't want to monitor all that manually.
But since I do want them in the kernel tree, and kselftest already has some nice infrastructure (like required config options), I wondered about a separate directory, like "kfailtest". These tests would not be run by default, but whenever an issue from there gets fixed, an inverted / improved test can move to the proper kselftest folder. A bit like the staging folder, where items are expected to move out. Except that here, it is not the tests that are ugly, only their results.
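Just as a rough, untested sketch (probe_dangling_private_data() is a made-up placeholder for one of the actual life-time checks), a "kfailtest" entry could look like:

#include "../kselftest.h"

/* Made-up placeholder exercising one of the known life-time issues. */
static int probe_dangling_private_data(void)
{
	return -1;	/* nobody has fixed this yet */
}

int main(void)
{
	ksft_print_header();
	ksft_set_plan(1);

	if (probe_dangling_private_data() == 0) {
		ksft_test_result_pass("private data outlives its users\n");
		ksft_exit_pass();
	}

	/* Deliberately red until the subsystem gets reworked. */
	ksft_test_result_fail("known life-time issue, no fix available\n");
	ksft_exit_fail();
	return 0;	/* not reached */
}

Once the underlying issue is fixed, the fail branch can go away and the test can be promoted into the regular kselftest directories.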
Maybe I'll start with this direction and see how it goes...
All the best,
Wolfram
On Wed, Aug 23, 2023 at 05:05:17PM +0200, Wolfram Sang wrote:
> Hi Greg,
>> Why not just add them to the kernel tree, with ksft_test_result_skip() being the result for now while they still fail, and then when the kernel code is fixed up, change that back to the correct ksft_test_result_error() call instead?
> Well, I don't want the tests to be skipped, I want them to be run :) So, they will indicate that someone is working on the issue when they turn from red to yellow / green. I expect the issues to be all over the place and I don't want to monitor all that manually.
The test will run, it will report failed, but then be allowed to "SKIP" to keep the build clean.
> But since I do want them in the kernel tree, and kselftest already has some nice infrastructure (like required config options), I wondered about a separate directory, like "kfailtest". These tests would not be run by default, but whenever an issue from there gets fixed, an inverted / improved test can move to the proper kselftest folder. A bit like the staging folder, where items are expected to move out. Except that here, it is not the tests that are ugly, only their results.
> Maybe I'll start with this direction and see how it goes...
Try it and see. Worst case, submit all your tests, have them all fail, and then fix the kernel code :)
thanks,
greg k-h