Hi Tim,
On Tue, Oct 15, 2024 at 12:01 PM Bird, Tim <Tim.Bird@sony.com> wrote:
-----Original Message-----
From: automated-testing@lists.yoctoproject.org <automated-testing@lists.yoctoproject.org> On Behalf Of Don Zickus

Hi,
At Linux Plumbers, a few dozen of us gathered to discuss how to expose the tests that subsystem maintainers would like to have run for every patch submitted, or whenever CI runs tests. We agreed on a mock-up of a YAML template to start gathering information. The YAML file could be temporarily stored on kernelci.org until a more permanent home is found. Attached is a template to start the conversation.
Don,
I'm interested in this initiative. Is discussion going to be on a kernel mailing list, on this e-mail thread, or somewhere else?
I was going to keep it on this mailing list. Open to adding other lists or moving it.
See a few comments below.
Longer story.
The current problem is that CI systems are not unanimous about which tests they run on submitted patches or git branches. This makes it difficult to figure out why a test failed or how to reproduce the failure. Further, it isn't always clear which tests a normal contributor should run before posting patches.
It has long been communicated that LTP, xfstests, and/or kselftests should be the tests to run.
Just saying "LTP" is not granular enough. LTP has hundreds of individual test programs, and it would be useful to specify the individual tests from LTP that should be run per sub-system.
Agreed. Just reiterating what Greg has told me.
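To make the granularity concrete, an entry could point at specific kselftest targets or individual LTP scenarios rather than the whole suites. Something along these lines (the paths and commands below are only illustrative, not a settled format):

  test:
    # run only the net kselftests rather than the full suite
    - path: tools/testing/selftests
      cmd: make -C tools/testing/selftests TARGETS=net run_tests
      param:
    # run only the madvise cases from LTP's syscalls scenario file
    - path: https://github.com/linux-test-project/ltp
      cmd: ./runltp -f syscalls -s madvise
      param: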
I was particularly intrigued by the presentation at Plumbers about test coverage. It would be nice to have data (or easily replicable methods) for determining the code coverage of a test or set of tests, to indicate what parts of the kernel are being missed and help drive new test development.
It would be nice. I see that as orthogonal to this effort for now. But I think this might be a good step towards that idea.
However, not all maintainers use those tests for their subsystems. I am hoping to either capture those tests or find ways to convince them to add their tests to the preferred locations.
The goal is, for a given subsystem (as defined in MAINTAINERS), to define a set of tests that should be run for any contribution to that subsystem. The hope is that the collective CI results can then be triaged collectively (because they are related), and that the numerous flakes can even be waived collectively (same reason), improving the ability to find and debug new test failures. Because the tests and the process are known, having a human help debug any failures becomes easier.
The plan is to put together a minimal YAML template that gets us going (even if it is not optimized yet) and aim for about a dozen or so subsystems. At that point we should have enough feedback to promote this more seriously and talk about optimizations.
Sounds like a good place to start. Do we have some candidate sub-systems in mind? Has anyone volunteered to lead the way?
At our meeting, someone suggested KUnit as an easy one to understand for starters, and then adding in a few other volunteer subsystems. I know we have a few maintainers who can probably help us get started. I think arm and media were the ones thrown about at our meeting.
Feedback encouraged.
Cheers, Don
# List of tests by subsystem
#
# Tests should adhere to KTAP definitions for results
#
# Description of section entries
#
#   maintainer: test maintainer - name <email>
#   list: mailing list for discussion
#   version: stable version of the test
#   dependency: necessary distro package for testing
#   test:
#     path: internal git path or url to fetch from
#     cmd: command to run; ability to run locally
#     param: additional param necessary to run test
#   hardware: hardware necessary for validation
Is this something new in MAINTAINERS, or is it a separate file?
For now a separate file. It isn't clear where this could go long term. The thought was to gather data to see what is necessary first. Long term it will probably stay a separate file. *shrugs*
#
# Subsystems (alphabetical)
KUNIT TEST:
  maintainer:
    - name: name1
      email: email1
    - name: name2
      email: email2
  list:
  version:
  dependency:
    - dep1
    - dep2
  test:
    - path: tools/testing/kunit
      cmd:
      param:
    - path:
      cmd:
      param:
  hardware: none
Looks OK so far - it'd be nice to have a few concrete examples.
Fair enough. Let me try and work on some.
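As a very rough first stab, a filled-in KUnit entry might look something like the following. The cmd is the usual kunit.py runner; the list address, version, and dependency are my best guesses, and the maintainer placeholders would need to be filled from the KUNIT entry in MAINTAINERS:

KUNIT TEST:
  maintainer:
    # names/emails to be taken from the KUNIT entry in MAINTAINERS
    - name: name1
      email: email1
  list: kunit-dev@googlegroups.com
  version: mainline
  dependency:
    - python3
  test:
    # builds and runs the default KUnit suites; no special hardware needed
    - path: tools/testing/kunit
      cmd: ./tools/testing/kunit/kunit.py run
      param:
  hardware: none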
Cheers, Don
-- Tim