Hey Knut and Shuah,
Following up on our offline discussion on Wednesday night:
We decided that it would make sense for Knut to try to implement Hybrid Testing (testing that crosses the kernel userspace boundary) that he introduced here[1] on top of the existing KUnit infrastructure.
We discussed several possible things in the kernel that Knut could test with the new Hybrid Testing feature as an initial example. Those were (in reverse order of expected difficulty):
1. RDS (Reliable Datagram Sockets) - We decided that, although this was one of the more complicated subsystems to work with, it was probably the best candidate for Knut to start with because it was in desperate need of better testing, much of the testing would require crossing the kernel userspace boundary to be effective, and Knut has access to RDS (since he works at Oracle).
2. KMOD - Probably much simpler than RDS, and the maintainer, Luis Chamberlain (CC'ed) would like to see better testing here, but probably still not as good as RDS because it is in less dire need of testing, collaboration on this would be more difficult, and Luis is currently on an extended vacation. Luis and I had already been discussing testing KMOD here[2].
3. IP over USB - Least desirable option, but still possible. More complicated than KMOD, and not as easy to collaborate on as RDS.
I don't really think we discussed how this would work. I remember that I mentioned that it would be easier if I sent out a patch that centralizes where KUnit tests are dispatched from in the kernel; I will try to get an RFC for that out, probably sometime next week. That should provide a pretty straightforward place for Knut to move his work on top of.
The next question is what the userspace component of this should look like. To me it seems like we should probably have the kselftest test runner manage when the test gets run, and collecting and reporting the result of the test, but I think Knut has thought more about this than I, and Shuah is the kselftest maintainer, so I am guessing this will probably mostly be a discussion between the two of you.
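Since both KUnit and kselftest already report results in TAP (Test Anything Protocol) format, one plausible shape for that userspace piece is a small runner that launches the kernel-side test and parses its TAP output. A rough sketch in Python for illustration only; the test names and output below are invented, not from any real test:

```python
import re

def parse_tap(output):
    """Parse a minimal subset of TAP: 'ok'/'not ok' result lines."""
    results = {}
    for line in output.splitlines():
        m = re.match(r"(not )?ok (\d+)(?: - (.*))?$", line.strip())
        if m:
            not_ok, num, desc = m.groups()
            results[desc or num] = not_ok is None
    return results

# Invented TAP stream, shaped like what a hybrid test might emit:
sample = """TAP version 13
1..2
ok 1 - rds_send_basic
not ok 2 - rds_send_oversized
"""

if __name__ == "__main__":
    for name, passed in parse_tap(sample).items():
        print(name, "PASS" if passed else "FAIL")
```

A real runner would of course read the TAP stream from the kernel test dispatcher rather than from a hard-coded string.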
So I think we have a couple of TODOs between us:
Brendan: - Need to send out patch that provides a single place where all tests are dispatched from.
Knut: - Start splitting out the hybrid test stuff from the rest of the RFC you sent previously.
Knut and Shuah: - Start figuring out what the userspace component of this will look like.
Cheers!
[1] https://lore.kernel.org/linux-kselftest/524b4e062500c6a240d4d7c0e1d0a2996800...
[2] https://groups.google.com/forum/#%21topic/kunit-dev/CdIytJtii00
Hi Brendan,
On 9/13/19 3:02 PM, Brendan Higgins wrote:
Hey Knut and Shuah,
Following up on our offline discussion on Wednesday night:
Awesome. Thanks for doing this.
We decided that it would make sense for Knut to try to implement Hybrid Testing (testing that crosses the kernel userspace boundary) that he introduced here[1] on top of the existing KUnit infrastructure.
We discussed several possible things in the kernel that Knut could test with the new Hybrid Testing feature as an initial example. Those were (in reverse order of expected difficulty):
RDS (Reliable Datagram Sockets) - We decided that, although this was one of the more complicated subsystems to work with, it was probably the best candidate for Knut to start with because it was in desperate need of better testing, much of the testing would require crossing the kernel userspace boundary to be effective, and Knut has access to RDS (since he works at Oracle).
KMOD - Probably much simpler than RDS, and the maintainer, Luis Chamberlain (CC'ed) would like to see better testing here, but probably still not as good as RDS because it is in less dire need of testing, collaboration on this would be more difficult, and Luis is currently on an extended vacation. Luis and I had already been discussing testing KMOD here[2].
IP over USB - Least desirable option, but still possible. More complicated than KMOD, and not as easy to collaborate on as RDS.
I don't really think we discussed how this would work. I remember that I mentioned that it would be easier if I sent out a patch that centralizes where KUnit tests are dispatched from in the kernel; I will try to get an RFC for that out, probably sometime next week. That should provide a pretty straightforward place for Knut to move his work on top of.
That will be awesome.
The next question is what the userspace component of this should look like. To me it seems like we should probably have the kselftest test runner manage when the test gets run, and collecting and reporting the result of the test, but I think Knut has thought more about this than I, and Shuah is the kselftest maintainer, so I am guessing this will probably mostly be a discussion between the two of you.
Yes. This is what I have in mind.
So I think we have a couple of TODOs between us:
Brendan:
- Need to send out patch that provides a single place where all tests are dispatched from.
Knut:
- Start splitting out the hybrid test stuff from the rest of the RFC you sent previously.
Knut and Shuah:
- Start figuring out what the userspace component of this will look like.
Once Knut decides on one of the above options and sends me RFC patches, we can start discussing the details based on the RFC.
thanks, -- Shuah
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
Hey Knut and Shuah,
Following up on our offline discussion on Wednesday night:
We decided that it would make sense for Knut to try to implement Hybrid Testing (testing that crosses the kernel userspace boundary) that he introduced here[1] on top of the existing KUnit infrastructure.
We discussed several possible things in the kernel that Knut could test with the new Hybrid Testing feature as an initial example. Those were (in reverse order of expected difficulty):
RDS (Reliable Datagram Sockets) - We decided that, although this was one of the more complicated subsystems to work with, it was probably the best candidate for Knut to start with because it was in desperate need of better testing, much of the testing would require crossing the kernel userspace boundary to be effective, and Knut has access to RDS (since he works at Oracle).
KMOD - Probably much simpler than RDS, and the maintainer, Luis Chamberlain (CC'ed) would like to see better testing here, but probably still not as good as RDS because it is in less dire need of testing, collaboration on this would be more difficult, and Luis is currently on an extended vacation. Luis and I had already been discussing testing KMOD here[2].
I'm back!
I'm also happy and thrilled to help review the infrastructure in great detail, given I have lofty future objectives with testing in the kernel. Also, kmod is a bit more complex to test; if Knut wants a simpler *easy* target, I think test_sysctl.c would be a good one. I think the goal there would be to add probes for a few of the sysctl callers, and then test them through userspace somehow, for instance?
The complexity with testing kmod is the threading aspect, so that is more of a challenge for a test infrastructure as a whole. However, kmod already has a pretty sound kthread solution which could be used as a basis for any sound kernel multithread test solution.
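To illustrate (in userspace terms only) why the threading aspect is the hard part for a test framework: a test that fans work out to several workers has to gather every worker's verdict and fail if any of them failed. In the kernel the workers would be kthreads; the sketch below just uses Python threads to show the result-gathering shape, and everything in it is invented for illustration:

```python
import threading

# Spawn N workers, gather each worker's verdict, and fail the test if
# any worker failed. In the kernel the workers would be kthreads, but
# the result-gathering shape is the same.

def run_threaded_test(worker, nthreads=4):
    results = [None] * nthreads

    def wrap(i):
        results[i] = bool(worker(i))

    threads = [threading.Thread(target=wrap, args=(i,)) for i in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return all(results)

if __name__ == "__main__":
    print(run_threaded_test(lambda i: True))    # every worker passes
    print(run_threaded_test(lambda i: i != 2))  # worker 2 fails, test fails
```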
Curious, what was decided with regard to the generic netlink approach?
Luis
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
Hey Knut and Shuah,
Following up on our offline discussion on Wednesday night:
We decided that it would make sense for Knut to try to implement Hybrid Testing (testing that crosses the kernel userspace boundary) that he introduced here[1] on top of the existing KUnit infrastructure.
We discussed several possible things in the kernel that Knut could test with the new Hybrid Testing feature as an initial example. Those were (in reverse order of expected difficulty):
RDS (Reliable Datagram Sockets) - We decided that, although this was one of the more complicated subsystems to work with, it was probably the best candidate for Knut to start with because it was in desperate need of better testing, much of the testing would require crossing the kernel userspace boundary to be effective, and Knut has access to RDS (since he works at Oracle).
KMOD - Probably much simpler than RDS, and the maintainer, Luis Chamberlain (CC'ed) would like to see better testing here, but probably still not as good as RDS because it is in less dire need of testing, collaboration on this would be more difficult, and Luis is currently on an extended vacation. Luis and I had already been discussing testing KMOD here[2].
I'm back!
I'm also happy and thrilled to help review the infrastructure in great detail, given I have lofty future objectives with testing in the kernel. Also, kmod is a bit more complex to test; if Knut wants a simpler *easy* target, I think test_sysctl.c would be a good one. I think the goal there would be to add probes for a few of the sysctl callers, and then test them through userspace somehow, for instance?
That sounds like a good case for the hybrid tests. The challenge in a kunit setting would be that it relies on a significant part of KTF to work as we have used it so far:
- module support - Alan has been working on this
- netlink approach from KTF (to allow user space execution of kernel part of test, and gathering reporting in one place)
- probe infrastructure
The complexity with testing kmod is the threading aspect, so that is more of a challenge for a test infrastructure as a whole. However, kmod already has a pretty sound kthread solution which could be used as a basis for any sound kernel multithread test solution.
Curious, what was decided with regard to the generic netlink approach?
I think in some way functionality similar to the netlink support is needed for the features in KTF that we discussed, so I get it is a "yes" to add support for it?
Knut
Luis
On 10/14/19 12:38 PM, Knut Omang wrote:
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
Hey Knut and Shuah,
Following up on our offline discussion on Wednesday night:
We decided that it would make sense for Knut to try to implement Hybrid Testing (testing that crosses the kernel userspace boundary) that he introduced here[1] on top of the existing KUnit infrastructure.
We discussed several possible things in the kernel that Knut could test with the new Hybrid Testing feature as an initial example. Those were (in reverse order of expected difficulty):
- RDS (Reliable Datagram Sockets) - We decided that, although this was one of the more complicated subsystems to work with, it was probably the best candidate for Knut to start with because it was in desperate need of better testing, much of the testing would require crossing the kernel userspace boundary to be effective, and Knut has access to RDS (since he works at Oracle).
Any update on whether you have been able to explore this work?
- KMOD - Probably much simpler than RDS, and the maintainer, Luis Chamberlain (CC'ed) would like to see better testing here, but probably still not as good as RDS because it is in less dire need of testing, collaboration on this would be more difficult, and Luis is currently on an extended vacation. Luis and I had already been discussing testing KMOD here[2].
I'm back!
I'm also happy and thrilled to help review the infrastructure in great detail, given I have lofty future objectives with testing in the kernel. Also, kmod is a bit more complex to test; if Knut wants a simpler *easy* target, I think test_sysctl.c would be a good one. I think the goal there would be to add probes for a few of the sysctl callers, and then test them through userspace somehow, for instance?
That sounds like a good case for the hybrid tests. The challenge in a kunit setting would be that it relies on a significant part of KTF to work as we have used it so far:
- module support - Alan has been working on this
I see the patches. Thanks for working on this.
- netlink approach from KTF (to allow user space execution of kernel part of test, and gathering reporting in one place)
- probe infrastructure
The complexity with testing kmod is the threading aspect, so that is more of a challenge for a test infrastructure as a whole. However, kmod already has a pretty sound kthread solution which could be used as a basis for any sound kernel multithread test solution.
Curious, what was decided with regard to the generic netlink approach?
Can this work be done without the netlink approach? At least some of it. I would like to see some patches, and would like to get a better feel for the dependency on generic netlink.
I think in some way functionality similar to the netlink support is needed for the features in KTF that we discussed, so I get it is a "yes" to add support for it?
See above.
thanks, -- Shuah
On Mon, 2019-10-14 at 13:01 -0600, shuah wrote:
On 10/14/19 12:38 PM, Knut Omang wrote:
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
Hey Knut and Shuah,
Following up on our offline discussion on Wednesday night:
We decided that it would make sense for Knut to try to implement Hybrid Testing (testing that crosses the kernel userspace boundary) that he introduced here[1] on top of the existing KUnit infrastructure.
We discussed several possible things in the kernel that Knut could test with the new Hybrid Testing feature as an initial example. Those were (in reverse order of expected difficulty):
- RDS (Reliable Datagram Sockets) - We decided that, although this was one of the more complicated subsystems to work with, it was probably the best candidate for Knut to start with because it was in desperate need of better testing, much of the testing would require crossing the kernel userspace boundary to be effective, and Knut has access to RDS (since he works at Oracle).
Any update on whether you have been able to explore this work?
I am working on this, but it's going to take some time, as this ties in with internal projects at Oracle. Basing work on RDS or RDS related tests (such as generic socket etc) is the best option for us, since that allows progress on our internal deliverables as well ;-)
- KMOD - Probably much simpler than RDS, and the maintainer, Luis Chamberlain (CC'ed) would like to see better testing here, but probably still not as good as RDS because it is in less dire need of testing, collaboration on this would be more difficult, and Luis is currently on an extended vacation. Luis and I had already been discussing testing KMOD here[2].
I'm back!
I'm also happy and thrilled to help review the infrastructure in great detail, given I have lofty future objectives with testing in the kernel. Also, kmod is a bit more complex to test; if Knut wants a simpler *easy* target, I think test_sysctl.c would be a good one. I think the goal there would be to add probes for a few of the sysctl callers, and then test them through userspace somehow, for instance?
That sounds like a good case for the hybrid tests. The challenge in a kunit setting would be that it relies on a significant part of KTF to work as we have used it so far:
- module support - Alan has been working on this
I see the patches. Thanks for working on this.
- netlink approach from KTF (to allow user space execution of kernel part of test, and gathering reporting in one place)
- probe infrastructure
The complexity with testing kmod is the threading aspect, so that is more of a challenge for a test infrastructure as a whole. However, kmod already has a pretty sound kthread solution which could be used as a basis for any sound kernel multithread test solution.
Curious, what was decided with regard to the generic netlink approach?
Can this work be done without the netlink approach? At least some of it. I would like to see some patches, and would like to get a better feel for the dependency on generic netlink.
A flexible out-of-band communication channel is needed for several of the features, and definitely for hybrid tests. It does not need to be netlink in principle, but netlink has served the purpose well so far in KTF, and reimplementing something else would come at the cost of the core task of getting more and better tests, which, after all, is the goal of this effort.
It would be really good if more people had a closer look at the KTF patch set before we embark on significant work of porting it to Kunit.
For reference, the netlink code in KTF: https://lkml.org/lkml/2019/8/13/92
Note that unlike kunit, in KTF no tests are executed by default; instead, KTF provides an API to query, set up, and trigger execution of tests and test parts in the kernel, leaving the actual initiation to user space tools.
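In other words, the control flow is inverted relative to kunit: user space drives. As a hedged sketch of that shape only (the class and method names below are invented; in KTF the actual transport is generic netlink, not a Python object):

```python
# Hypothetical sketch of a user-space test initiator in the KTF style:
# the kernel side only exposes query/run operations, and user space
# decides what to execute and when. FakeKernelTestChannel stands in for
# the netlink channel and is entirely invented.

class FakeKernelTestChannel:
    def __init__(self, tests):
        self._tests = dict(tests)  # test name -> pass/fail verdict

    def query_tests(self):
        """Ask the kernel which tests are registered."""
        return sorted(self._tests)

    def run_test(self, name):
        """Trigger one kernel-side test and return its verdict."""
        return self._tests[name]

def run_all(channel):
    """Query for available tests, then trigger each one in turn."""
    return {name: channel.run_test(name) for name in channel.query_tests()}

if __name__ == "__main__":
    chan = FakeKernelTestChannel({"selftest.basic": True, "rds.conn": False})
    print(run_all(chan))
```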
Thanks! Knut
I think in some way functionality similar to the netlink support is needed for the features in KTF that we discussed, so I get it is a "yes" to add support for it?
See above.
thanks, -- Shuah
On Wed, Oct 16, 2019 at 12:52:12PM +0200, Knut Omang wrote:
On Mon, 2019-10-14 at 13:01 -0600, shuah wrote:
On 10/14/19 12:38 PM, Knut Omang wrote:
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
Hey Knut and Shuah,
Following up on our offline discussion on Wednesday night:
We decided that it would make sense for Knut to try to implement Hybrid Testing (testing that crosses the kernel userspace boundary) that he introduced here[1] on top of the existing KUnit infrastructure.
We discussed several possible things in the kernel that Knut could test with the new Hybrid Testing feature as an initial example. Those were (in reverse order of expected difficulty):
- RDS (Reliable Datagram Sockets) - We decided that, although this was one of the more complicated subsystems to work with, it was probably the best candidate for Knut to start with because it was in desperate need of better testing, much of the testing would require crossing the kernel userspace boundary to be effective, and Knut has access to RDS (since he works at Oracle).
Any update on whether you have been able to explore this work?
I am working on this, but it's going to take some time, as this ties in with internal projects at Oracle. Basing work on RDS or RDS related tests (such as generic socket etc) is the best option for us, since that allows progress on our internal deliverables as well ;-)
- KMOD - Probably much simpler than RDS, and the maintainer, Luis Chamberlain (CC'ed) would like to see better testing here, but probably still not as good as RDS because it is in less dire need of testing, collaboration on this would be more difficult, and Luis is currently on an extended vacation. Luis and I had already been discussing testing KMOD here[2].
I'm back!
I'm also happy and thrilled to help review the infrastructure in great detail, given I have lofty future objectives with testing in the kernel. Also, kmod is a bit more complex to test; if Knut wants a simpler *easy* target, I think test_sysctl.c would be a good one. I think the goal there would be to add probes for a few of the sysctl callers, and then test them through userspace somehow, for instance?
That sounds like a good case for the hybrid tests. The challenge in a kunit setting would be that it relies on a significant part of KTF to work as we have used it so far:
- module support - Alan has been working on this
I see the patches. Thanks for working on this.
- netlink approach from KTF (to allow user space execution of kernel part of test, and gathering reporting in one place)
- probe infrastructure
The complexity with testing kmod is the threading aspect, so that is more of a challenge for a test infrastructure as a whole. However, kmod already has a pretty sound kthread solution which could be used as a basis for any sound kernel multithread test solution.
Curious, what was decided with regard to the generic netlink approach?
Can this work be done without the netlink approach? At least some of it. I would like to see some patches, and would like to get a better feel for the dependency on generic netlink.
A flexible out-of-band communication channel is needed for several of the features, and definitely for hybrid tests. It does not need to be netlink in principle, but netlink has served the purpose well so far in KTF, and reimplementing something else would come at the cost of the core task of getting more and better tests, which, after all, is the goal of this effort.
I don't think you did justice to *why* netlink would be good, but in principle I suspect it's the right thing long term if we want a nice interface to decide what to test and how.
So kselftest today implies that you know what you want to test. Or rather, this is defined through what you enable, and then you run *all* enabled kselftests.
Similar logic follows for kunit.
Yes, you can have scripts and your own test infrastructure that knows what to test or avoid, but this is not nice for autonegotiation. Consider also the complexity of running new tests on older kernels. Today we address this in kselftests by striving to make sure the old scripts / tests work, or yield, on old kernels that don't have a feature. What if we want to really figure out what is there or not, concretely? A generic netlink interface could easily allow these sorts of things to grow and be auto-negotiated.
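That negotiation idea can be sketched in a few lines of userspace logic: the runner first asks the kernel which tests it advertises, runs only those, and skips the rest, instead of hardcoding per-kernel-version knowledge. All names below are invented; in the proposal the advertised set would come back over a generic netlink query:

```python
# Sketch of the auto-negotiation idea: instead of hardcoding which tests
# an older kernel supports, the runner first asks what is advertised and
# silently skips the rest. The advertised set is invented here.

def negotiate(requested, advertised):
    """Split requested tests into runnable and skipped-by-negotiation."""
    runnable = [t for t in requested if t in advertised]
    skipped = [t for t in requested if t not in advertised]
    return runnable, skipped

if __name__ == "__main__":
    advertised = {"sysctl.basic", "kmod.load"}                 # what this kernel reports
    requested = ["sysctl.basic", "kmod.load", "kmod.threads"]  # a newer test suite
    run, skip = negotiate(requested, advertised)
    print("run:", run, "skip:", skip)
```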
Then, collection of results: we have each kselftest designing its own scatter/gather set of ways to go and do what it can to test what it can, and to expose what should be tested, in what order, or which knobs to allow. A generic netlink interface allows a standard interface to be sketched out.
I don't think the generic netlink interface implemented in the KTF patches accomplished any of this, but it at least moved the needle forward IMHO in terms of what we should consider long term.
It would be really good if more people had a closer look at the KTF patch set before we embark on significant work of porting it to Kunit.
For reference, the netlink code in KTF: https://lkml.org/lkml/2019/8/13/92
I read it, and it seems like a lot of stuff implemented, but without a lot of proper justification and without much community feedback. Kind of like: too much done in-house without proper feedback. Then it became bloated, and once kunit was about to be merged, you guys knee-jerk reacted and wanted to merge your stuff too...
And the quality of code. Gosh, much to be said about that...
So I think that asking to consolidate with kunit is the right thing at this point, because we need to grow it in *community*. kunit itself has been heavily modified to adjust to early feedback, for example, so it's an example of the evolution needed.
Note that unlike kunit, in KTF no tests are executed by default; instead, KTF provides an API to query, set up, and trigger execution of tests and test parts in the kernel, leaving the actual initiation to user space tools.
This is all excellent. However, baby steps. Let's demo it with a few simple tests, rather than trying to ensure it works with *all* the stuff you guys probably already have in-house. That will probably have to be phased out in the future with whatever we grow *together*.
Luis
On Wed, 2019-10-16 at 13:08 +0000, Luis Chamberlain wrote:
On Wed, Oct 16, 2019 at 12:52:12PM +0200, Knut Omang wrote:
On Mon, 2019-10-14 at 13:01 -0600, shuah wrote:
On 10/14/19 12:38 PM, Knut Omang wrote:
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
Hey Knut and Shuah,
Following up on our offline discussion on Wednesday night:
We decided that it would make sense for Knut to try to implement Hybrid Testing (testing that crosses the kernel userspace boundary) that he introduced here[1] on top of the existing KUnit infrastructure.
We discussed several possible things in the kernel that Knut could test with the new Hybrid Testing feature as an initial example. Those were (in reverse order of expected difficulty):
- RDS (Reliable Datagram Sockets) - We decided that, although this was one of the more complicated subsystems to work with, it was probably the best candidate for Knut to start with because it was in desperate need of better testing, much of the testing would require crossing the kernel userspace boundary to be effective, and Knut has access to RDS (since he works at Oracle).
Any update on whether you have been able to explore this work?
I am working on this, but it's going to take some time, as this ties in with internal projects at Oracle. Basing work on RDS or RDS related tests (such as generic socket etc) is the best option for us, since that allows progress on our internal deliverables as well ;-)
- KMOD - Probably much simpler than RDS, and the maintainer, Luis Chamberlain (CC'ed) would like to see better testing here, but probably still not as good as RDS because it is in less dire need of testing, collaboration on this would be more difficult, and Luis is currently on an extended vacation. Luis and I had already been discussing testing KMOD here[2].
I'm back!
I'm also happy and thrilled to help review the infrastructure in great detail, given I have lofty future objectives with testing in the kernel. Also, kmod is a bit more complex to test; if Knut wants a simpler *easy* target, I think test_sysctl.c would be a good one. I think the goal there would be to add probes for a few of the sysctl callers, and then test them through userspace somehow, for instance?
That sounds like a good case for the hybrid tests. The challenge in a kunit setting would be that it relies on a significant part of KTF to work as we have used it so far:
- module support - Alan has been working on this
I see the patches. Thanks for working on this.
- netlink approach from KTF (to allow user space execution of kernel part of test, and gathering reporting in one place)
- probe infrastructure
The complexity with testing kmod is the threading aspect, so that is more of a challenge for a test infrastructure as a whole. However, kmod already has a pretty sound kthread solution which could be used as a basis for any sound kernel multithread test solution.
Curious, what was decided with regard to the generic netlink approach?
Can this work be done without the netlink approach? At least some of it. I would like to see some patches, and would like to get a better feel for the dependency on generic netlink.
A flexible out-of-band communication channel is needed for several of the features, and definitely for hybrid tests. It does not need to be netlink in principle, but that has served the purpose well so far in KTF, and reimplementing something will be at the cost of the core task of getting more and better tests, which after all is the goal of this effort.
I don't think you did justice to *why* netlink would be good, but in principle I suspect it's the right thing long term if we want a nice interface to decide what to test and how.
I tried to do that in my response here to Brendan:
https://lore.kernel.org/linux-kselftest/644ff48481f3dd7295798dcef88b4abcc869...
So kselftest today implies that you know what you want to test. Or rather, this is defined through what you enable, and then you run *all* enabled kselftests.
Similar logic follows for kunit.
The same applies to KTF. The trivial default user application for KTF (see https://lkml.org/lkml/2019/8/13/101, file tools/testing/selftests/ktf/user/ktfrun.cc) would run all the kernel tests that have been enabled (depending on which test modules have been inserted).
But if I just want to run a single test, or I want to randomize the order the tests are run in (to avoid unintended interdependencies), or whatever other ideas people might have outside the default setup, KTF offers that flexibility via netlink, and allows use of a mature user land unit test suite to facilitate those features, instead of having to reinvent the wheel in kernel code.
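For example, the selection and ordering logic can live entirely in the userspace harness; a seeded shuffle even keeps a randomized order reproducible when chasing an interdependency bug. A small sketch with invented test names:

```python
import random

# Sketch of keeping test selection and ordering in user space: pick a
# single test, or shuffle with a fixed seed so a randomized order (used
# to flush out unintended test interdependencies) stays reproducible.

def order_tests(tests, seed=None, only=None):
    """Optionally filter to one test; optionally shuffle with a seed."""
    selected = [t for t in tests if only is None or t == only]
    if seed is not None:
        random.Random(seed).shuffle(selected)
    return selected

if __name__ == "__main__":
    tests = ["rds.basic", "rds.reconnect", "sock.opts"]
    print(order_tests(tests, only="rds.basic"))
    print(order_tests(tests, seed=42))
```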
Yes, you can have scripts and your own test infrastructure that knows what to test or avoid, but this is not nice for autonegotiation.
We haven't really been setting out to do anything ambitious in this area with KTF, except that we have been using simple naming of tests into classes, and used wildcards etc. to run a subset of tests.
Consider also the complexity of running new tests on older kernels. Today we address this in kselftests by striving to make sure the old scripts / tests work, or yield, on old kernels that don't have a feature. What if we want to really figure out what is there or not, concretely?
A generic netlink interface could easily allow these sorts of things to grow and be auto-negotiated.
In general I agree that such types of negotiation are something that can be explored. You can view the KTF code to set up network contexts as something of a start in that sense. We have code in place that uses such network context configuration to allow a unit test suite to exchange address information as part of a single program multiple data (SPMD) program, using MPI, to allow unit tests that require more than a single node to run.
Then, collection of results: we have each kselftest designing its own scatter/gather set of ways to go and do what it can to test what it can, and to expose what should be tested, in what order, or which knobs to allow. A generic netlink interface allows a standard interface to be sketched out.
I think we solve this by exposing flexibility to user space. Why would you want to have the kernel involved in that, instead of just giving user space enough info?
I don't think the generic netlink interface implemented in the KTF patches accomplished any of this, but it at least moved the needle forward IMHO in terms of what we should consider long term.
It would be really good if more people had a closer look at the KTF patch set before we embark on significant work of porting it to Kunit.
For reference, the netlink code in KTF: https://lkml.org/lkml/2019/8/13/92
I read it, and it seems like a lot of stuff implemented, but without a lot of proper justification and without much community feedback. Kind of like: too much done in-house without proper feedback.
Then it became bloated, and once kunit was about to be merged, you guys knee-jerk reacted and wanted to merge your stuff too...
Again, just for the record, the fact is that we presented KTF at LPC in 2017 (see https://lwn.net/Articles/735034/) and pushed it to github for review and comments at the same time (https://github.com/oracle/ktf). Looking at the git repo, this was just a few weeks after the initial refactoring from a project-specific predecessor was complete. I discussed collaboration with Brendan, who at the time had a very early draft of what later became Kunit, and expected from our conversation that we would work together on a common proposal that covered both main use cases.
As to the features we added to KTF, they were in response to real test needs (multithreaded test execution, override of return values to verify error codepaths, the need for environment/node specific configuration like different device presence/capabilities, different network configurations of the test nodes, and so on).
And the quality of code. Gosh, much to be said about that...
We're of course happy to take constructive criticism - do you have any specifics in mind?
So I think that asking to consolidate with kunit is the right thing at this point, because we need to grow it in *community*. kunit itself has been heavily modified to adjust to early feedback, for example, so it's an example of the evolution needed.
Note that unlike kunit, in KTF no tests are executed by default; instead, KTF provides an API to query, set up, and trigger execution of tests and test parts in the kernel, leaving the actual initiation to user space tools.
This is all excellent. However, baby steps. Let's demo it with a few simple tests, rather than trying to ensure it works with *all* the stuff you guys probably already have in-house. That will probably have to be phased out in the future with whatever we grow *together*.
Which brings me back to my original request for (constructive) feedback on what we already have created.
Thanks, Knut
On 10/17/19 11:46 AM, Knut Omang wrote:
On Wed, 2019-10-16 at 13:08 +0000, Luis Chamberlain wrote:
On Wed, Oct 16, 2019 at 12:52:12PM +0200, Knut Omang wrote:
On Mon, 2019-10-14 at 13:01 -0600, shuah wrote:
On 10/14/19 12:38 PM, Knut Omang wrote:
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
> Hey Knut and Shuah,
>
> Following up on our offline discussion on Wednesday night:
>
> We decided that it would make sense for Knut to try to implement Hybrid
> Testing (testing that crosses the kernel userspace boundary) that he
> introduced here[1] on top of the existing KUnit infrastructure.
>
> We discussed several possible things in the kernel that Knut could test
> with the new Hybrid Testing feature as an initial example. Those were
> (in reverse order of expected difficulty):
>
> 1. RDS (Reliable Datagram Sockets) - We decided that, although this was
>    one of the more complicated subsystems to work with, it was probably
>    the best candidate for Knut to start with because it was in desperate
>    need of better testing, much of the testing would require crossing
>    the kernel userspace boundary to be effective, and Knut has access to
>    RDS (since he works at Oracle).
Any update on whether you have been able to explore this work?
I am working on this, but it's going to take some time, as this ties in with internal projects at Oracle. Basing work on RDS or RDS related tests (such as generic socket etc) is the best option for us, since that allows progress on our internal deliverables as well ;-)
> 2. KMOD - Probably much simpler than RDS, and the maintainer, Luis
>    Chamberlain (CC'ed) would like to see better testing here, but
>    probably still not as good as RDS because it is in less dire need of
>    testing, collaboration on this would be more difficult, and Luis is
>    currently on an extended vacation. Luis and I had already been
>    discussing testing KMOD here[2].
I'm back!
I'm also happy and thrilled to help review the infrastructure in great detail, given that I have lofty future objectives with testing in the kernel. Also, kmod is a bit more complex to test; if Knut wants a simpler *easy* target, I think test_sysctl.c would be a good one. I think the goal there would be to add probes for a few of the sysctl callers, and then test them through userspace somehow, for instance?
That sounds like a good case for the hybrid tests. The challenge in a kunit setting would be that it relies on a significant part of KTF to work as we have used it so far:
- module support - Alan has been working on this
I see the patches. Thanks for working on this.
- netlink approach from KTF (to allow user space execution of kernel part of test, and gathering reporting in one place)
- probe infrastructure
The main complexity with testing kmod is the threading aspect, so that is more of a challenge for a test infrastructure as a whole. However, kmod also already has a pretty sound kthread solution which could be used as a basis for any sound kernel multithread test solution.
Curious, what was decided with regard to the generic netlink approach?
Can this work be done without the netlink approach? At least some of it. I would like to see some patches and would like to get a better feel for the dependency on generic netlink.
A flexible out-of-band communication channel is needed for several of the features, and definitely for hybrid tests. It does not need to be netlink in principle, but that has served the purpose well so far in KTF, and reimplementing something will be at the cost of the core task of getting more and better tests, which after all is the goal of this effort.
I don't think you did justice to *why* netlink would be good, but in principle I suspect it's the right thing long term if we want a nice interface to decide what to test and how.
I tried to do that in my response here to Brendan:
https://lore.kernel.org/linux-kselftest/644ff48481f3dd7295798dcef88b4abcc869...
So kselftest today implies that you know what you want to test. Or rather, this is defined through what you enable, and then you run *all* enabled kselftests.
Similar logic follows for kunit.
The same applies to KTF. The trivial default user application for KTF (see https://lkml.org/lkml/2019/8/13/101, file tools/testing/selftests/ktf/user/ktfrun.cc) would run all the kernel tests that have been enabled (depending on what test modules have been inserted).
But if I just want to run a single test, or I want to randomize the order the tests are run (to avoid unintended interdependencies), or whatever other ideas people might have outside the default setup, KTF offers that flexibility via netlink, and allows use of a mature user land unit test suite to facilitate those features, instead of having to reinvent the wheel in kernel code.
Yes, you can have scripts and your own tests infrastructure that knows what to test or avoid, but this is not nice for autonegotiation.
We haven't really set out to do anything ambitious in this area with KTF, except that we have been using simple naming to group tests into classes, and used wildcards etc. to run a subset of tests.
Consider also the complexity of dealing with testing new tests on older kernels. Today we address this in kselftests by striving to make sure the old scripts / tests work, or yield, for old kernels that don't have a feature. What if we want to really figure out what is there or not concretely?
A generic netlink interface could easily allow for these sorts of things to grow and be auto negotiated.
In general I agree that these types of negotiation are something that can be explored. You can view the KTF code that sets up network contexts as something of a start in that sense. We have code in place that uses such network context configuration to let a unit test suite exchange address information as part of a single program multiple data (SPMD) program, using MPI, to allow unit tests that require more than a single node to run.
Then, collection of results: we have each kselftest designing its own scatter/gather approach to go do what it can to test what it can, and to expose what should be tested, or in what order, or to allow knobs. A generic netlink interface allows a standard interface to be sketched up.
I think we solve this by exposing flexibility to user space. Why would you want to have the kernel involved in that instead of just giving user mode enough info?
I don't think the generic netlink interface implemented in the KTF patches accomplished any of this, but it at least moved the needle forward IMHO in terms of what we should consider long term.
It would be really good if more people had a closer look at the KTF patch set before we embark on significant work of porting it to Kunit.
For reference, the netlink code in KTF: https://lkml.org/lkml/2019/8/13/92
I read it, and it seems like a lot of stuff implemented but without a lot of proper justification and also without a lot of community feedback. Kind of like: too much done in house without proper feedback.
Then it became bloated and once kunit was almost about to be merged you guys knee-jerk reacted and wanted to merge your stuff too...
Again, just for the record, the fact is that we presented KTF at LPC in 2017 (see https://lwn.net/Articles/735034/) and pushed it to github for review and comments at the same time (https://github.com/oracle/ktf). Looking in the git repo, this was just a few weeks after the initial refactoring from a project specific predecessor was complete. I discussed collaboration with Brendan, who had a very early draft of what later became Kunit with him, and expected from our conversation that we would work together on a common proposal that covered both main use cases.
As to the features we added to KTF, they were in response to real test needs (multithreaded test execution, override of return values to verify error codepaths, the need for environment/node specific configuration like different device presence/capabilities, different network configurations of the test nodes, and so on).
And the quality of code. Gosh, much to be said about that...
We're of course happy to take constructive criticism - do you have any specifics in mind?
So I think that asking to consolidate with kunit is the right thing at this point, because we need to grow it in the *community*. kunit itself has been heavily modified in response to early feedback, for example, so it's an example of the evolution needed.
Note that unlike kunit, in KTF no tests are executed by default; instead, KTF provides an API to query, set up, and trigger execution of tests and test parts in the kernel, leaving the actual initiation to user space tools.
This is all excellent. However baby steps. Let's demo it with a few simple tests, rather than trying to ensure it works with *all* the stuff you guys probably already have in house. That will probably have to be phased out in the future with whatever we grow *together*.
Which brings me back to my original request for (constructive) feedback on what we already have created.
Knut,
You have gotten some review comments and we talked about it in a couple of offline discussion as mentioned in the email that started this thread.
I mentioned to you that adding the ability to trigger KUnit tests from user-space is a feature I am interested in.
Is that netlink? I don't know.
That is why I asked you to start with a KUnit test for RDS or kmod with user-space trigger for us to review. This will help add and evolve the KUnit hybrid testing. My recollection is that you are open to working on it.
It appears we might be back to square one, with you asking us to review KTF as is.
thanks, -- Shuah
On Thu, Oct 17, 2019 at 07:46:48PM +0200, Knut Omang wrote:
On Wed, 2019-10-16 at 13:08 +0000, Luis Chamberlain wrote:
On Wed, Oct 16, 2019 at 12:52:12PM +0200, Knut Omang wrote:
On Mon, 2019-10-14 at 13:01 -0600, shuah wrote:
On 10/14/19 12:38 PM, Knut Omang wrote:
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
> Hey Knut and Shuah,
>
> Following up on our offline discussion on Wednesday night:
>
> We decided that it would make sense for Knut to try to implement Hybrid
> Testing (testing that crosses the kernel userspace boundary) that he
> introduced here[1] on top of the existing KUnit infrastructure.
>
> We discussed several possible things in the kernel that Knut could test
> with the new Hybrid Testing feature as an initial example. Those were
> (in reverse order of expected difficulty):
>
> 1. RDS (Reliable Datagram Sockets) - We decided that, although this was
>    one of the more complicated subsystems to work with, it was probably
>    the best candidate for Knut to start with because it was in desperate
>    need of better testing, much of the testing would require crossing
>    the kernel userspace boundary to be effective, and Knut has access to
>    RDS (since he works at Oracle).
Any update on whether you have been able to explore this work?
I am working on this, but it's going to take some time, as this ties in with internal projects at Oracle. Basing work on RDS or RDS related tests (such as generic socket etc) is the best option for us, since that allows progress on our internal deliverables as well ;-)
> 2. KMOD - Probably much simpler than RDS, and the maintainer, Luis
>    Chamberlain (CC'ed) would like to see better testing here, but
>    probably still not as good as RDS because it is in less dire need of
>    testing, collaboration on this would be more difficult, and Luis is
>    currently on an extended vacation. Luis and I had already been
>    discussing testing KMOD here[2].
I'm back!
I'm also happy and thrilled to help review the infrastructure in great detail, given that I have lofty future objectives with testing in the kernel. Also, kmod is a bit more complex to test; if Knut wants a simpler *easy* target, I think test_sysctl.c would be a good one. I think the goal there would be to add probes for a few of the sysctl callers, and then test them through userspace somehow, for instance?
That sounds like a good case for the hybrid tests. The challenge in a kunit setting would be that it relies on a significant part of KTF to work as we have used it so far:
- module support - Alan has been working on this
I see the patches. Thanks for working on this.
- netlink approach from KTF (to allow user space execution of kernel part of test, and gathering reporting in one place)
- probe infrastructure
The main complexity with testing kmod is the threading aspect, so that is more of a challenge for a test infrastructure as a whole. However, kmod also already has a pretty sound kthread solution which could be used as a basis for any sound kernel multithread test solution.
Curious, what was decided with regard to the generic netlink approach?
Can this work be done without the netlink approach? At least some of it. I would like to see some patches and would like to get a better feel for the dependency on generic netlink.
A flexible out-of-band communication channel is needed for several of the features, and definitely for hybrid tests. It does not need to be netlink in principle, but that has served the purpose well so far in KTF, and reimplementing something will be at the cost of the core task of getting more and better tests, which after all is the goal of this effort.
I don't think you did justice to *why* netlink would be good, but in principle I suspect it's the right thing long term if we want a nice interface to decide what to test and how.
I tried to do that in my response here to Brendan:
https://lore.kernel.org/linux-kselftest/644ff48481f3dd7295798dcef88b4abcc869...
I'm suggesting there are more gains than what you listed, and that the ability to have new userspace talk to an old kernel to figure out what makes sense is another example of a gain.
This raises the question of whether a simple header alone could be used to annotate which test cases are available to userspace, or whether we really need this aspect of userspace <--> kernel communication to figure things out.
A header approach to this problem could look something like the following; a new kernel would have something like:
#define SELFTEST_FOO_BAR SELFTEST_FOO_BAR
Then userspace tests / tools can just #ifdef on this. This is the approach we used in 802.11 for exposing new generic netlink features, allowing *one* iw release to compile against any kernel, for instance.
So kselftest today implies that you know what you want to test. Or rather, this is defined through what you enable, and then you run *all* enabled kselftests.
Similar logic follows for kunit.
The same applies to KTF. The trivial default user application for KTF (see https://lkml.org/lkml/2019/8/13/101, file tools/testing/selftests/ktf/user/ktfrun.cc) would run all the kernel tests that have been enabled (depending on what test modules have been inserted).
I think this could be improved as well.
IMHO, if we are doing all these bells and whistles, ideally we should be able to compile only the userspace components which a target kernel needs - and not just for the target kernel the code came from: plugging that userspace test suite against another kernel, it should figure out what makes sense to expose, allowing one to compile only the test cases which make sense for that kernel.
Also, even if one *can* compile and run a test, as you suggest, one may not want to run it. This has me thinking that perhaps using our own kconfig entries for userspace targets / test cases to run makes sense.
But if I just want to run a single test, or I want to randomize the order the tests are run (to avoid unintended interdependencies), or whatever other ideas people might have outside the default setup, KTF offers that flexibility via netlink, and allows use of a mature user land unit test suite to facilitate those features, instead of having to reinvent the wheel in kernel code.
There is a difference between saying that generic netlink is a good option we should seriously consider for this problem vs. that KTF's generic netlink implementation is suitable for this. I am stating only the former. The KTF netlink solution left much to be desired, and I am happy we are moving slowly with integration of what you had with KTF and what kunit is. For instance, the level of documentation could heavily be improved. My litmus test for a good generic netlink interface is for it to be documented as well as, say, include/uapi/linux/nl80211.h.
Yes, you can have scripts and your own tests infrastructure that knows what to test or avoid, but this is not nice for autonegotiation.
We haven't really set out to do anything ambitious in this area with KTF, except that we have been using simple naming to group tests into classes, and used wildcards etc. to run a subset of tests.
Its a good start.
One question stands out to me based on what you say you had:
Do we want to have an interface to have userspace send information to the kernel about what tests we want to run, or should userspace just run a test at a time using its own heuristics?
One possible gain of having the kernel be informed of a series of tests is that it would handle order, say if it wanted to parallelize things. But I think handling this in the kernel can be complex. Consider fixes to it: we'd then have to ensure a fix in this logic gets propagated as well. Userspace could just batch out tests, and if there are issues with parallelizing them one can hope the kernel would know what it can or cannot do. This would allow the batching / gathering to be done completely in userspace.
Consider also the complexity of dealing with testing new tests on older kernels. Today we address this in kselftests by striving to make sure the old scripts / tests work, or yield, for old kernels that don't have a feature. What if we want to really figure out what is there or not concretely?
A generic netlink interface could easily allow for these sorts of things to grow and be auto negotiated.
In general I agree that these types of negotiation are something that can be explored. You can view the KTF code that sets up network contexts as something of a start in that sense. We have code in place that uses such network context configuration to let a unit test suite exchange address information as part of a single program multiple data (SPMD) program, using MPI, to allow unit tests that require more than a single node to run.
You went from one host to multiple hosts.. and MPI... I was just thinking about communications about what tests to run locally depending on the kernel. I'll ignore the MPI stuff for now.
Then, collection of results: we have each kselftest designing its own scatter/gather approach to go do what it can to test what it can, and to expose what should be tested, or in what order, or to allow knobs. A generic netlink interface allows a standard interface to be sketched up.
I think we solve this by exposing flexibility to user space. Why would you want to have the kernel involved in that instead of just giving user mode enough info?
Indeed, userspace is best for this.
I don't think the generic netlink interface implemented in the KTF patches accomplished any of this, but it at least moved the needle forward IMHO in terms of what we should consider long term.
It would be really good if more people had a closer look at the KTF patch set before we embark on significant work of porting it to Kunit.
For reference, the netlink code in KTF: https://lkml.org/lkml/2019/8/13/92
I read it, and it seems like a lot of stuff implemented but without a lot of proper justification and also without a lot of community feedback. Kind of like: too much done in house without proper feedback.
Then it became bloated and once kunit was almost about to be merged you guys knee-jerk reacted and wanted to merge your stuff too...
Again, just for the record, the fact is that we presented KTF at LPC in 2017 (see https://lwn.net/Articles/735034/) and pushed it to github for review and comments at the same time (https://github.com/oracle/ktf). Looking in the git repo, this was just a few weeks after the initial refactoring from a project specific predecessor was complete. I discussed collaboration with Brendan, who had a very early draft of what later became Kunit with him, and expected from our conversation that we would work together on a common proposal that covered both main use cases.
Alright, so kunit was just too late to lay the groundwork insofar as upstream is concerned. Thanks for the corrections.
As to the features we added to KTF, they were in response to real test needs (multithreaded test execution, override of return values to verify error codepaths, the need for environment/node specific configuration like different device presence/capabilities, different network configurations of the test nodes, and so on..
I see.. But the upstreaming never happened. Well at least the experience you have now with KTF can be leveraged for a proper upstream architecture.
And the quality of code. Gosh, much to be said about that...
We're of course happy to take constructive criticism - do you have any specifics in mind?
Slowly but surely. I'm happy with where we stand on trying to blend kunit and KTF together. The generic netlink aspect of KTF however *does* strike me as important for scaling long term; I wasn't too happy with its level of documentation, though - I think that could be improved.
So I think that asking to consolidate with kunit is the right thing at this point, because we need to grow it in the *community*. kunit itself has been heavily modified in response to early feedback, for example, so it's an example of the evolution needed.
Note that unlike kunit, in KTF no tests are executed by default; instead, KTF provides an API to query, set up, and trigger execution of tests and test parts in the kernel, leaving the actual initiation to user space tools.
This is all excellent. However baby steps. Let's demo it with a few simple tests, rather than trying to ensure it works with *all* the stuff you guys probably already have in house. That will probably have to be phased out in the future with whatever we grow *together*.
Which brings me back to my original request for (constructive) feedback on what we already have created.
I think I've provided enough. My focus in this thread was my hope that we don't lose sight of the possible gains of a generic netlink interface for testing. I think the discussions that have followed will help pave the way for if and how we start this.
It leaves standing the question of whether we really need the generic netlink interface to figure out if a test is available, or if we can just use header files for this as I mentioned.
Can you think of a test / case where it's needed and a header file may not suffice?
Luis
On Fri, Oct 18, 2019 at 2:47 AM Luis Chamberlain mcgrof@kernel.org wrote:
On Thu, Oct 17, 2019 at 07:46:48PM +0200, Knut Omang wrote:
On Wed, 2019-10-16 at 13:08 +0000, Luis Chamberlain wrote:
On Wed, Oct 16, 2019 at 12:52:12PM +0200, Knut Omang wrote:
On Mon, 2019-10-14 at 13:01 -0600, shuah wrote:
On 10/14/19 12:38 PM, Knut Omang wrote:
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
> On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
[...]
So kselftest today implies that you know what you want to test. Or rather, this is defined through what you enable, and then you run *all* enabled kselftests.
Similar logic follows for kunit.
The same applies to KTF. The trivial default user application for KTF (see https://lkml.org/lkml/2019/8/13/101, file tools/testing/selftests/ktf/user/ktfrun.cc) would run all the kernel tests that have been enabled (depending on what test modules have been inserted).
I think this could be improved as well.
IMHO, if we are doing all these bells and whistles, ideally we should be able to compile only the userspace components which a target kernel needs - and not just for the target kernel the code came from: plugging that userspace test suite against another kernel, it should figure out what makes sense to expose, allowing one to compile only the test cases which make sense for that kernel.
Sorry, I don't think I understand what point you are trying to make here. I don't see the problem with treating the kernel and userspace component of a hybrid test as a unit where they are built and run together. Is your point maybe in reference to new versions of kselftest being used to test stable kernels?
Also, even if one *can* compile and run a test, as you suggest, one may not want to run it. This has me thinking that perhaps using our own
That makes sense. I imagine that most developers will only run their own tests over the course of normal development. Running all tests is probably something that is mostly for CI/CD systems (or sad souls doing large scale refactoring).
kconfig entries for userspace targets / test cases to run makes sense.
That's my position at this point. For hybrid testing, it seems obvious to me that userspace should be able to dictate when a test gets run (since we have to have the two tests coordinate when they run, we might as well have userspace - where it is easiest - take control of the coordination). Nevertheless, I don't see why we cannot just lean on Kconfigs or Kselftest for deciding which tests to run.
But if I just want to run a single test, or I want to randomize the order the tests are run (to avoid unintended interdependencies), or whatever other ideas people might have outside the default setup, KTF offers that flexibility via netlink, and allows use of a mature user land unit test suite to facilitate those features, instead of having to reinvent the wheel in kernel code.
There is a difference between saying that generic netlink is a good option we should seriously consider for this problem vs. that KTF's generic netlink implementation is suitable for this. I am stating only the former. The KTF netlink solution left much to be desired, and I am happy we are moving slowly with integration of what you had with KTF and what kunit is. For instance, the level of documentation could heavily be improved. My litmus test for a good generic netlink interface is for it to be documented as well as, say, include/uapi/linux/nl80211.h.
Yes, you can have scripts and your own tests infrastructure that knows what to test or avoid, but this is not nice for autonegotiation.
We haven't really set out to do anything ambitious in this area with KTF, except that we have been using simple naming to group tests into classes, and used wildcards etc. to run a subset of tests.
Its a good start.
One question stands out to me based on what you say you had:
Do we want to have an interface to have userspace send information to the kernel about what tests we want to run, or should userspace just run a test at a time using its own heuristics?
One possible gain of having the kernel be informed of a series of tests is that it would handle order, say if it wanted to parallelize things. But I think handling this in the kernel can be complex. Consider fixes to it: we'd then have to ensure a fix in this logic gets propagated as well. Userspace could just batch out tests, and if there are issues with parallelizing them one can hope the kernel would know what it can or cannot do. This would allow the batching / gathering to be done completely in userspace.
I guess that seems reasonable. We are still talking about hybrid testing, right? If so, can we just rely on Kselftest for this functionality? I am not saying that already does everything that we need; I am just making the point that Kselftest seems like the obvious place to orchestrate tests on a system that have a userspace component. (Sorry if that is obvious, I just get the sense that we have lost sight of that point.)
Consider also the complexity of dealing with testing new tests on older kernels. Today we address this in kselftests by striving to make sure the old scripts / tests work, or yield, for old kernels that don't have a feature. What if we want to really figure out what is there or not concretely?
A generic netlink interface could easily allow for these sorts of things to grow and be auto negotiated.
In general I agree that these types of negotiation are something that can be explored. You can view the KTF code that sets up network contexts as something of a start in that sense. We have code in place that uses such network context configuration to let a unit test suite exchange address information as part of a single program multiple data (SPMD) program, using MPI, to allow unit tests that require more than a single node to run.
You went from one host to multiple hosts.. and MPI... I was just thinking about communications about what tests to run locally depending on the kernel. I'll ignore the MPI stuff for now.
Then, collection of results: we have each kselftest designing its own scatter/gather approach to go do what it can to test what it can, and to expose what should be tested, or in what order, or to allow knobs. A generic netlink interface allows a standard interface to be sketched up.
I think we solve this by exposing flexibility to user space. Why would you want to have the kernel involved in that instead of just giving user mode enough info?
Indeed, userspace is best for this.
Yeah, sounds pretty obvious to me too. I mean, if we have a test with both a user space and kernel space component, we have to orchestrate them from either kernel or user space. Since orchestration already has to exist in userspace for kselftest, we might as well do the orchestration for hybrid tests in the same place.
However, I feel like Knut is trying to make a point here beyond the obvious, as I don't think anyone at anytime has suggested that the orchestration for tests with a user space component should be done in the kernel. Then again, maybe not; I just want to make sure we are on the same page.
I don't think the generic netlink interface implemented in the KTF patches accomplished any of this, but it at least moved the needle forward IMHO in terms of what we should consider long term.
It would be really good if more people had a closer look at the KTF patch set before we embark on significant work of porting it to Kunit.
For reference, the netlink code in KTF: https://lkml.org/lkml/2019/8/13/92
I read it, and it seems like a lot of stuff implemented but without a lot of proper justification and also without a lot of community feedback. Kind of like: too much done in house without proper feedback.
Then it became bloated and once kunit was almost about to be merged you guys knee-jerk reacted and wanted to merge your stuff too...
Again, just for the record, the fact is that we presented KTF at LPC in 2017 (see https://lwn.net/Articles/735034/) and pushed it to github for review and comments at the same time (https://github.com/oracle/ktf). Looking in the git repo, this was just a few weeks after the initial refactoring from a project specific predecessor was complete. I discussed collaboration with Brendan, who had a very early draft of what later became Kunit with him, and expected from our conversation that we would work together on a common proposal that covered both main use cases.
Alright, so kunit was just too late to lay the groundwork insofar as upstream is concerned. Thanks for the corrections.
Huh? KUnit was too late to lay the groundwork for what?
I didn't respond to this previously because I didn't want to put fuel on the fire that it looked like this thread was becoming. Nevertheless, I don't think Knut's interpretation of what happened is entirely fair. I recall quite well that Knut and I did talk after the conference, and I remember that Knut was not interested in working with upstream at the time, which was a pretty significant sticking point with me. I thought that pretty much ended the collaboration discussion. I also don't remember a common proposal; I just wanted to start sending code upstream.
As to the features we added to KTF, they were in response to real test needs (multithreaded test execution, override of return values to verify error codepaths, the need for environment/node specific configuration like different device presence/capabilities, different network configurations of the test nodes, and so on).
I see. But the upstreaming never happened. Well, at least the experience you now have with KTF can be leveraged for a proper upstream architecture.
And the quality of code. Gosh, much to be said about that...
We're of course happy to take constructive criticism - do you have any specifics in mind?
Slowly but surely. I'm happy with where we stand on trying to blend KUnit and KTF together. The generic netlink aspect of KTF, however, *does* strike me as important for scaling long term. I wasn't too happy with its level of documentation, though; I think that could be improved.
I'm still skeptical that netlink is the right approach. I imagine all we need is just to be able to trigger testing events in the kernel from user space and vice versa. I guess that could get pretty complex, but I think it is best to start with something that isn't (as I hope we have already identified in previous emails).
So I think that asking to consolidate with KUnit is the right thing at this point, because we need to grow it in *community*. KUnit itself has been heavily modified in response to early feedback, for example, so it's an example of the evolution needed.
Note that unlike KUnit, KTF executes no tests by default; instead it provides an API to query, set up, and trigger execution of tests and test parts in the kernel, leaving the actual initiation to userspace tools.
This is all excellent. However: baby steps. Let's demo it with a few simple tests, rather than trying to ensure it works with *all* the stuff you guys probably already have in house. That will probably have to be phased out in the future in favor of whatever we grow *together*.
I think this is a really great point. I know it sucks building something that you know will be thrown out on the way to the thing that you actually want to build; however, it is unfortunately often the right way to do things upstream. Unfortunate might not even be the right word - you don't really know what your ideas will evolve into when you throw them into such a large and varied community; maybe they will go the direction that you expect and some of that work may in fact be for naught; however, it is more likely that they will evolve into something else.
So yeah, I think finding a simple case to start with that pares down the hybrid test orchestration/signaling bit as much as possible is the right place to start.
Which brings me back to my original request for (constructive) feedback on what we already have created.
I think I've provided enough. My focus on this thread was to make sure we don't lose sight of the possible gains of a generic netlink interface for testing. I think the discussions that have followed will help pave the way for deciding if and how we start this.
That leaves standing the question of whether we really need the generic netlink interface to figure out if a test is available, or whether we can just use header files for this, as I mentioned.
Can you think of a test / case where it's needed and a header file may not suffice?
Do we really want to focus on this point now? I think it's better to start off with something simple and evolve it. We already need to find a way to interface KUnit with kselftest, which neither is currently intended to do. Netlink seems like a lot of unnecessary complexity to introduce at the same time as we try to build up the other necessary infrastructure in KUnit and kselftest.
Still, I don't feel that strongly about it. I don't actually really care what mechanism we end up using to tie everything together, as long as it doesn't add unnecessary complexity to standalone KUnit. I just figure we can move faster if we start off really simple and evolve it over time.
Cheers!
On Fri, Oct 18, 2019 at 11:35:00AM -0700, Brendan Higgins wrote:
On Fri, Oct 18, 2019 at 2:47 AM Luis Chamberlain mcgrof@kernel.org wrote:
On Thu, Oct 17, 2019 at 07:46:48PM +0200, Knut Omang wrote:
On Wed, 2019-10-16 at 13:08 +0000, Luis Chamberlain wrote:
On Wed, Oct 16, 2019 at 12:52:12PM +0200, Knut Omang wrote:
On Mon, 2019-10-14 at 13:01 -0600, shuah wrote:
On 10/14/19 12:38 PM, Knut Omang wrote:
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
Is your point maybe in reference to new versions of kselftest being used to test stable kernels?
Yes, that's exactly right.
Today we deal with this by ensuring that selftests *should*, in theory, bail gracefully if a feature is not present in older kernels.
Also, we have two ways to run selftests:
- run all tests
- user runs a test manually
There's no configuration for trying to see which tests we should try to run, for instance. There is no handy orchestration infrastructure.
In fact, each test also has its own series of orchestration tidbits; for instance, the kmod test has a series of test cases to run and a number of times each should run. Similarly, sysctl has its own nomenclature for this. For each, refer to ALL_TESTS:
tools/testing/selftests/sysctl/sysctl.sh
------------------------------------------------------------------------
# This represents
#
# TEST_ID:TEST_COUNT:ENABLED:TARGET
#
# TEST_ID: is the test id number
# TEST_COUNT: number of times we should run the test
# ENABLED: 1 if enabled, 0 otherwise
# TARGET: test target file required on the test_sysctl module
#
# Once these are enabled please leave them as-is. Write your own test,
# we have tons of space.
ALL_TESTS="0001:1:1:int_0001"
ALL_TESTS="$ALL_TESTS 0002:1:1:string_0001"
ALL_TESTS="$ALL_TESTS 0003:1:1:int_0002"
ALL_TESTS="$ALL_TESTS 0004:1:1:uint_0001"
ALL_TESTS="$ALL_TESTS 0005:3:1:int_0003"
ALL_TESTS="$ALL_TESTS 0006:50:1:bitmap_0001"
------------------------------------------------------------------------
tools/testing/selftests/kmod/kmod.sh
------------------------------------------------------------------------
# This represents
#
# TEST_ID:TEST_COUNT:ENABLED
#
# TEST_ID: is the test id number
# TEST_COUNT: number of times we should run the test
# ENABLED: 1 if enabled, 0 otherwise
#
# Once these are enabled please leave them as-is. Write your own test,
# we have tons of space.
ALL_TESTS="0001:3:1"
ALL_TESTS="$ALL_TESTS 0002:3:1"
ALL_TESTS="$ALL_TESTS 0003:1:1"
ALL_TESTS="$ALL_TESTS 0004:1:1"
ALL_TESTS="$ALL_TESTS 0005:10:1"
ALL_TESTS="$ALL_TESTS 0006:10:1"
ALL_TESTS="$ALL_TESTS 0007:5:1"
ALL_TESTS="$ALL_TESTS 0008:150:1"
ALL_TESTS="$ALL_TESTS 0009:150:1"
------------------------------------------------------------------------
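The two ALL_TESTS formats above differ only in the optional TARGET field, so a shared parser seems plausible. Below is a hypothetical sketch (not existing kselftest code; the entries and the run_test stub are made up) of what a common loop over such descriptors could look like:

```shell
#!/bin/sh
# Hypothetical sketch: parse TEST_ID:TEST_COUNT:ENABLED[:TARGET]
# descriptors with one shared loop instead of per-script copies.

# A made-up mix of entries in the format documented above.
ALL_TESTS="0001:1:1:int_0001 0005:3:1:int_0003 0002:3:0"

run_test()  # stand-in for the real per-test entry point
{
	echo "running $1 (target: ${2:-none})"
}

ran=""
for entry in $ALL_TESTS; do
	test_id=$(echo "$entry" | cut -d: -f1)
	count=$(echo "$entry" | cut -d: -f2)
	enabled=$(echo "$entry" | cut -d: -f3)
	target=$(echo "$entry" | cut -d: -f4)

	# ENABLED is 0: skip the test entirely.
	[ "$enabled" = "1" ] || continue

	# Run the test TEST_COUNT times.
	i=0
	while [ "$i" -lt "$count" ]; do
		run_test "$test_id" "$target"
		ran="$ran $test_id"
		i=$((i + 1))
	done
done
```

A shared helper along these lines would let each script keep only its ALL_TESTS list and its per-test logic.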
A while ago we decided it didn't make sense to share an infrastructure for specifying this. But now is a good time to ask ourselves that again, given that this seems to be part of "orchestration".
Also, even if one *can* compile and run a test, as you suggest, one may not want to run it. This has me thinking that perhaps using our own
That makes sense. I imagine that most developers will only run their own tests over the course of normal development. Running all tests is probably something that is mostly for CI/CD systems (or sad souls doing large scale refactoring).
kconfig entries for userspace targets / test cases to run makes sense.
That's my position at this point. For hybrid testing, it seems obvious to me that userspace should be able to dictate when a test gets run (since we have to have the two tests coordinate when they run, we might as well have userspace - where it is easiest - take control of the coordination). Nevertheless, I don't see why we cannot just lean on Kconfigs or Kselftest for deciding which tests to run.
We can, it's just not there yet. There is a difference between, say, enabling CONFIG_TEST_KMOD and running the respective test; we don't currently have a userspace configuration mechanism. While kconfig seems trivial for this, it is important to point out that if we want to consider this as a path forward, we may want to allow this configuration to be *independent* of the kernel. That would allow a tarball of new selftests to be deployed on another kernel, say, and you'd just run 'make menuconfig' for the test suite.
It doesn't *need* to be independent; however, I see the possibility of detachment as a gain. It's also rather odd to consider having, say, CONFIG_SELFTEST_KMOD in a kernel config.
Doing this is rather easy; it just needs to be discussed eventually whether we want it.
But if I just want to run a single test, or I want to randomize the order the tests are run (to avoid unintended interdependencies), or whatever other ideas people might have outside the default setup, KTF offers that flexibility via netlink, and allows use of a mature user land unit test suite to facilitate those features, instead of having to reinvent the wheel in kernel code.
There is a difference between saying generic netlink is a good option we should seriously consider for this problem vs. saying KTF's generic netlink implementation is suitable for it. I am stating only the former. The KTF netlink solution left much to be desired, and I am happy we are moving slowly with integrating what you had in KTF with what KUnit is. For instance, the level of documentation could be heavily improved. My litmus test for a good generic netlink interface is that it is documented as well as, say, include/uapi/linux/nl80211.h.
Yes, you can have scripts and your own test infrastructure that knows what to test or avoid, but this is not nice for autonegotiation.
We haven't really set out to do anything ambitious in this area with KTF; we have just used simple naming of tests into classes, wildcards, etc. to run a subset of tests.
It's a good start.
One question stands out to me based on what you say you had:
Do we want to have an interface for userspace to send information to the kernel about what tests we want to run, or should userspace just run one test at a time using its own heuristics?
One possible gain of having the kernel be informed of a series of tests is that it could handle ordering, say if it wanted to parallelize things. But I think handling this in the kernel can be complex. Consider fixes to it: we'd have to ensure a fix to this logic gets propagated as well. Userspace could just batch out tests, and if there are issues with parallelizing them one can hope the kernel knows what it can and cannot do. This would allow the batching / gathering to be done completely in userspace.
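The userspace-side batching described above could be sketched roughly as follows (a hypothetical sketch: the test names and the run_one stub are made up, and GNU shuf is assumed to be available):

```shell
#!/bin/sh
# Hypothetical sketch: userspace batches the tests and randomizes the
# order (to catch unintended interdependencies), while the kernel side
# only ever sees "run this one test now".

TESTS="test_alpha test_beta test_gamma"

run_one()  # stand-in for triggering the kernel-side test
{
	echo "ok $1"
}

results=""
# Shuffle, then dispatch one test at a time from userspace.
for t in $(echo "$TESTS" | tr ' ' '\n' | shuf); do
	run_one "$t"
	results="$results $t"
done
```

Randomizing the order in userspace like this needs no kernel support at all.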
I guess that seems reasonable. We are still talking about hybrid testing, right?
Yes
If so, can we just rely on Kselftest for this functionality? I am not saying that already does everything that we need; I am just making the point that Kselftest seems like the obvious place to orchestrate tests on a system that have a userspace component. (Sorry if that is obvious, I just get the sense that we have lost sight of that point.)
Yes, it's just that selftests lets you run all tests or run the individual tests you see fit; it has no orchestration.
Consider also the complexity of testing new tests on older kernels. Today we address this in kselftests by striving to make sure the old scripts / tests work, or yield gracefully, on old kernels that don't have a feature. What if we want to really figure out, concretely, what is there and what is not?
A generic netlink interface could easily allow for these sorts of things to grow and be auto negotiated.
In general I agree that these types of negotiation are something that can be explored. You can view the KTF code for setting up network contexts as something of a start in that sense. We have code in place that uses such network context configuration to let a unit test suite exchange address information as part of a single program, multiple data (SPMD) program, using MPI, allowing unit tests that require more than a single node to run.
You went from one host to multiple hosts.. and MPI... I was just thinking about communications about what tests to run locally depending on the kernel. I'll ignore the MPI stuff for now.
Then, collection of results: we have each kselftest designing its own scatter/gather set of ways to test what it can, expose what should be tested, in what order, or with what knobs. A generic netlink interface would allow a standard interface to be sketched up.
I think we solve this by exposing flexibility to userspace. Why would you want to have the kernel involved in that instead of just giving user mode enough info?
Indeed, userspace is best for this.
Yeah, sounds pretty obvious to me too. I mean, if we have a test with both a user space and kernel space component, we have to orchestrate them from either kernel or user space. Since orchestration already has to exist in userspace for kselftest, we might as well do the orchestration for hybrid tests in the same place.
However, I feel like Knut is trying to make a point here beyond the obvious, as I don't think anyone at any time has suggested that the orchestration for tests with a userspace component should be done in the kernel. Then again, maybe not; I just want to make sure we are on the same page.
Knut was pointing out the generic netlink interface from KTF allowed userspace to figure out what to orchestrate.
I don't think the generic netlink interface implemented in the KTF patches accomplished any of this, but it at least moved the needle forward IMHO in terms of what we should consider long term.
It would be really good if more people had a closer look at the KTF patch set before we embark on significant work of porting it to Kunit.
For reference, the netlink code in KTF: https://lkml.org/lkml/2019/8/13/92
I read it, and it seems like a lot of stuff was implemented without much justification and without much community feedback. In short: too much done in-house.
Then it became bloated, and once KUnit was about to be merged you guys knee-jerk reacted and wanted to merge your stuff too...
Again, just for the record, the fact is that we presented KTF at LPC in 2017 (see https://lwn.net/Articles/735034/) and pushed it to github for review and comments at the same time (https://github.com/oracle/ktf). Looking in the git repo, this was just a few weeks after the initial refactoring from a project specific predecessor was complete. I discussed collaboration with Brendan, who had a very early draft of what later became Kunit with him, and expected from our conversation that we would work together on a common proposal that covered both main use cases.
Alright, so KUnit was just too late to lay the groundwork insofar as upstream is concerned. Thanks for the corrections.
Huh? KUnit was too late to lay the groundwork for what?
I meant KTF.
As to the features we added to KTF, they were in response to real test needs (multithreaded test execution, override of return values to verify error codepaths, the need for environment/node specific configuration like different device presence/capabilities, different network configurations of the test nodes, and so on).
I see. But the upstreaming never happened. Well, at least the experience you now have with KTF can be leveraged for a proper upstream architecture.
And the quality of code. Gosh, much to be said about that...
We're of course happy to take constructive criticism - do you have any specifics in mind?
Slowly but surely. I'm happy with where we stand on trying to blend KUnit and KTF together. The generic netlink aspect of KTF, however, *does* strike me as important for scaling long term. I wasn't too happy with its level of documentation, though; I think that could be improved.
I'm still skeptical that netlink is the right approach. I imagine all we need is just to be able to trigger testing events in the kernel from user space and vice versa. I guess that could get pretty complex, but I think it is best to start with something that isn't (as I hope we have already identified in previous emails).
Simplicity is king; however, I don't think Knut did sufficient justice to the one possible gain of it that I saw. Consider that Knut also already has a testing infrastructure which relies on orchestration -- that means an alternative is needed anyway, which is why I figured I'd loop him in, to point out at least one value I see in it.
Whether or not we embrace generic netlink *should still be debated*; however, if we *don't* go with it, Knut will need to figure out another way to allow orchestration.
Which brings me back to my original request for (constructive) feedback on what we already have created.
I think I've provided enough. My focus on this thread was to make sure we don't lose sight of the possible gains of a generic netlink interface for testing. I think the discussions that have followed will help pave the way for deciding if and how we start this.
That leaves standing the question of whether we really need the generic netlink interface to figure out if a test is available, or whether we can just use header files for this, as I mentioned.
Can you think of a test / case where it's needed and a header file may not suffice?
Do we really want to focus on this point now? I think it's better to start off with something simple and evolve it.
Addressing it can help simplify things long term, as perhaps we really don't need something like generic netlink to orchestrate.
Luis
On Fri, Oct 18, 2019 at 12:23 PM Luis Chamberlain mcgrof@kernel.org wrote:
On Fri, Oct 18, 2019 at 11:35:00AM -0700, Brendan Higgins wrote:
On Fri, Oct 18, 2019 at 2:47 AM Luis Chamberlain mcgrof@kernel.org wrote:
On Thu, Oct 17, 2019 at 07:46:48PM +0200, Knut Omang wrote:
On Wed, 2019-10-16 at 13:08 +0000, Luis Chamberlain wrote:
On Wed, Oct 16, 2019 at 12:52:12PM +0200, Knut Omang wrote:
On Mon, 2019-10-14 at 13:01 -0600, shuah wrote:
On 10/14/19 12:38 PM, Knut Omang wrote:
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
Is your point maybe in reference to new versions of kselftest being used to test stable kernels?
Yes, that's exactly right.
Today we deal with this by ensuring that selftests *should*, in theory, bail gracefully if a feature is not present in older kernels.
Also, we have two ways to run selftests:
- run all tests
- user runs a test manually
There's no configuration for trying to see which tests we should try to run, for instance. There is no handy orchestration infrastructure.
In fact, each test also has its own series of orchestration tidbits; for instance, the kmod test has a series of test cases to run and a number of times each should run. Similarly, sysctl has its own nomenclature for this. For each, refer to ALL_TESTS:
tools/testing/selftests/sysctl/sysctl.sh
# This represents
#
# TEST_ID:TEST_COUNT:ENABLED:TARGET
#
# TEST_ID: is the test id number
# TEST_COUNT: number of times we should run the test
# ENABLED: 1 if enabled, 0 otherwise
# TARGET: test target file required on the test_sysctl module
#
# Once these are enabled please leave them as-is. Write your own test,
# we have tons of space.
ALL_TESTS="0001:1:1:int_0001"
ALL_TESTS="$ALL_TESTS 0002:1:1:string_0001"
ALL_TESTS="$ALL_TESTS 0003:1:1:int_0002"
ALL_TESTS="$ALL_TESTS 0004:1:1:uint_0001"
ALL_TESTS="$ALL_TESTS 0005:3:1:int_0003"
ALL_TESTS="$ALL_TESTS 0006:50:1:bitmap_0001"
tools/testing/selftests/kmod/kmod.sh
# This represents
#
# TEST_ID:TEST_COUNT:ENABLED
#
# TEST_ID: is the test id number
# TEST_COUNT: number of times we should run the test
# ENABLED: 1 if enabled, 0 otherwise
#
# Once these are enabled please leave them as-is. Write your own test,
# we have tons of space.
ALL_TESTS="0001:3:1"
ALL_TESTS="$ALL_TESTS 0002:3:1"
ALL_TESTS="$ALL_TESTS 0003:1:1"
ALL_TESTS="$ALL_TESTS 0004:1:1"
ALL_TESTS="$ALL_TESTS 0005:10:1"
ALL_TESTS="$ALL_TESTS 0006:10:1"
ALL_TESTS="$ALL_TESTS 0007:5:1"
ALL_TESTS="$ALL_TESTS 0008:150:1"
ALL_TESTS="$ALL_TESTS 0009:150:1"
A while ago we decided it didn't make sense to share an infrastructure for specifying this. But now is a good time to ask ourselves that again, given that this seems to be part of "orchestration".
Got it. Okay, that makes a lot of sense.
So given we need orchestration for hybrid tests and orchestration for the already existing kselftest, would it maybe be easier to solve the orchestration problem for regular kselftests first? Sorry, I don't mean to totally derail the conversation, but it seems that those two things should work together, and hybrid test orchestration is a much bigger, harder problem.
I am also realizing that I think we are talking about two things under orchestration:
1) We are talking about deciding which tests to run, how, and when.
2) We are talking about signaling within a test between the kernel and user space components.
I feel like the two points are related, but (1) is applicable to all of kselftest, whereas (2) is only applicable to hybrid testing.
Also, even if one *can* compile and run a test, as you suggest, one may not want to run it. This has me thinking that perhaps using our own
That makes sense. I imagine that most developers will only run their own tests over the course of normal development. Running all tests is probably something that is mostly for CI/CD systems (or sad souls doing large scale refactoring).
kconfig entries for userspace targets / test cases to run makes sense.
That's my position at this point. For hybrid testing, it seems obvious to me that userspace should be able to dictate when a test gets run (since we have to have the two tests coordinate when they run, we might as well have userspace - where it is easiest - take control of the coordination). Nevertheless, I don't see why we cannot just lean on Kconfigs or Kselftest for deciding which tests to run.
We can, it's just not there yet. There is a difference between, say, enabling CONFIG_TEST_KMOD and running the respective test; we don't currently have a userspace configuration mechanism. While kconfig seems trivial for this, it is important to point out that if we want to consider this as a path forward, we may want to allow this configuration to be *independent* of the kernel. That would allow a tarball of new selftests to be deployed on another kernel, say, and you'd just run 'make menuconfig' for the test suite.
Got it. That makes sense.
I like your idea of a kconfig-like configuration for kselftest that is, well, at least somewhat independent. Maybe we could have a separate kselftest configuration mechanism that is able to depend on the in-kernel kconfig, with some of the relevant information stored in the kernel and extracted at runtime by kselftest? I guess we already have the config the kernel was built with available from the running kernel (e.g. /proc/config.gz when CONFIG_IKCONFIG_PROC is enabled).
Man, this is a pretty deep problem. On one hand you definitely need to involve existing Kconfig/Kbuild for the in-kernel stuff (even if KUnit was configured separately, it still depends on what code is compiled into the kernel), but I also see the point of wanting to configure outside of the kernel for kselftest. Netlink or not, this seems like a pretty massive problem that we could have a thread about all on its own.
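As one concrete (hypothetical) illustration of the runtime-extraction idea: a kselftest helper could gate tests on kernel config options roughly like this. On a live system the config would come from /proc/config.gz (which requires CONFIG_IKCONFIG_PROC=y); an inline sample config is used here so the sketch stays self-contained, and the option names are only examples.

```shell
#!/bin/sh
# Hypothetical sketch: decide which tests are runnable by inspecting
# the kernel configuration. On a live system, replace the inline
# sample with: config=$(zcat /proc/config.gz)

config=$(cat <<'EOF'
CONFIG_KUNIT=y
# CONFIG_TEST_KMOD is not set
CONFIG_TEST_SYSCTL=m
EOF
)

have_config()  # true if the option is built in (=y) or a module (=m)
{
	echo "$config" | grep -Eq "^$1=(y|m)$"
}

available=""
for opt in CONFIG_KUNIT CONFIG_TEST_KMOD CONFIG_TEST_SYSCTL; do
	if have_config "$opt"; then
		available="$available $opt"
		echo "$opt: present, dependent tests can run"
	else
		echo "$opt: absent, skipping dependent tests"
	fi
done
```

This only answers "is the feature compiled in"; it doesn't replace the separate, possibly kernel-independent configuration of *which* tests the user wants to run.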
It doesn't *need* to be independent; however, I see the possibility of detachment as a gain. It's also rather odd to consider having, say, CONFIG_SELFTEST_KMOD in a kernel config.
Right.
Doing this is rather easy; it just needs to be discussed eventually whether we want it.
I don't know about that; this seems like a pretty substantial undertaking, but then again, it sounds like you have thought about this a lot more than me :-)
But if I just want to run a single test, or I want to randomize the order the tests are run (to avoid unintended interdependencies), or whatever other ideas people might have outside the default setup, KTF offers that flexibility via netlink, and allows use of a mature user land unit test suite to facilitate those features, instead of having to reinvent the wheel in kernel code.
There is a difference between saying generic netlink is a good option we should seriously consider for this problem vs. saying KTF's generic netlink implementation is suitable for it. I am stating only the former. The KTF netlink solution left much to be desired, and I am happy we are moving slowly with integrating what you had in KTF with what KUnit is. For instance, the level of documentation could be heavily improved. My litmus test for a good generic netlink interface is that it is documented as well as, say, include/uapi/linux/nl80211.h.
Yes, you can have scripts and your own test infrastructure that knows what to test or avoid, but this is not nice for autonegotiation.
We haven't really set out to do anything ambitious in this area with KTF; we have just used simple naming of tests into classes, wildcards, etc. to run a subset of tests.
It's a good start.
One question stands out to me based on what you say you had:
Do we want to have an interface for userspace to send information to the kernel about what tests we want to run, or should userspace just run one test at a time using its own heuristics?
One possible gain of having the kernel be informed of a series of tests is that it could handle ordering, say if it wanted to parallelize things. But I think handling this in the kernel can be complex. Consider fixes to it: we'd have to ensure a fix to this logic gets propagated as well. Userspace could just batch out tests, and if there are issues with parallelizing them one can hope the kernel knows what it can and cannot do. This would allow the batching / gathering to be done completely in userspace.
I guess that seems reasonable. We are still talking about hybrid testing, right?
Yes
If so, can we just rely on Kselftest for this functionality? I am not saying that already does everything that we need; I am just making the point that Kselftest seems like the obvious place to orchestrate tests on a system that have a userspace component. (Sorry if that is obvious, I just get the sense that we have lost sight of that point.)
Yes, it's just that selftests lets you run all tests or run the individual tests you see fit; it has no orchestration.
Got it. Sorry, for the unnecessary comment.
Consider also the complexity of testing new tests on older kernels. Today we address this in kselftests by striving to make sure the old scripts / tests work, or yield gracefully, on old kernels that don't have a feature. What if we want to really figure out, concretely, what is there and what is not?
A generic netlink interface could easily allow for these sorts of things to grow and be auto negotiated.
In general I agree that these types of negotiation are something that can be explored. You can view the KTF code for setting up network contexts as something of a start in that sense. We have code in place that uses such network context configuration to let a unit test suite exchange address information as part of a single program, multiple data (SPMD) program, using MPI, allowing unit tests that require more than a single node to run.
You went from one host to multiple hosts.. and MPI... I was just thinking about communications about what tests to run locally depending on the kernel. I'll ignore the MPI stuff for now.
Then, collection of results: we have each kselftest designing its own scatter/gather set of ways to test what it can, expose what should be tested, in what order, or with what knobs. A generic netlink interface would allow a standard interface to be sketched up.
I think we solve this by exposing flexibility to userspace. Why would you want to have the kernel involved in that instead of just giving user mode enough info?
Indeed, userspace is best for this.
Yeah, sounds pretty obvious to me too. I mean, if we have a test with both a user space and kernel space component, we have to orchestrate them from either kernel or user space. Since orchestration already has to exist in userspace for kselftest, we might as well do the orchestration for hybrid tests in the same place.
However, I feel like Knut is trying to make a point here beyond the obvious, as I don't think anyone at any time has suggested that the orchestration for tests with a userspace component should be done in the kernel. Then again, maybe not; I just want to make sure we are on the same page.
Knut was pointing out the generic netlink interface from KTF allowed userspace to figure out what to orchestrate.
Cool. Sorry again.
I don't think the generic netlink interface implemented in the KTF patches accomplished any of this, but it at least moved the needle forward IMHO in terms of what we should consider long term.
It would be really good if more people had a closer look at the KTF patch set before we embark on significant work of porting it to Kunit.
For reference, the netlink code in KTF: https://lkml.org/lkml/2019/8/13/92
I read it, and it seems like a lot of stuff was implemented without much justification and without much community feedback. In short: too much done in-house.
Then it became bloated, and once KUnit was about to be merged you guys knee-jerk reacted and wanted to merge your stuff too...
Again, just for the record, the fact is that we presented KTF at LPC in 2017 (see https://lwn.net/Articles/735034/) and pushed it to github for review and comments at the same time (https://github.com/oracle/ktf). Looking in the git repo, this was just a few weeks after the initial refactoring from a project specific predecessor was complete. I discussed collaboration with Brendan, who had a very early draft of what later became Kunit with him, and expected from our conversation that we would work together on a common proposal that covered both main use cases.
Alright, so KUnit was just too late to lay the groundwork insofar as upstream is concerned. Thanks for the corrections.
Huh? KUnit was too late to lay the groundwork for what?
I meant KTF.
Okay, that makes a lot more sense. Nevermind. Sorry about the tangent.
As to the features we added to KTF, they were in response to real test needs (multithreaded test execution, override of return values to verify error codepaths, the need for environment/node specific configuration like different device presence/capabilities, different network configurations of the test nodes, and so on).
I see. But the upstreaming never happened. Well, at least the experience you now have with KTF can be leveraged for a proper upstream architecture.
And the quality of code. Gosh, much to be said about that...
We're of course happy to take constructive criticism - do you have any specifics in mind?
Slowly but surely. I'm happy with where we stand on trying to blend KUnit and KTF together. The generic netlink aspect of KTF, however, *does* strike me as important for scaling long term. I wasn't too happy with its level of documentation, though; I think that could be improved.
I'm still skeptical that netlink is the right approach. I imagine all we need is just to be able to trigger testing events in the kernel from user space and vice versa. I guess that could get pretty complex, but I think it is best to start with something that isn't (as I hope we have already identified in previous emails).
Simplicity is king; however, I don't think Knut did sufficient justice to the one possible gain of it that I saw. Consider that Knut also already has a testing infrastructure which relies on orchestration -- that means an alternative is needed anyway, which is why I figured I'd loop him in, to point out at least one value I see in it.
Fair enough.
Whether or not we embrace generic netlink *should still be debated*; however, if we *don't* go with it, Knut will need to figure out another way to allow orchestration.
Absolutely. I wasn't trying to shut the conversation down entirely. I just figured the scope of what we were thinking about was swelling pretty dramatically. I thought Shuah, Knut, and myself had an executable plan to work from, and here we were pretty significantly expanding the scope. Nevertheless, I guess those two things are not exclusive. We can start out with something simple while also concurrently discussing what the long term plan is.
Which brings me back to my original request for (constructive) feedback on what we already have created.
I think I've provided enough. My focus on this thread was that I hope we don't lose sight of the possible gains of a generic netlink interface for testing. I think the discussions that have followed will help pave the way for if and how we start this.
It leaves standing the question if we really need the generic netlink interface to figure out if a test is available or if we can just use header files for this as I mentioned.
Can you think of a test / case where it's needed and a header file may not suffice?
Do we really want to focus on this point now? I think it's better to start off with something simple and evolve it.
Addressing it can help simplify things long term, as perhaps we really don't need something like generic netlink to orchestrate.
Sure, I just think it sounds like we don't need it for a lot of things, so if something significantly simpler exists, maybe we should just start there. But yeah, starting that discussion here and now doesn't hurt as long as we don't lose sight of something concrete in the short term.
Cool, sounds like we are on the same page then.
Thanks!
On Fri, Oct 18, 2019 at 12:58:01PM -0700, Brendan Higgins wrote:
On Fri, Oct 18, 2019 at 12:23 PM Luis Chamberlain mcgrof@kernel.org wrote:
Do we really want to focus on this point now? I think it's better to start off with something simple and evolve it.
Addressing it can help simplify things long term, as perhaps we really don't need something like generic netlink to orchestrate.
Sure, I just think it sounds like we don't need it for a lot of things, so if something significantly simpler exists, maybe we should just start there. But yeah, starting that discussion here and now doesn't hurt as long as we don't lose sight of something concrete in the short term.
Cool, sounds like we are on the same page then.
Indeed.
Luis
On 10/16/19 7:08 AM, Luis Chamberlain wrote:
On Wed, Oct 16, 2019 at 12:52:12PM +0200, Knut Omang wrote:
On Mon, 2019-10-14 at 13:01 -0600, shuah wrote:
On 10/14/19 12:38 PM, Knut Omang wrote:
On Mon, 2019-10-14 at 10:42 +0000, Luis Chamberlain wrote:
On Fri, Sep 13, 2019 at 02:02:47PM -0700, Brendan Higgins wrote:
Hey Knut and Shuah,
Following up on our offline discussion on Wednesday night:
We decided that it would make sense for Knut to try to implement Hybrid Testing (testing that crosses the kernel userspace boundary) that he introduced here[1] on top of the existing KUnit infrastructure.
We discussed several possible things in the kernel that Knut could test with the new Hybrid Testing feature as an initial example. Those were (in reverse order of expected difficulty):
- RDS (Reliable Datagram Sockets) - We decided that, although this was one of the more complicated subsystems to work with, it was probably the best candidate for Knut to start with because it was in desperate need of better testing, much of the testing would require crossing the kernel userspace boundary to be effective, and Knut has access to RDS (since he works at Oracle).
Any update on if you are able to explore this work.
I am working on this, but it's going to take some time, as this ties in with internal projects at Oracle. Basing work on RDS or RDS related tests (such as generic socket etc) is the best option for us, since that allows progress on our internal deliverables as well ;-)
- KMOD - Probably much simpler than RDS, and the maintainer, Luis Chamberlain (CC'ed) would like to see better testing here, but probably still not as good as RDS because it is in less dire need of testing, collaboration on this would be more difficult, and Luis is currently on an extended vacation. Luis and I had already been discussing testing KMOD here[2].
I'm back!
I'm also happy and thrilled to help review the infrastructure in great detail given I have lofty future objectives with testing in the kernel. Also, kmod is a bit more complex to test, if Knut wants a simpler *easy* target I think test_sysctl.c would be a good target. I think the goal there would be to add probes for a few of the sysctl callers, and then test them through userspace somehow, for instance?
That sounds like a good case for the hybrid tests. The challenge in a kunit setting would be that it relies on a significant part of KTF to work as we have used it so far:
- module support - Alan has been working on this
I see the patches. Thanks for working on this.
- netlink approach from KTF (to allow user space execution of kernel part of test, and gathering reporting in one place)
- probe infrastructure
The complexity with testing kmod is the threading aspect, so that is more of a challenge for a test infrastructure as a whole. However, kmod also already has a pretty sound kthread solution which could be used as a basis for any sound kernel multithread test solution.
Curious, what was decided with regard to the generic netlink approach?
Can this work be done without the netlink approach? At least some of it? I would like to see some patches and would like to get a better feel for the dependency on generic netlink.
A flexible out-of-band communication channel is needed for several of the features, and definitely for hybrid tests. It does not need to be netlink in principle, but netlink has served the purpose well so far in KTF, and reimplementing something will come at the cost of the core task of getting more and better tests, which after all is the goal of this effort.
I don't think you did justice to *why* netlink would be good, but in principle I suspect it's the right thing long term if we want a nice interface to decide what to test and how.
So kselftest today implies that you know what you want to test. Or rather, this is defined through what you enable, and then you run *all* enabled kselftests.
Right. Kselftests are static in nature, and that is intended. There is some level of flexibility to select tests using TARGETS, and the newly added SKIP_TARGETS allows flexibility to skip. That is in line with these being the developer tests. The rest of the nice wrapper stuff is left for users and CIs to figure out.
Similar logic follows for kunit.
Yes, you can have scripts and your own test infrastructure that knows what to test or avoid, but this is not nice for autonegotiation. Consider also the complexity of dealing with testing new tests on older kernels. Today we address this in kselftests by striving to make sure the old scripts / tests work or yield for old kernels that don't have a feature. What if we want to really figure out what is there or not, concretely? A generic netlink interface could easily allow for these sorts of things to grow and be auto-negotiated.
Then, collection of results: today each kselftest designs its own scatter/gather way to test what it can, and to expose what should be tested, in what order, or with what knobs. A generic netlink interface would allow a standard interface to be sketched out.
I don't think the generic netlink interface implemented in the KTF patches accomplished any of this, but it at least moved the needle forward, IMHO, in terms of what we should consider long term.
It would be really good if more people had a closer look at the KTF patch set before we embark on significant work of porting it to Kunit.
For reference, the netlink code in KTF: https://lkml.org/lkml/2019/8/13/92
I read it, and it seems like a lot of stuff was implemented without a lot of proper justification and without much community feedback. Kind of like: too much done in-house without proper feedback. Then it became bloated, and once kunit was about to be merged you guys knee-jerk reacted and wanted to merge your stuff too...
And the quality of code. Gosh, much to be said about that...
So I think that asking to consolidate with kunit is the right thing at this point, because we need to grow it in *community*. kunit itself has been heavily modified to adjust to early feedback, for example, so it's an example of the evolution needed.
Note that unlike kunit, in KTF no tests are executed by default; instead, KTF provides an API to query, set up, and trigger execution of tests and test parts in the kernel, leaving the actual initiation to user space tools.
This is all excellent. However baby steps. Let's demo it with a few simple tests, rather than trying to ensure it works with *all* the stuff you guys probably already have in house. That will probably have to be phased out in the future with whatever we grow *together*.
I am in favor of adding features that would make it easier for testing the kernel. As I said in another response to this thread, it would be nice to start simple with one or two tests and go from there.
thanks, -- Shuah