On 21/05/2025 17:10, Jakub Kicinski wrote:
We could've saved this extra cycle if my questions [1], which were exactly about this topic, hadn't been ignored. The area is vague and not well defined. We can continue with the iterative guess-and-fix cycles, or alternatively get it clearly and formally defined.
I started answering a couple of times, but my hands go a little limp when I have to explain something as obvious as "testing is a crucial part of software development" :S
You're acting as if kernel testing is obvious and the only way to test the kernel is through selftests. The kernel and the networking subsystem are tested regardless of selftests; the fact that you don't see the tests does not mean the testing doesn't happen.
We are not new contributors; plenty of uAPI changes have been merged without selftests, so it's natural for us to try to understand this new requirement.
I mean.. nvidia certainly tests their code, so I'm not sure where the disconnect is.
Absolutely! Our testing and coverage are far more complex and valuable than what the existing selftests provide.
We have testing infrastructure that predates these recent selftest additions by years, and we don't plan on switching to something inferior.
Writing selftests will not replace our internal tests, so we'd be spending extra cycles adapting our tests to yet another project, just for the sake of having them in the kernel tree?
I had a short conversation with Gal at some conference where he, AFAIU, was doubting that device testing can be part of an open source project.
What's your point? Did that prevent me from being a top 10 netdev selftests contributor in v6.15?
I don't care whether tests are open source or not; I care about having good tests that run often and report bugs and regressions.
It certainly is not advantageous to companies to have to share their test code. So when you ask me for details on the rules, what I hear is "how can we make sure we do as little as possible".
No, I'm asking for predictability; we've been circling this point for quite some time.
We shouldn't have to wait until v9 to find out whether a certain submission requires a test. We shouldn't submit a bug fix only to find out that it's blocked due to the lack of a test.
As an example, we have a well-documented coding style, so:
- You don't have to waste your time commenting on things that could have been handled before submission.
- Rules are clear; comments don't rely on personal taste or mood.
- Developers learn, and the number of review iterations is reduced.
How is documenting the requirements for tests any different? We can't read your mind.
Broadly, any new uAPI should come with tests which exercise the functionality.
I'm fine with this definition (though I think it's too vague); can I add it to maintainer-netdev.rst?
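
For concreteness, here is a rough, minimal sketch of what "tests which exercise the functionality" could look like for a new socket-level uAPI, assuming the in-tree kselftest harness (tools/testing/selftests/kselftest_harness.h). SO_REUSEADDR is only a stand-in here for whatever new option or attribute a series would actually add; it is not the thing being proposed.

/* Hedged sketch only: exercises a socket option set/get round-trip
 * using the kselftest harness. A real test would target the newly
 * added uAPI instead of SO_REUSEADDR.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

#include "../kselftest_harness.h"

FIXTURE(sock)
{
        int fd;
};

FIXTURE_SETUP(sock)
{
        self->fd = socket(AF_INET, SOCK_STREAM, 0);
        ASSERT_GE(self->fd, 0);
}

FIXTURE_TEARDOWN(sock)
{
        close(self->fd);
}

TEST_F(sock, set_get_roundtrip)
{
        int val = 1, out = 0;
        socklen_t len = sizeof(out);

        /* Set the (stand-in) option, then read it back. */
        ASSERT_EQ(0, setsockopt(self->fd, SOL_SOCKET, SO_REUSEADDR,
                                &val, sizeof(val)));
        ASSERT_EQ(0, getsockopt(self->fd, SOL_SOCKET, SO_REUSEADDR,
                                &out, &len));
        EXPECT_NE(0, out);
}

TEST_HARNESS_MAIN

Something along those lines, built on the interface a series actually introduces, would presumably satisfy "exercise the functionality" without trying to replace vendor-internal coverage.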