On Tue, Oct 17, 2017 at 04:46:22PM -0500, Ryan Arnold wrote:
> On Tue, Oct 17, 2017 at 11:26 AM, Dan Rue <dan.rue@linaro.org> wrote:
> > The problem that I have is that I don't know where coverage is strong and where it is weak. Before last week, if someone had suggested adding a 'dhclient' test, I would have told them it was redundant. Now I know that dhclient actually uses a different code path than both init and udhcpc. The only way I know to measure feature coverage is to look at which LTP tests we're running and which we're not, but that is a secondary measure.
> >
> > Do you have a good suggestion for evaluating feature coverage? I don't disagree with your feedback, but it would be good to have some shared perspective on coverage analysis, so that we can improve it strategically rather than based on gut feelings or as a reaction to uncaught problems.
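
One way to measure feature coverage directly, rather than inferring it from which LTP tests run, is to build a debug kernel with gcov profiling and see which code paths a given workload actually exercises. A rough sketch (the config symbols are the standard gcov-kernel ones; the dhclient step is just an example workload):

  # Enable in .config: CONFIG_DEBUG_FS=y, CONFIG_GCOV_KERNEL=y, and
  # CONFIG_GCOV_PROFILE_ALL=y (profiles the whole kernel: slow, debug-only).
  mount -t debugfs none /sys/kernel/debug

  # Run the workload whose coverage you care about, e.g. the dhclient path:
  dhclient eth0

  # Per-object coverage counters now live under debugfs; copy them out and
  # post-process with gcov/lcov against the build tree:
  find /sys/kernel/debug/gcov -name '*.gcda'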
> > I also agree with Mark's response that my coverage suggestion is premature. This whole thread is premature. But it's also premature to bring in additional test suites at this time. We have to stabilize and expand on what we have, namely LTP.
>
> Some projects with a more disciplined testing approach ask developers to submit reasonably complete feature-based tests alongside the enablement patch, and require a new test for each regression encountered thereafter. If at least the latter is enforced, it can build reasonable coverage over time.

We try to ask for a new test to be added for every new syscall, which is how kselftest has been growing over the past few years. For other areas, like networking, storage, and filesystems, there are community-maintained test suites that cover those functions.
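
As a rough sketch of what one of those per-syscall tests looks like (this assumes the kselftest harness in tools/testing/selftests/kselftest_harness.h; getpid() is only a placeholder for a newly added syscall):

  /* Minimal kselftest sketch built on kselftest_harness.h. */
  #include <unistd.h>
  #include "../kselftest_harness.h"

  TEST(new_syscall_smoke)
  {
          /* A real test would also probe error paths and edge cases. */
          ASSERT_GT(getpid(), 0);
  }

  TEST_HARNESS_MAIN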
> Is it premature to work with the test suite projects right now to make sure that these regressions (dhclient & KASAN) have a test created _somewhere_ to document them?

Try implementing all of our known test suites first before worrying about this.
Oh, and a simple 'make allmodconfig' please; that would have caught the KASAN issue...
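
For reference, that check is roughly the following (the arm64 cross-toolchain prefix is an assumption; use whichever target you care about):

  # Build everything possible as a module so config-dependent breakage
  # surfaces at compile time:
  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- allmodconfig
  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)"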
thanks,
greg k-h