On Tue, Oct 17, 2017 at 11:26 AM, Dan Rue dan.rue@linaro.org wrote:
The problem I have is that I don't know where coverage is strong and where it is weak. Before last week, if someone had suggested adding a 'dhclient' test, I would have told them it was redundant. Now I know that dhclient actually uses a different code path from both init and udhcpc. The only way I know to measure feature coverage is to look at which LTP tests we are running and which we are not, but that is a secondary measure.
Do you have a good suggestion for evaluating feature coverage? I don't disagree with your feedback, but it would be good to have some shared perspective on coverage analysis so that we can improve it strategically, rather than on gut feeling or as a reaction to uncaught problems.
I also agree with Mark's response that my coverage suggestion is premature. This whole thread is premature. But it's also premature to bring in additional test suites at this time. We have to stabilize and expand on what we have, namely LTP.
Some projects with a more disciplined testing approach ask developers to submit reasonably complete feature-based tests alongside the enablement patch, and thereafter require a new test for each regression that is encountered. Enforcing even just the latter can build reasonable coverage over time.
Is it premature to work with the test suite projects right now to make sure that these regressions (dhclient & KASAN) have a test created _somewhere_ to document them?
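For the dhclient case, a regression test could be as small as a shell smoke test. A minimal sketch, assuming an interface name in $IFACE, root privileges, and a DHCP server reachable on that network (none of these specifics come from the thread above, and this is not the actual test any suite ships):

```shell
#!/bin/sh
# Hypothetical smoke test for the dhclient regression discussed above.
# Assumes: root privileges, a DHCP server on the network, and an
# interface name in $IFACE (defaults to eth0 for illustration).
IFACE="${IFACE:-eth0}"

# Release any existing lease first; ignore errors if there is none.
dhclient -r "$IFACE" 2>/dev/null

# Ask for a fresh lease; -1 makes dhclient try once and exit nonzero
# on failure instead of retrying forever.
if ! dhclient -1 -v "$IFACE"; then
    echo "FAIL: dhclient could not obtain a lease on $IFACE" >&2
    exit 1
fi

# Verify the interface actually received an IPv4 address.
if ip -4 addr show dev "$IFACE" | grep -q "inet "; then
    echo "PASS: dhclient obtained a lease on $IFACE"
    exit 0
else
    echo "FAIL: no IPv4 address on $IFACE after dhclient run" >&2
    exit 1
fi
```

Something this small would at least document the regression in a runnable form, whichever suite it eventually lands in.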