On Tue, Oct 17, 2017 at 11:26:36AM -0500, Dan Rue wrote:
On Tue, Oct 17, 2017 at 03:39:15PM +0000, Neil Williams wrote:
What we will also need is a map of which tests stress which features, so a sane metric for feature coverage inside and outside the kernel would be needed here.
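To make that concrete, here is a minimal sketch in Python of what such a map and the metric falling out of it could look like; the test and feature names are hypothetical placeholders, not a proposed schema:

# Sketch of a test-to-feature map; all names here are hypothetical
# examples, not a proposal for the real schema.
TEST_FEATURES = {
    "ltp-syscalls": ["syscalls", "mm", "fs"],
    "kselftest-net": ["networking"],
    "kselftest-bpf": ["bpf"],
}

# Features we would want to track, in and out of the kernel.
ALL_FEATURES = ["syscalls", "mm", "fs", "networking", "bpf",
                "graphics", "multimedia", "bluetooth"]

def feature_coverage(test_map, features):
    """Return the fraction of listed features hit by at least one test."""
    covered = {f for tests in test_map.values() for f in tests}
    return len(covered & set(features)) / len(features)

# Prints 62%: 5 of the 8 listed features have at least one test.
print(f"feature coverage: {feature_coverage(TEST_FEATURES, ALL_FEATURES):.0%}")

The real map would obviously need to be maintained alongside the test definitions, but even something this crude would make the holes visible rather than anecdotal.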
From the blog post:
Some people use it to find areas where coverage is weak. There may be good reasons that some parts of a code base are sparsely covered by tests, but doing a manual inspection once in a while is a good idea. Perhaps you find that all is good, but you may also discover that a quality effort is overdue.
You are aware of the tendencies people have to latch onto metrics, right? In this case it's not like it's going to come as a sudden revelation that there are holes in coverage.
Do you have a good suggestion for evaluating feature coverage? I don't disagree with your feedback, but it would be good to have some shared perspective on coverage analysis so that we can improve it strategically rather than by gut feeling or in reaction to uncaught problems.
I made a couple of concrete suggestions on this in my prior mail - picking up existing testsuites and looking at areas where there's active development or an awareness of frequent problems (including things like what's getting a lot of attention in terms of stable fixes). We could also just look at a phone and think about the subsystems it relies on; glancing at mine, graphics, multimedia, extcon, Bluetooth and networking jump out off the top of my head as having weak coverage.
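As a rough sketch of the stable-fixes angle (assuming it's run from the root of a kernel git checkout; the time window is just an example), one could tally recent commits by top-level directory as a crude proxy for where fixes are landing:

# Rough sketch: count recent commits touching each top-level kernel
# directory, as a crude proxy for where fix activity is concentrated.
# Assumes it runs from the root of a kernel git checkout; the
# --since window is just an example.
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--no-merges", "--since=6 months ago",
     "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

counts = Counter(
    line.split("/", 1)[0]
    for line in log.splitlines()
    if "/" in line
)

for subsystem, n in counts.most_common(10):
    print(f"{n:6d}  {subsystem}")

Pointing the same thing at a stable tree, or filtering with something like git log --grep='Fixes:', would get closer to what I mean by looking at stable fixes.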
I also agree with Mark's response that my coverage suggestion is premature. This whole thread is premature. But it's also premature to bring in additional test suites at this time. We have to stabilize and expand on what we have, namely LTP.
I see where you're coming from, but I don't think it's quite that black and white. Getting testsuites integrated with the framework and getting them to run cleanly are two different activities which probably want to be carried out by different people, so there's something to be said for looking at the next batch of testsuites to stage into production before we're ready to do that. There will also be cases where different people should be looking at different testsuites due to their domain-specific knowledge, or where we can pull in people from the community, so that work can be parallelized as well. There are limits to how far that can go, though, and we do need to be careful we're not just flinging stuff at the wall.