I was looking into improving the bundle detail view today. At first I thought this would be easy, but I'm wondering if my ideas are just going to be worse.
Some background. Take a view like:
For those without proper permission you roughly have a table like:
<test run>                  <test>            <passes>  <fail>
.....
"wifi-enablement results"   wifi-enablement   28        2
"perf results"              perf              3         5
"lava results"              lava              19        1
....
There are a couple of problems with this:
1) People have always been confused by "lava results", i.e. whether or not LAVA was actually able to perform a test run.

2) The "why doesn't this table include testXXX?" question, where the answer is: look at "lava results", and it shows testXXX failed to run.
My current thought is to update the table to include all the "lava_test_install (XXX)" test cases from the lava_results run that have failed. This way they show up (maybe bolded or highlighted in some way).
Thoughts on that? I worry the approach is going to be a bit too hard-coded, i.e. you have to query something like:
    models.TestResult.objects.filter(
        test_run__in=b.test_runs.filter(test__test_id='lava')
    ).exclude(result=0)

You might even want to add

    test_case__test_case_id__contains='lava_test'

to the exclude filter. Which means we are hard-coding "lava" as a test id and "lava_test*" as test cases.
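One way to soften the hard-coding complaint above is to collect the magic strings in a single configurable place instead of scattering them through queries. A minimal sketch, assuming hypothetical settings names (LAVA_TEST_ID and LAVA_CASE_PREFIX do not exist in lava-server today):

```python
# Hypothetical configuration for the harness's own test run and the
# prefix its setup cases use. Neither name is a real lava-server setting.
LAVA_TEST_ID = 'lava'
LAVA_CASE_PREFIX = 'lava_test'

def is_framework_case(test_id, test_case_id):
    """True for results produced by the harness itself (e.g. the
    "lava_test_install (XXX)" cases) rather than by a test the user
    actually requested."""
    return (test_id == LAVA_TEST_ID
            and test_case_id.startswith(LAVA_CASE_PREFIX))
```

The view code would then ask `is_framework_case(...)` (or build its filter arguments from the same two constants) rather than embedding "lava" and "lava_test" directly in each query.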
Andy Doan <andy.doan@linaro.org> writes:
> I was looking into improving the bundle detail view today. At first I thought this would be easy, but I'm wondering if my ideas are just going to be worse.
>
> Some background. Take a view like:
>
> For those without proper permission you roughly have a table like:
>
> <test run>                  <test>            <passes>  <fail>
> .....
> "wifi-enablement results"   wifi-enablement   28        2
> "perf results"              perf              3         5
> "lava results"              lava              19        1
> ....
>
> There are a couple of problems with this:
>
> - people have always been confused by "lava results", i.e. whether or
>   not LAVA was actually able to perform a test run.
>
> - The "why doesn't this table include testXXX?" question, where the
>   answer is: look at "lava results", and it shows testXXX failed to run.
>
> My current thought is to update the table to include all the "lava_test_install (XXX)" test cases from the lava_results run that have failed. This way they show up (maybe bolded or highlighted in some way).
>
> Thoughts on that?
I agree it's a problem.
> I worry the approach is going to be a bit too hard-coded, i.e. you have to query something like:
>
>     models.TestResult.objects.filter(
>         test_run__in=b.test_runs.filter(test__test_id='lava')
>     ).exclude(result=0)
>
> You might even want to add
>
>     test_case__test_case_id__contains='lava_test'
>
> to the exclude filter. Which means we are hard-coding "lava" as a test id and "lava_test*" as test cases.
I also agree that this is a problem :-) We do have bundles with no lava test run.
So what _is_ the problem with this page? I think it's that it tells a poor "story" about what happened during the job execution. We actually have most of (all of?) the information needed to tell a good story, but it's not really structured in a way that makes it natural to use.

One of the problems is something I've banged on about before, which is that we don't really record _intent_: which tests we wanted to execute (and, indeed, which cases we expected to be executed). It's in the very nature of testing to have an expectation about what will happen, and it's LAVA's job to explain the difference between what was expected and what actually happened. At the level of cases we do an OK job of this (it's at least easy-ish to find the cases that failed), but at the level of jobs we're not very good.
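The expected-versus-happened comparison described above can be sketched with plain sets. The function and argument names here are illustrative only, since nothing in the current models actually records intent:

```python
def explain_run(expected_tests, executed_tests):
    """Given the tests we intended to run and the tests that actually
    produced results, report what ran, what never ran, and what showed
    up unplanned."""
    expected = set(expected_tests)
    executed = set(executed_tests)
    return {
        'ran': sorted(expected & executed),
        # the "why doesn't this table include testXXX?" cases:
        'missing': sorted(expected - executed),
        'unexpected': sorted(executed - expected),
    }
```

With intent recorded, the bundle page could show, for example, that testXXX was expected but missing, instead of leaving the reader to dig through "lava results":

```python
explain_run(['wifi-enablement', 'perf', 'testXXX'],
            ['wifi-enablement', 'perf', 'lava'])
```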
Although... we do now have a link to the job that produced the bundle. So maybe we could have an "if bundle.test_job" check, and if that's set, use the job and/or the lava test run to display a more interesting record of what happened. Or maybe that's what the job page itself should be doing.
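The "if bundle.test_job" idea could look something like the sketch below. The attribute name mirrors the link mentioned above; the two story strings stand in for whatever richer rendering the view or job page would actually do:

```python
def bundle_story(bundle):
    """If the bundle records the job that produced it, use the job to
    build the richer view; otherwise fall back to the plain results
    table. Purely illustrative of the proposed branch."""
    job = getattr(bundle, 'test_job', None)
    if job is not None:
        return 'job story for %s' % job
    return 'plain results table'
```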
Cheers, mwh
linaro-validation@lists.linaro.org