On Sun, Apr 1, 2012 at 10:24 PM, Zach Pfeffer <zach.pfeffer@linaro.org> wrote:
On 1 April 2012 22:07, Andy Doan <andy.doan@linaro.org> wrote:
> On 04/01/2012 08:26 PM, Zach Pfeffer wrote:
>>>
>>> In other words, are we really submitting LAVA jobs and not caring
>>> about the results?
>>
>> Since LAVA:
>>
>> 1. Can't reliably boot all the builds in all configurations
>> 2. Doesn't use linaro-android-media-create (which we tell users to use)
>> 3. Doesn't use the right bootloaders
>>
>> We've always hand tested our builds to ensure they work. Until LAVA:
>>
>> 1. Can program a build in the same manner we tell users to
>> 2. Doesn't assume anything about the target, like whether it even boots
>>
>> We have to keep hand testing.
>
>
> I think even if LAVA were perfect, hand testing is still required. And I
> won't (in this thread) debate the limitations you're bringing up.
>
> In my case, LAVA has been working pretty reliably for Panda for about 4
> months now (at least for my benchmark jobs). When I saw it broken, I pushed
> the issue and the team found a fix pretty quickly. So shouldn't we have
> someone paying attention to at least the Panda builds and raising an issue
> when they trend from mostly working to completely broken?

Yeah, Panda's been pretty good. I think monitoring the builds fits
pretty squarely in the new QA group's area. Paul, perhaps you can add
Android LAVA health to your daily checklist.
I think that's similar to what I suggested earlier in this thread when I said:
That's how it would appear. My assumption has been that the jobs coming from the Android CI system are looked at when the Android team makes releases. One of the things I'm looking to do on the QA team is to start looking at manual and automated tests together, while starting to transition some of the manual tests to run in LAVA instead.
However, I'm a bit surprised to learn that this isn't already part of the process. I was always under the impression that all the work we did to get results from LAVA onto your build pages, displayed the way you wanted them, was done so that they *would* be looked at as part of the testing and release process, and that the goal of the work Yongqin has been doing to push more and more automated tests into lava-android-test was to grow the automation and reduce the manual effort of testing builds. That really seems like the only sane option, considering there are 12 builds on Android alone to test!

What you are suggesting now sounds as if none of it was even worth our time until we get the hardware dongle. I don't think it has to be an all-or-nothing approach, though. I think LAVA can provide quite a bit of value in its current form, and even testing with the hardware dongle is likely to break from time to time.

As a place to get started, how about we add lava-0xbench, lava-busybox, lava-cts, ... test tags to your spreadsheet, where the results of each of those would get logged? At least that way we'd know whether someone has looked at them or not.

Thanks,
Paul Larson