LAVA and Android Toolchain Benchmarks
zygmunt.krynicki at linaro.org
Wed Jan 18 16:21:49 UTC 2012
On Wed, Jan 18, 2012 at 5:07 PM, Andy Doan <andy.doan at linaro.org> wrote:
> On 01/18/2012 05:25 AM, Alexander Sack wrote:
>> On Wed, Jan 18, 2012 at 12:16 PM, Zygmunt Krynicki
>> <zygmunt.krynicki at linaro.org> wrote:
>>> Hi, looks nice :)
>>> On Wed, Jan 18, 2012 at 5:59 AM, Andy Doan <andy.doan at linaro.org> wrote:
>>>> Sorry for the wide distribution, but I wasn't sure who all would be interested.
>>>> I spent time over the last month updating the Android monthly toolchain
>>>> benchmark process to pull its benchmark data from LAVA tests that are
>>>> stored in validation.linaro.org. Here's an example test run.
>>>> This month's results will be published to the wiki as I normally do.
>>>> However, I spent some time last weekend looking at how to handle this on
>>>> the validation server as well. I first toyed with trying to do a simple
>>>> report plugin. However, it really didn't quite have everything I thought
>>>> was needed.
>>>> I wound up using the "LAVA kernel CI views" project as a skeleton to
>>>> create something for Android. I've got a local prototype that's starting
>>>> to do just about everything I want (I'm fighting some issues with the
>>>> you can get a rough idea.
>>>> Before I really invest time, I wanted to get people's thoughts. Some big
>>>> questions for me:
>>>> 1) Is anyone against doing this?
>>> That's a question to TSC (regarding benchmark data)
>> The data policy doesn't block doing it. Worst case, it might block
>> publishing such view unmodified to the public. But let's look at what
>> we want to do first and then discuss the implications of the data policy.
> There might be another way to look at the code I'm doing. While it has a
> very specific title right now "Android Toolchain Benchmark Report". Its
> actually quite generic and might be useful for other things. In essence,
> it does two things:
> 1) Collate measurements from multiple tests. Reduce these down to one
> set of data with average measurements. I also include standard
> deviation in there, so you can get an idea of the quality of the data.
> 2) Take multiple "combined results" and compare them with each other.
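For illustration, the two steps described above might be sketched roughly like this (the function names and data shapes here are hypothetical, not the actual report code):

```python
# Hypothetical sketch of the two report steps: (1) collate measurements
# from multiple test runs into one averaged set with standard deviations,
# and (2) compare two such "combined results" with each other.
from statistics import mean, stdev


def combine_runs(runs):
    """Collate per-benchmark measurements from multiple runs into one
    set of data with mean and standard deviation per benchmark."""
    by_bench = {}
    for run in runs:
        for bench, value in run.items():
            by_bench.setdefault(bench, []).append(value)
    return {
        bench: {
            "mean": mean(values),
            # stdev needs at least two samples; report 0.0 otherwise
            "stdev": stdev(values) if len(values) > 1 else 0.0,
        }
        for bench, values in by_bench.items()
    }


def compare(baseline, candidate):
    """Compare two combined results: percent change of the mean for
    every benchmark present in both."""
    return {
        bench: 100.0
        * (candidate[bench]["mean"] - baseline[bench]["mean"])
        / baseline[bench]["mean"]
        for bench in baseline
        if bench in candidate
    }
```

Including the standard deviation alongside the mean, as Andy describes, is what lets a reader judge whether a delta between two combined results is signal or noise.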
Excellent. I think we should keep evolving this (and keep it separate
from the lava-android work I did).
Again, could you share the code?