3D Demo at ARM
michael.hope at linaro.org
Thu Aug 18 23:23:49 UTC 2011
On Fri, Aug 19, 2011 at 2:21 AM, Andy Doan <andy.doan at linaro.org> wrote:
> On 08/17/2011 04:59 PM, Michael Hope wrote:
>> On Wed, Aug 17, 2011 at 11:12 PM, Dave Martin <dave.martin at linaro.org> wrote:
>>> On Tue, Aug 16, 2011 at 7:14 PM, Zach Pfeffer <zach.pfeffer at linaro.org> wrote:
>>>> Thanks for the notes. As you say there are many, many things that can
>>>> affect this demo. What notes like this really underscore is the
>>>> importance of staying up-to-date. This demo is more about the
>>>> macroscopic effects from tip support than anything else. We do have
>>>> some more specific benchmark numbers at:
>>> If we're confident that the benchmark produces results of a
>>> trustworthy quality, then that's fine. I don't know this benchmark in
>>> detail, so I can't really judge, other than that the results look a
>>> bit odd.
>> Ditto on that. Have these benchmarks been qualified? Do they
>> represent real workloads? Where do they come from? What aspects of
>> the system (CPU, memory, I/O, kernel, SMP) do they exercise? How
>> sensitive are they to minor changes?
> The benchmark code comes from Android:
> I'm not an expert on benchmarking. I've just tried to focus on running
> these in a way that's as fair and repeatable as possible.
OK. Just keep an eye out then. If the benchmarks are dominated by
things that Linaro isn't working on (such as I/O performance or memory
bandwidth) then the results won't change. If they're dominated by
certain inner functions that are very sensitive to environment
changes, then you may see a regression. Benchmarks need to represent
the workloads of a real system.
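The thread's concern with fairness, repeatability, and sensitivity to minor changes can be made concrete with a small measurement harness. This is not code from the thread or from the Android benchmarks discussed; it is a minimal sketch (with a hypothetical `measure` helper) of one way to check whether a benchmark's results are stable enough to trust, by running the workload several times after a warm-up and reporting the relative spread:

```python
import statistics
import time

def measure(fn, runs=10, warmup=2):
    """Time fn over several runs, discarding warm-up iterations.

    Returns the mean wall-clock time and the coefficient of
    variation (stddev / mean) as a rough repeatability signal.
    A large spread suggests the benchmark is sensitive to
    environment changes, as discussed in the thread.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    mean = statistics.mean(samples)
    cv = statistics.stdev(samples) / mean
    return mean, cv

# Example workload: a CPU-bound inner loop (a stand-in for a
# benchmark kernel; real workloads would exercise memory, I/O, etc.)
mean, cv = measure(lambda: sum(i * i for i in range(100_000)))
print(f"mean={mean:.6f}s spread={cv:.1%}")
```

If the reported spread is large relative to the improvement being measured, a single before/after comparison cannot distinguish a real toolchain effect from noise, which is the "keep an eye out" point above.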