On 20.03.2012 17:54, Dave Pigott wrote:
As per Zygmunt's request, adding linaro-validation for discussion.
Dave
On 20 Mar 2012, at 14:33, Zygmunt Krynicki wrote:
On 20.03.2012 15:22, Dave Pigott wrote:
Hi,
I think we need to seriously look at allowing a timeout parameter around "deploy_linaro_image", because vexpress is going to need around 4 hours to deploy a full desktop, and it seems crazy that we should hard code that big a timeout into master.py for *all* platforms.
Could we move that to linaro-validation please?
I realise there is an issue as to which part the timeout is applied to and how it's apportioned, but even if we just allowed the whole timeout to be passed through to each stage, it would be better than what we have at the moment.
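For concreteness, here is a rough sketch of what a per-action timeout lookup could look like. The names (action_timeout, DEFAULT_TIMEOUTS, board_defaults) are made up for illustration and this is not the real master.py code; the idea is just that the job file wins over a per-board default, which wins over the global constant we hard-code today:

DEFAULT_TIMEOUTS = {
    # seconds; the single global fallback we currently hard-code for everyone
    "deploy_linaro_image": 30 * 60,
}

def action_timeout(action, job_parameters, board_defaults=None):
    """Pick the timeout for one action: job file first, then board, then global."""
    if "timeout" in job_parameters:
        return job_parameters["timeout"]
    if board_defaults and action in board_defaults:
        return board_defaults[action]
    return DEFAULT_TIMEOUTS[action]

# A vexpress job could then ask for 4 hours just for the deploy step:
print(action_timeout("deploy_linaro_image", {"timeout": 4 * 60 * 60}))  # 14400

That way the 4-hour value only applies to jobs that actually request it, instead of every platform inheriting it.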
I think that we have two general problems with timeouts:
1) We've pulled most of the initial values out of a hat
2) Timeouts are expressions, not constants
I'm very glad that with our health jobs we're actually looking at the constants we're using. I'd like to see a more scientific and thorough approach to this problem:
-> Keep a shared Google Docs spreadsheet with the timeouts for the various actions in our health jobs
-> Track that per board
-> Track the age and cycle count of each SD card we purchase and allocate in the lab
-> Benchmark the SD cards periodically (sketched just below)
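As a strawman for the benchmark step (the path and sizes are invented, not an agreed format), timing a fixed amount of fsync'd sequential writes on each card would already give us an MB/s figure to record per board and per card:

import os
import time

def benchmark_sd_write(path="/mnt/sd/benchmark.tmp", size_mb=256, block_mb=4):
    """Time sequential writes to the SD card and return the average MB/s."""
    block = b"\0" * (block_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reached the card
    elapsed = time.time() - start
    os.remove(path)
    return size_mb / elapsed

The result, together with the card's age and cycle count, is what would feed the $average_sd_speed variable mentioned below.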
Given that data we could turn timeout constants into timeout expressions that can use the following variables:
$normalized_cpu_time
$average_sd_speed
Having that, we could say, for example, that deploy_linaro_image takes 2 GB of writes and 30 minutes of normalized CPU time. The expression would then evaluate to an actual value for each test job. We could also track how $average_sd_speed changes over time.
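One possible way such an expression could be evaluated (a sketch only: the numbers are invented, and $normalized_cpu_time is assumed here to be a per-board slowdown factor, which is just one interpretation):

def deploy_timeout(normalized_cpu_time, average_sd_speed_mbps,
                   write_gb=2.0, cpu_minutes=30, safety_factor=2.0):
    """deploy_linaro_image ~ 2 GB of writes plus 30 min of normalized CPU."""
    io_seconds = (write_gb * 1024) / average_sd_speed_mbps
    cpu_seconds = cpu_minutes * 60 * normalized_cpu_time
    return safety_factor * (io_seconds + cpu_seconds)

# A slow board with a worn-out card automatically gets a longer timeout:
print(deploy_timeout(normalized_cpu_time=1.0, average_sd_speed_mbps=6.0))  # ~4300 s
print(deploy_timeout(normalized_cpu_time=2.5, average_sd_speed_mbps=3.0))  # ~10400 s

The point is that the per-action costs stay fixed, while the measured per-board and per-card variables do the work of producing a sensible concrete timeout for each job.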
Thanks
ZK