On 5 June 2012 21:22, YongQin Liu yongqin.liu@linaro.org wrote:
On 6 June 2012 10:11, Zach Pfeffer zach.pfeffer@linaro.org wrote:
On 5 June 2012 21:08, YongQin Liu yongqin.liu@linaro.org wrote:
Hi, all
Some information:
- the boot time is about 2 minutes, tested with the Panda stable build.
- the specification of a timeout for each test is being done via this blueprint:
https://blueprints.launchpad.net/lava-dispatcher/+spec/timeout-for-android-t... When this BP is completed, a timeout option should be specifiable for each test in LAVA_TEST_PLAN, for each LAVA_TEST_X, and for each MONKEY_RUNNER_URL_X.
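To make that concrete, here is a minimal sketch of how per-test timeouts might be pulled out of a LAVA_TEST_PLAN-style string. The "name:seconds" syntax and the parse_test_plan helper are purely illustrative assumptions; the blueprint may define something different.

    # A minimal sketch, assuming a hypothetical "name:seconds" syntax for
    # per-test timeouts in LAVA_TEST_PLAN; the real blueprint may differ.
    DEFAULT_TIMEOUT = 1800  # seconds; arbitrary illustrative default

    def parse_test_plan(plan):
        """Turn 'glmark2:600,0xbench:1200,monkey' into (name, timeout) pairs."""
        tests = []
        for entry in plan.split(','):
            name, sep, timeout = entry.partition(':')
            tests.append((name.strip(), int(timeout) if sep else DEFAULT_TIMEOUT))
        return tests

    print(parse_test_plan('glmark2:600,0xbench:1200,monkey'))
    # -> [('glmark2', 600), ('0xbench', 1200), ('monkey', 1800)]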
Hi, Zach
| If glmark2 hangs, reboot the unit and restart on 0xbench.
Do you mean reboot just when glmark2 times out, or reboot when any test times out?
Reboot when any test times out and start the next test in the list if there is one.
OK, I see, thanks.
And, if Android hangs at some test, LAVA should be able to reboot the image and continue with the next test now.
This is working now?
Yes, LAVA will reboot the Android image before doing any test action if Android hangs, has no response, or is not in an Android image session.
Awesome!
Thanks, Yongqin Liu
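The reboot-and-continue behavior discussed above can be sketched as a simple loop. Everything here is illustrative: run_test and reboot_board are hypothetical stand-ins (the real dispatcher drives the board over a serial console), not actual lava-dispatcher APIs.

    # A minimal sketch of the recovery loop described above, built from
    # hypothetical helpers; not real lava-dispatcher code.
    import subprocess

    def run_test(name, timeout):
        # Hypothetical: 'run-test' is a made-up on-device command used
        # only to make the sketch concrete.
        subprocess.run(['adb', 'shell', 'run-test', name],
                       timeout=timeout, check=True)

    def reboot_board():
        # Hypothetical recovery step; the real dispatcher power-cycles
        # the unit and waits for the Android session to return.
        subprocess.run(['adb', 'reboot'], check=True)

    def run_plan(tests):
        """Run each (name, timeout) pair; on a hang, reboot and continue."""
        results = {}
        for name, timeout in tests:
            try:
                run_test(name, timeout)
                results[name] = 'pass'
            except subprocess.TimeoutExpired:
                results[name] = 'timeout'
                reboot_board()  # recover, then move on to the next test
            except subprocess.CalledProcessError:
                results[name] = 'fail'
        return results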
On 6 June 2012 02:52, Alexander Sack asac@linaro.org wrote:
On Tue, Jun 5, 2012 at 5:57 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
On 5 June 2012 23:34, Alexander Sack asac@linaro.org wrote:
On Tue, Jun 5, 2012 at 5:26 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
> On 5 June 2012 18:23, Alexander Sack asac@linaro.org wrote:
>> I feel stopping and rebooting and continuing with the next test is
>> what we want to aim for.
>>
>> On this front I wonder if we should directly go for rebooting for
>> _all_ tests to ensure that every test gets executed in the same
>> runtime environment.
>>
>> The big benefit is obviously that tests can then stop services, change
>> runtime state, etc. as much as they like without bothering about
>> bringing the system back into a clean state.
>>
>> Would that be hard? Why wouldn't we do this?
>
> Seems like a good idea in theory, but in practice it may cause testing
> to take a long time. Plus, what constitutes a test boundary? I think
I don't think this would really extend the testing time much. The majority of the time is usually spent flashing the image. Booting and running should not be a big time sink; the little we lose, we get back from keeping each suite run isolated.
> if we take the fail-then-restart approach, we get the best of both
> worlds: we're able to run multiple tests and restart if we get the
> system into a really weird state.
I would think "one suite" is a good test boundary (basically the names you currently put in TEST_PLAN).
Sure. I'm actually okay with this.
If we do this, though, we may want to change things so that each test is a tuple of a test name and a timeout for that test.
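The suite-as-boundary and name/timeout-tuple ideas from this exchange combine naturally with the earlier sketch. The plan layout below is an illustrative assumption, reusing the hypothetical helpers defined above.

    # Sketch combining the two ideas: each suite is a list of
    # (name, timeout) tuples, and the board reboots at suite boundaries.
    # Reuses the hypothetical reboot_board()/run_plan() helpers above.
    suites = [
        [('glmark2', 600), ('0xbench', 1200)],  # one suite per clean boot
        [('monkey', 900)],
    ]

    def run_suites(suites):
        for suite in suites:
            reboot_board()  # identical runtime environment per suite
            run_plan(suite)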
You mean you want to tune the timeout in your LAVA_TEST_PLAN=... line? I guess that would be doable... However, ultimately we should think about a better-structured format for setting up parameterization. I guess we might want to tweak more stuff in the future :)...
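A better-structured parameterization format, as hinted at here, might look like a list of per-test records rather than a flat string. The field names below are illustrative assumptions, not an actual LAVA schema.

    # A minimal sketch of a structured test plan; field names are
    # illustrative only.
    import json

    plan = [
        {'test': 'glmark2', 'timeout': 600},
        {'test': '0xbench', 'timeout': 1200},
        {'test': 'monkey', 'timeout': 900, 'reboot_before': True},
    ]

    # A structure like this serializes cleanly into a job file and leaves
    # room for future per-test parameters beyond timeouts.
    print(json.dumps(plan, indent=2))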