Hi, Zach
I am now working on a blueprint that adds timeout handling for the Android test action in LAVA. I am not sure whether we should reboot the Android image when a test runs longer than expected, so I'd like to know which option you would prefer:
a. stop the test and restart the Android image, then continue with the next test
b. stop the test and continue with the next test
c. something else
For example, suppose we submit a job to LAVA that runs the "monkey,glmark2,0xbench" tests on Android, and the glmark2 test runs longer than expected (say, it would run forever if we didn't stop it).
In this case, which would you prefer: reboot Android and continue with the next test (0xbench), or just stop the glmark2 test and continue to run 0xbench without rebooting?
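For concreteness, here is a minimal sketch of what option (a) could look like, assuming adb connectivity to the board; the run-test launcher and the timeout values are made up for illustration, and only adb reboot / adb wait-for-device are standard commands:

import subprocess

def run_android_test(name, timeout_secs):
    """Run one test on the device; returns False if the timeout expired."""
    try:
        # "run-test" is a hypothetical launcher standing in for the real test command.
        subprocess.run(["adb", "shell", "run-test", name], timeout=timeout_secs)
        return True
    except subprocess.TimeoutExpired:
        return False

def reboot_android():
    subprocess.run(["adb", "reboot"])
    subprocess.run(["adb", "wait-for-device"])  # block until the board is back up

for test, timeout_secs in [("monkey", 600), ("glmark2", 900), ("0xbench", 1200)]:
    if not run_android_test(test, timeout_secs):
        print("%s timed out; rebooting before the next test" % test)
        reboot_android()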
Thanks, Yongqin Liu
Hi, Zach
Could you give some comments on the mail below?
Thanks, Yongqin Liu
I feel that stopping, rebooting, and continuing with the next test is what we want to aim for.
On this front, I wonder if we should directly go for rebooting for _all_ tests, to ensure that every test gets executed in the same runtime environment.
The big benefit is obviously that tests can then stop services, change runtime state, etc. as much as they like without worrying about bringing the system back into a clean state.
Would that be hard? Why wouldn't we do this?
On 5 June 2012 18:23, Alexander Sack asac@linaro.org wrote:
I feel that stopping, rebooting, and continuing with the next test is what we want to aim for.
On this front, I wonder if we should directly go for rebooting for _all_ tests, to ensure that every test gets executed in the same runtime environment.
The big benefit is obviously that tests can then stop services, change runtime state, etc. as much as they like without worrying about bringing the system back into a clean state.
Would that be hard? Why wouldn't we do this?
Seems like a good idea in theory, but in practice it may make testing take a long time. Plus, what constitutes a test boundary? I think if we do the fail-then-restart approach we get the best of both worlds: we're able to run multiple tests, and we restart if we get the system into a really weird state.
On Tue, Jun 5, 2012 at 5:26 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Seems like a good idea in theory, but in practice it may make testing take a long time. Plus, what constitutes a test boundary? I think
I don't think this would really extend the testing time much. The majority of the time is usually spent flashing the image. Booting and running should not be a big time sink; the little we lose, we get back from keeping the suite run "isolated".
if we do the fail-then-restart approach we get the best of both worlds: we're able to run multiple tests, and we restart if we get the system into a really weird state.
I would think "one suite" is a good test boundary (basically the names you currently put in TEST_PLAN).
On 5 June 2012 23:34, Alexander Sack asac@linaro.org wrote:
I would think "one suite" is a good test boundary (basically the names you currently put in TEST_PLAN).
Sure, I'm actually okay with this.
If we do this though, we may want to change things so that each test is a tuple with a test name and a timeout for that test.
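A rough sketch of the tuple idea, reusing the existing comma-separated plan syntax; the ":timeout" suffix is purely illustrative, not an agreed format:

DEFAULT_TIMEOUT = 1800  # seconds; used when an entry carries no timeout

def parse_test_plan(plan):
    """Turn "monkey:600,glmark2:900,0xbench" into (name, timeout) tuples."""
    tests = []
    for entry in plan.split(","):
        name, _, timeout = entry.partition(":")
        tests.append((name.strip(), int(timeout) if timeout else DEFAULT_TIMEOUT))
    return tests

print(parse_test_plan("monkey:600,glmark2:900,0xbench"))
# [('monkey', 600), ('glmark2', 900), ('0xbench', 1800)]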
On Tue, Jun 5, 2012 at 10:57 AM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
If we do this though, we may want to change things so that each test is a tuple with a test name and a timeout for that test.
The test timeout has always been a supported parameter.
On Tue, Jun 5, 2012 at 5:57 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
If we do this though, we may want to change things so that each test is a tuple with a test name and a timeout for that test.
You mean you want to tune the timeout in your LAVA_TEST_PLAN=... line? I guess that would be doable... however, ultimately we should think about a better-structured format for setting up parameterization. I guess we might want to tweak more stuff in the future :)...
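As one possible shape for such a structured format (the schema here is made up for illustration), the job could carry per-test parameters directly instead of packing them into one environment variable:

{
  "tests": [
    {"test_name": "monkey",  "timeout": 600},
    {"test_name": "glmark2", "timeout": 900},
    {"test_name": "0xbench", "timeout": 1200}
  ]
}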
Hi, all
Some information:
1. The boot time is about 2 minutes, tested with a Panda stable build.
2. The specification of a per-test timeout is being done via this blueprint:
https://blueprints.launchpad.net/lava-dispatcher/+spec/timeout-for-android-t... When this BP is completed, the timeout option should be specifiable for each test in LAVA_TEST_PLAN, for each test of LAVA_TEST_X, and for each test of MONKEY_RUNNER_URL_X.
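For example, usage once the BP lands might look like this (the ":timeout" suffix and all values are hypothetical; a URL-valued variable would need a separate timeout variable to avoid clashing with the colon in "http://"):

LAVA_TEST_PLAN="monkey:600,glmark2:900,0xbench:1200"
MONKEY_RUNNER_URL_0="http://example.com/monkey_script.py"
MONKEY_RUNNER_URL_0_TIMEOUT="600"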
Hi, Zach
| If glmark2 hangs, reboot the unit and restart on 0xbench.
Do you mean reboot just when glmark2 times out, or reboot when any test times out? Also, if Android hangs at some test, LAVA should already be able to reboot the image and continue with the next test.
Thanks, Yongqin Liu
On 5 June 2012 21:08, YongQin Liu yongqin.liu@linaro.org wrote:
| If glmark2 hangs, reboot the unit and restart on 0xbench.
Do you mean reboot just when glmark2 times out, or reboot when any test times out?
Reboot when any test times out and start the next test in the list if there is one.
Also, if Android hangs at some test, LAVA should already be able to reboot the image and continue with the next test.
This is working now?
On 6 June 2012 10:11, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Reboot when any test times out and start the next test in the list if there is one.
OK, I see, thanks.
Also, if Android hangs at some test, LAVA should already be able to reboot the image and continue with the next test.
This is working now?
Yes, LAVA will reboot the Android image before doing any test action if Android hangs, has no response, or is not in an Android image session.
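A sketch of the kind of pre-test check this implies (names are illustrative, not the actual lava-dispatcher code; only the adb commands are standard):

import subprocess

def device_responsive():
    """True if adb can reach the board and gets a shell response back."""
    try:
        result = subprocess.run(["adb", "shell", "echo", "ok"],
                                capture_output=True, timeout=30)
        return result.returncode == 0 and b"ok" in result.stdout
    except subprocess.TimeoutExpired:
        return False

def ensure_android_session():
    # Reboot back into the Android image if the board is hung or unreachable.
    if not device_responsive():
        subprocess.run(["adb", "reboot"])
        subprocess.run(["adb", "wait-for-device"])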
Thanks, Yongqin Liu
On 5 June 2012 21:22, YongQin Liu yongqin.liu@linaro.org wrote:
Yes, LAVA will reboot the Android image before doing any test action if Android hangs, has no response, or is not in an Android image session.
Awesome!
On Wed, Jun 6, 2012 at 4:22 AM, YongQin Liu yongqin.liu@linaro.org wrote:
Yes, LAVA will reboot the Android image before doing any test action if Android hangs, has no response, or is not in an Android image session.
With this already available, it feels like it should be super easy to just reboot in between all test suites... what's the effort? I would really like to give that a try. It would definitely be much cleaner and, as I explained before, it would also ensure that each test suite always gets started while the system is in a pristine "fresh boot" state.
On 13 June 2012 04:51, Alexander Sack asac@linaro.org wrote:
With this already available, it feels like it should be super easy to just reboot in between all test suites... what's the effort? I would really like to give that a try. It would definitely be much cleaner and, as I explained before, it would also ensure that each test suite always gets started while the system is in a pristine "fresh boot" state.
This BP only reboots when a test times out.
If you'd like to reboot each time, I think you can just add the reboot code to the job.py file in the lava-dispatcher branch and give it a try. It should be easy to do, I think.
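As a rough illustration of that kind of change, assuming a hypothetical action loop (not the actual job.py code):

def run_actions(actions, client):
    for i, action in enumerate(actions):
        action.run(client)
        # After each test action except the last, reboot so that the next
        # suite starts from a pristine "fresh boot" state.
        if action.is_test_action and i < len(actions) - 1:
            client.boot_android_image()  # hypothetical reboot helper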
Thanks, Yongqin Liu
On 12 Jun 2012, at 21:51, Alexander Sack wrote:
With this already available, it feels like it should be super easy to just reboot in between all test suites... what's the effort? I would really like to give that a try. It would definitely be much cleaner and, as I explained before, it would also ensure that each test suite always gets started while the system is in a pristine "fresh boot" state.
It seems to me that it should be trivial to add "command": "boot-linaro-android-image" to the json file between each test?
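For illustration, a job file along those lines might look like the following; the test-run command name and its parameters are assumed for this sketch, and only the boot command spelling follows the suggestion above:

{
  "actions": [
    {"command": "boot-linaro-android-image"},
    {"command": "lava_android_test_run", "parameters": {"test_name": "glmark2"}},
    {"command": "boot-linaro-android-image"},
    {"command": "lava_android_test_run", "parameters": {"test_name": "0xbench"}}
  ]
}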
Dave
On 28 May 2012 15:29, YongQin Liu yongqin.liu@linaro.org wrote:
In this case, which would you prefer: reboot Android and continue with the next test (0xbench), or just stop the glmark2 test and continue to run 0xbench without rebooting?
If glmark2 hangs, reboot the unit and restart on 0xbench.