On 8 March 2012 00:01, YongQin Liu yongqin.liu@linaro.org wrote:
Hi, Zach
Here is what you wrote in the memo: https://docs.google.com/a/linaro.org/document/d/1mtaw5sVwVD4CD1fbAk0XU9jxIsA...
- Pass TESTCOMMANDFILE1=a file from git via Android build
- Pass TESTCOMMAND1=”command opt1 opt2” via Android build
The command should be executed on the Android side; that means it should be an Android command, not a command on the host.
- Be able to reboot the unit within one test
- CTS
- Pass a meta file back for parsing, or just a parser
I understand the 1st, the 2nd, and the 5th.
About the 3rd, which we talked about yesterday (the attached file, talk.log, is the content of our talk), I understand it like this:
In some test cases there is the possibility that the Android image will reboot, and we should be able to handle such cases so that we still get the exact test result.
And I thought about it again after the talk: besides reboot, there are also other possibilities, such as power-down, crash, or freeze. Do you mean all of these cases should be handled too?
Right. If a command in the command log causes the unit-under-test to do any of those things, then the unit should be rebooted (in the case of a hang) or the reboot should be sensed (in case the command caused a reboot) and when the unit boots back up, LAVA would continue the test on the command after the one that caused it to hang, reboot, freeze, etc. LAVA should save the logs for the entire command file run, including all hangs and reboots.
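Roughly, the behavior I have in mind boils down to a loop like this (just a sketch to illustrate the idea; run_on_unit, reboot_unit and wait_for_unit are hypothetical stand-ins for whatever the dispatcher actually uses, not existing LAVA calls):

# Hypothetical sketch of the resume-after-reboot behaviour described above.
# The three callables stand in for the real device plumbing (adb or a serial
# console); they are assumptions for illustration only.
def run_command_file(commands, run_on_unit, reboot_unit, wait_for_unit,
                     timeout=600):
    logs = []
    for cmd in commands:
        try:
            # Run one command from the command file on the unit-under-test.
            logs.append((cmd, 'OK', run_on_unit(cmd, timeout=timeout)))
        except Exception as err:
            # The command hung, crashed the unit, or caused a reboot:
            # record the failure, get the unit back up, and carry on with
            # the next command instead of abandoning the whole run.
            logs.append((cmd, 'FAILED', str(err)))
            reboot_unit()
            wait_for_unit()
    # The full log, including every hang and reboot, is kept for post processing.
    return logs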
About the 4th one, CTS: sorry, I am still not clear about what you mean here by CTS. If possible, could you describe a test case for it?
Take a look at: http://source.android.com/compatibility/cts-intro.html
The suite is available here: http://source.android.com/compatibility/downloads.html
We simply need to run the existing test cases:
http://dl.google.com/dl/android/cts/android-cts-4.0.3_r2-linux_x86-arm.zip http://dl.google.com/dl/android/cts/android-cts-verifier-4.0.3_r1-linux_x86-...
Which are written against:
http://source.android.com/compatibility/4.0/android-4.0-cdd.pdf
Thanks, Yongqin Liu
On 9 March 2012 06:07, Zach Pfeffer zach.pfeffer@linaro.org wrote:
On 8 March 2012 00:01, YongQin Liu yongqin.liu@linaro.org wrote:
Hi, Zach
Here is what you write in the memo:
https://docs.google.com/a/linaro.org/document/d/1mtaw5sVwVD4CD1fbAk0XU9jxIsA...
- Pass TESTCOMMANDFILE1=a file from git via Android build
- Pass TESTCOMMAND1=”command opt1 opt2” via Android build
the command should be executed on the android, that means the command should be an android command. not a command of the host.
- Be able to reboot the unit within one test
- CTS
- Pass a meta file back for parsing, or just a parser
Here I can understand the 1st and the 2nd and the 5th.
About the 3rd, we have talked about yesterday, the attachment file (talk.log) is the content of our talk yesterday. and I understand it like this:
In some test case, there are the possibilities that the android image will reboot, and we should be able to handle such cases so that we can get the exact test result.
and I thought about that again after the talk, there are also other possibility like reboot, such as power down, crash, freeze you mean all of these cases should be handled too?
Right. If a command in the command log causes the unit-under-test to do any of those things, then the unit should be rebooted (in the case of a hang) or the reboot should be sensed (in case the command caused a reboot) and when the unit boots back up, LAVA would continue the test on the command after the one that caused it to hang, reboot, freeze, etc. LAVA should save the logs for the entire command file run, including all hangs and reboots.
Thanks for your explanation. Now I know more about it.
About the 4th one CTS, sorry, I am still not clear about what do you mean here by CTS, if possible, could you describe a test case about that?
Take a look at: http://source.android.com/compatibility/cts-intro.html
The suite is available here: http://source.android.com/compatibility/downloads.html
We simply need to run the existing test cases:
http://dl.google.com/dl/android/cts/android-cts-4.0.3_r2-linux_x86-arm.zip
http://dl.google.com/dl/android/cts/android-cts-verifier-4.0.3_r1-linux_x86-...
Which are written against:
http://source.android.com/compatibility/4.0/android-4.0-cdd.pdf
About CTS: because of the problem reported below,
http://code.google.com/p/android/issues/detail?id=24267
we need to make some modifications on our branch, and for now we use a self-compiled CTS binary for testing.
And about the cts-verifier, below is the description written in the CDD file:
The CTS Verifier is included with the Compatibility Test Suite, and is intended to be run by a human operator to test functionality that cannot be tested by an automated system, such as correct functioning of a camera and sensors.
I will try to investigate whether we can run it automatically, but I suspect it will not be possible.
Thanks, Yongqin Liu
On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Right. If a command in the command log causes the unit-under-test to do any of those things, then the unit should be rebooted (in the case of a hang) or the reboot should be sensed (in case the command caused a reboot) and when the unit boots back up, LAVA would continue the test on the command after the one that caused it to hang, reboot, freeze, etc. LAVA should save the logs for the entire command file run, including all hangs and reboots.
As mentioned when we talked about this on the phone, I don't think this is the best way to approach the problem. Currently, if a test hangs, times out, reboots improperly (and thus times out because it won't get to the expected place), lava will reboot the system back into the test image. However, at this point, it will mark the previous testsuite it tried to run as a failed run. Then, if there are other tests queued up to run, it will continue running with the next one - NOT try to re-enter the existing test and continue where it left off. This is not a capability currently supported in lava-android-test.
The good news is, I think there's a much more straightforward way to do this. I haven't really seen an example of the kinds of things you want to run, but it seems that it's just a list of several tests, possibly with multiple steps in each test. And if it reboots, or has a problem, you want it to fail that one and continue to the next. This is very simple to do, if you simply define each of those tests as lava-android-tests. Then you can run them all in your test job, and I think it will do exactly what you are looking for here. Also, they can then be easily reused in other test jobs.
Thanks, Paul Larson
On 8 March 2012 23:44, Paul Larson paul.larson@linaro.org wrote:
On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Right. If a command in the command log causes the unit-under-test to do any of those things, then the unit should be rebooted (in the case of a hang) or the reboot should be sensed (in case the command caused a reboot) and when the unit boots back up, LAVA would continue the test on the command after the one that caused it to hang, reboot, freeze, etc. LAVA should save the logs for the entire command file run, including all hangs and reboots.
As mentioned when we talked about this on the phone, I don't think this is the best way to approach the problem. Currently, if a test hangs, times out, reboots improperly (and thus times out because it won't get to the expected place), lava will reboot the system back into the test image. However, at this point, it will mark the previous testsuite it tried to run as a failed run. Then, if there are other tests queued up to run, it will continue running with the next one - NOT try to re-enter the existing test and continue where it left off. This is not a capability currently supported in lava-android-test.
The good news is, I think there's a much more straightforward way to do this. I haven't really seen an example of the kinds of things you want to run, but it seems that it's just a list of several tests, possibly with multiple steps in each test. And if it reboots, or has a problem, you want it to fail that one and continue to the next. This is very simple to do, if you simply define each of those tests as lava-android-tests. Then you can run them all in your test job, and I think it will do exactly what you are looking for here. Also, they can then be easily reused in other test jobs.
Hmm...
Here's what I'd like to do.
I'd like to pass a simple command file to LAVA. I'd like LAVA to execute each test and if that test hangs or causes a reboot I'd like LAVA to pick back up at the next test case. I'd like LAVA to collect all the logs and send them either to LAVA or back to us for post processing.
I'd like to do this, because I'd like to be able to include these command files in our builds so that people can run them themselves and include the post processing commands for people to see what passed and failed.
The text file also gives me a very easy way to add and remove tests on a per-build basis since it goes along with the build.
Thanks, Paul Larson
On Fri, Mar 9, 2012 at 12:16 AM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
On 8 March 2012 23:44, Paul Larson paul.larson@linaro.org wrote:
On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Right. If a command in the command log causes the unit-under-test to do any of those things, then the unit should be rebooted (in the case of a hang) or the reboot should be sensed (in case the command caused a reboot) and when the unit boots back up, LAVA would continue the test on the command after the one that caused it to hang, reboot, freeze, etc. LAVA should save the logs for the entire command file run, including all hangs and reboots.
As mentioned when we talked about this on the phone, I don't think this is the best way to approach the problem. Currently, if a test hangs, times out, reboots improperly (and thus times out because it won't get to the expected place), lava will reboot the system back into the test image. However, at this point, it will mark the previous testsuite it tried to run as a failed run. Then, if there are other tests queued up to run, it will continue running with the next one - NOT try to re-enter the existing test and continue where it left off. This is not a capability currently supported in lava-android-test.
The good news is, I think there's a much more straightforward way to do this. I haven't really seen an example of the kinds of things you want to run, but it seems that it's just a list of several tests, possibly with multiple steps in each test. And if it reboots, or has a problem, you want it to fail that one and continue to the next. This is very simple to do, if you simply define each of those tests as lava-android-tests. Then you can run them all in your test job, and I think it will do exactly what you are looking for here. Also, they can then be easily reused in other test jobs.
Hmm...
Here's what I'd like to do.
I'd like to pass a simple command file to LAVA. I'd like LAVA to execute each test and if that test hangs or causes a reboot I'd like LAVA to pick back up at the next test case. I'd like LAVA to collect all the logs and send them either to LAVA or back to us for post processing.
I'd like to do this, because I'd like to be able to include these command files in our builds so that people can run them themselves and include the post processing commands for people to see what passed and failed.
The text file also gives me a very easy way to add and remove tests on a per-build basis since it goes along with the build.
I get that, but I don't see why you can't have it both ways. If you want a simple set of instructions for someone to manually type at a command line to run through the tests, there's nothing preventing you from including that in a readme, script, or however you want to do that. But those tests described in there would be really nice to convert into reusable tests in lava. Once you have those building blocks, you can arrange them in future builds however you like.
Thanks, Paul Larson
On Fri, Mar 9, 2012 at 8:05 AM, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 12:16 AM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
On 8 March 2012 23:44, Paul Larson paul.larson@linaro.org wrote:
On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Right. If a command in the command log causes the unit-under-test to do any of those things, then the unit should be rebooted (in the case of a hang) or the reboot should be sensed (in case the command caused a reboot) and when the unit boots back up, LAVA would continue the test on the command after the one that caused it to hang, reboot, freeze, etc. LAVA should save the logs for the entire command file run, including all hangs and reboots.
As mentioned when we talked about this on the phone, I don't think this is the best way to approach the problem. Currently, if a test hangs, times out, reboots improperly (and thus times out because it won't get to the expected place), lava will reboot the system back into the test image. However, at this point, it will mark the previous testsuite it tried to run as a failed run. Then, if there are other tests queued up to run, it will continue running with the next one - NOT try to re-enter the existing test and continue where it left off. This is not a capability currently supported in lava-android-test.
The good news is, I think there's a much more straightforward way to do this. I haven't really seen an example of the kinds of things you want to run, but it seems that it's just a list of several tests, possibly with multiple steps in each test. And if it reboots, or has a problem, you want it to fail that one and continue to the next. This is very simple to do, if you simply define each of those tests as lava-android-tests. Then you can run them all in your test job, and I think it will do exactly what you are looking for here. Also, they can then be easily reused in other test jobs.
Hmm...
Here's what I'd like to do.
I'd like to pass a simple command file to LAVA. I'd like LAVA to execute each test and if that test hangs or causes a reboot I'd like LAVA to pick back up at the next test case. I'd like LAVA to collect all the logs and send them either to LAVA or back to us for post processing.
I'd like to do this, because I'd like to be able to include these command files in our builds so that people can run them themselves and include the post processing commands for people to see what passed and failed.
The text file also gives me a very easy way to add and remove tests on a per-build basis since it goes along with the build.
I get that, but I don't see why you can't have it both ways. If you want a simple set of instructions for someone to manually type at a command line to run through the tests, there's nothing preventing you from including that in a readme, script, or however you want to do that. But those tests described in there would be really nice to convert into reusable tests in lava. Once you have those building blocks, you can arrange them in future builds however you like.
I think what Zach is saying here is that he wants to use a single file to maintain the list of tests to run for a build. That file is then sourced both by a tool used by developers for local testing as well as by LAVA.
He most likely also would want the local tool used by devs to be a standard solution provided by the lava test framework...
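For example, the shared artifact could be nothing more than a plain-text command list shipped with the build, plus a small runner that a developer uses locally and that a LAVA wrapper could reuse. (This is only a sketch; the file name, file format and output markers below are made up to illustrate the idea, not an existing interface.)

#!/usr/bin/env python
# Illustrative only: run the build's command list (e.g. test-commands.txt)
# on an attached device over adb. The file name, format and '###' output
# markers are assumptions, not an existing LAVA or Android build convention.
import subprocess
import sys

def load_commands(path):
    # One device-side command per line; blank lines and '#' comments skipped.
    with open(path) as f:
        return [l.strip() for l in f if l.strip() and not l.startswith('#')]

def run_on_device(command, serial=None, timeout=600):
    # Execute a single command on the device and capture its output.
    adb = ['adb'] + (['-s', serial] if serial else []) + ['shell', command]
    proc = subprocess.run(adb, capture_output=True, text=True, timeout=timeout)
    return proc.returncode, proc.stdout + proc.stderr

if __name__ == '__main__':
    path = sys.argv[1] if len(sys.argv) > 1 else 'test-commands.txt'
    for cmd in load_commands(path):
        rc, log = run_on_device(cmd)
        print('### %s -> rc=%d' % (cmd, rc))
        print(log)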
On Fri, Mar 9, 2012 at 7:20 AM, Alexander Sack asac@linaro.org wrote:
On Fri, Mar 9, 2012 at 8:05 AM, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 12:16 AM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
On 8 March 2012 23:44, Paul Larson paul.larson@linaro.org wrote:
On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Right. If a command in the command log causes the unit-under-test to do any of those things, then the unit should be rebooted (in the case of a hang) or the reboot should be sensed (in case the command caused a reboot) and when the unit boots back up, LAVA would continue the test on the command after the one that caused it to hang, reboot, freeze, etc. LAVA should save the logs for the entire command file run, including all hangs and reboots.
As mentioned when we talked about this on the phone, I don't think this is the best way to approach the problem. Currently, if a test hangs, times out, reboots improperly (and thus times out because it won't get to the expected place), lava will reboot the system back into the test image. However, at this point, it will mark the previous testsuite it tried to run as a failed run. Then, if there are other tests queued up to run, it will continue running with the next one - NOT try to re-enter the existing test and continue where it left off. This is not a capability currently supported in lava-android-test.
The good news is, I think there's a much more straightforward way to do this. I haven't really seen an example of the kinds of things you want to run, but it seems that it's just a list of several tests, possibly with multiple steps in each test. And if it reboots, or has a problem, you want it to fail that one and continue to the next. This is very simple to do, if you simply define each of those tests as lava-android-tests. Then you can run them all in your test job, and I think it will do exactly what you are looking for here. Also, they can then be easily reused in other test jobs.
Hmm...
Here's what I'd like to do.
I'd like to pass a simple command file to LAVA. I'd like LAVA to execute each test and if that test hangs or causes a reboot I'd like LAVA to pick back up at the next test case. I'd like LAVA to collect all the logs and send them either to LAVA or back to us for post processing.
I'd like to do this, because I'd like to be able to include these command files in our builds so that people can run them themselves and include the post processing commands for people to see what passed and failed.
The text file also gives me a very easy way to add and remove tests on a per-build basis since it goes along with the build.
I get that, but I don't see why you can't have it both ways. If you want a simple set of instructions for someone to manually type at a command line to run through the tests, there's nothing preventing you from including that in a readme, script, or however you want to do that. But those tests described in there would be really nice to convert into reusable tests in lava. Once you have those building blocks, you can arrange them in future builds however you like.
I think what Zach is saying here is that he wants to use a single file to maintain the list of tests to run for a build. That file is then sourced both by a tool used by developers for local testing as well as by LAVA.
He most likely also would want the local tool used by devs to be a standard solution provided by the lava test framework...
I know, but what I'm seeing here doesn't look like something that's possible right now, and I think it might take some pretty serious effort and re-engineering of how lava works around this particular use case to make it work. What I'm trying to show is that there's more than one way to think about this, and that with a slight change it goes from something that we need to discuss at the connect and get some effort behind making happen, to something that he can have today. And at the same time, doing it in this alternative way builds up a reusable set of android tests that can be applied to future builds.
Thanks, Paul Larson
On 9 March 2012 07:33, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 7:20 AM, Alexander Sack asac@linaro.org wrote:
On Fri, Mar 9, 2012 at 8:05 AM, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 12:16 AM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
On 8 March 2012 23:44, Paul Larson paul.larson@linaro.org wrote:
On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Right. If a command in the command log causes the unit-under-test to do any of those things, then the unit should be rebooted (in the case of a hang) or the reboot should be sensed (in case the command caused a reboot) and when the unit boots back up, LAVA would continue the test on the command after the one that caused it to hang, reboot, freeze, etc. LAVA should save the logs for the entire command file run, including all hangs and reboots.
As mentioned when we talked about this on the phone, I don't think this is the best way to approach the problem. Currently, if a test hangs, times out, reboots improperly (and thus times out because it won't get to the expected place), lava will reboot the system back into the test image. However, at this point, it will mark the previous testsuite it tried to run as a failed run. Then, if there are other tests queued up to run, it will continue running with the next one - NOT try to re-enter the existing test and continue where it left off. This is not a capability currently supported in lava-android-test.
The good news is, I think there's a much more straightforward way to do this. I haven't really seen an example of the kinds of things you want to run, but it seems that it's just a list of several tests, possibly with multiple steps in each test. And if it reboots, or has a problem, you want it to fail that one and continue to the next. This is very simple to do, if you simply define each of those tests as lava-android-tests. Then you can run them all in your test job, and I think it will do exactly what you are looking for here. Also, they can then be easily reused in other test jobs.
Hmm...
Here's what I'd like to do.
I'd like to pass a simple command file to LAVA. I'd like LAVA to execute each test and if that test hangs or causes a reboot I'd like LAVA to pick back up at the next test case. I'd like LAVA to collect all the logs and send them either to LAVA or back to us for post processing.
I'd like to do this, because I'd like to be able to include these command files in our builds so that people can run them themselves and include the post processing commands for people to see what passed and failed.
The text file also gives me a very easy way to add and remove tests on a per-build basis since it goes along with the build.
I get that, but I don't see why you can't have it both ways. If you want a simple set of instructions for someone to manually type at a command line to run through the tests, there's nothing preventing you from including that in a readme, script, or however you want to do that. But those tests described in there would be really nice to convert into reusable tests in lava. Once you have those building blocks, you can arrange them in future builds however you like.
I think what Zach is saying here is that he wants to use a single file to maintain the list of tests to run for a build. That file is then sourced both by a tool used by developers for local testing as well as by LAVA.
He most likely also would want the local tool used by devs to be a standard solution provided by the lava test framework...
I know, but what I'm seeing here doesn't look like something that's possible right now, and I think might take some pretty serious effort and re-engineering of how lava works around this particular use case to make it work. What I'm trying to show is that there's more than one way to think about this, and that with a slight change it goes from something that we need to discuss at the connect and get some effort behind making happen, to something that he can have today. And at the same time, doing it in this alternative way builds up a reusable set of android tests that can be applied to future builds.
Paul,
Your proposal is to create a linaro-android-test for each command I would put in a command file right?
Thanks, Paul Larson
I wouldn't think for every single command, no. As mentioned earlier, I suspect it could be broken up into tests comprised of one or more commands. The tests are what would go into lava-android-test.
But I'd like to get a better understanding of your perspective. Would it be possible to provide an example of one of these files so I can get a better idea of what you're looking for here?
Thanks, Paul Larson
On Mar 9, 2012 11:40 AM, "Zach Pfeffer" zach.pfeffer@linaro.org wrote:
On 9 March 2012 07:33, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 7:20 AM, Alexander Sack asac@linaro.org wrote:
On Fri, Mar 9, 2012 at 8:05 AM, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 12:16 AM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
On 8 March 2012 23:44, Paul Larson paul.larson@linaro.org wrote:
On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Right. If a command in the command log causes the unit-under-test to do any of those things, then the unit should be rebooted (in the case of a hang) or the reboot should be sensed (in case the command caused a reboot) and when the unit boots back up, LAVA would continue the test on the command after the one that caused it to hang, reboot, freeze, etc. LAVA should save the logs for the entire command file run, including all hangs and reboots.
As mentioned when we talked about this on the phone, I don't think this is the best way to approach the problem. Currently, if a test hangs, times out, reboots improperly (and thus times out because it won't get to the expected place), lava will reboot the system back into the test image. However, at this point, it will mark the previous testsuite it tried to run as a failed run. Then, if there are other tests queued up to run, it will continue running with the next one - NOT try to re-enter the existing test and continue where it left off. This is not a capability currently supported in lava-android-test.
The good news is, I think there's a much more straightforward way to do this. I haven't really seen an example of the kinds of things you want to run, but it seems that it's just a list of several tests, possibly with multiple steps in each test. And if it reboots, or has a problem, you want it to fail that one and continue to the next. This is very simple to do, if you simply define each of those tests as lava-android-tests. Then you can run them all in your test job, and I think it will do exactly what you are looking for here. Also, they can then be easily reused in other test jobs.
Hmm...
Here's what I'd like to do.
I'd like to pass a simple command file to LAVA. I'd like LAVA to execute each test and if that test hangs or causes a reboot I'd like LAVA to pick back up at the next test case. I'd like LAVA to collect all the logs and send them either to LAVA or back to us for post processing.
I'd like to do this, because I'd like to be able to include these command files in our builds so that people can run them themselves and include the post processing commands for people to see what passed and failed.
The text file also gives me a very easy way to add and remove tests on a per-build basis since it goes along with the build.
I get that, but I don't see why you can't have it both ways. If you want a simple set of instructions for someone to manually type at a command line to run through the tests, there's nothing preventing you from including that in a readme, script, or however you want to do that. But those tests described in there would be really nice to convert into reusable tests in lava. Once you have those building blocks, you can arrange them in future builds however you like.
I think what Zach is saying here is that he wants to use a single file to maintain the list of tests to run for a build. That file is then sourced both by a tool used by developers for local testing as well as by LAVA.
He most likely also would want the local tool used by devs to be a standard solution provided by the lava test framework...
I know, but what I'm seeing here doesn't look like something that's possible right now, and I think might take some pretty serious effort and re-engineering of how lava works around this particular use case to make it work. What I'm trying to show is that there's more than one way to think about this, and that with a slight change it goes from something that we need to discuss at the connect and get some effort behind making happen, to something that he can have today. And at the same time, doing it in this alternative way builds up a reusable set of android tests that can be applied to future builds.
Paul,
Your proposal is to create a linaro-android-test for each command I would put in a command file right?
Thanks, Paul Larson
-- Zach Pfeffer Android Platform Team Lead, Linaro Platform Teams Linaro.org | Open source software for ARM SoCs Follow Linaro: http://www.facebook.com/pages/Linaro http://twitter.com/#%21/linaroorg - http://www.linaro.org/linaro-blog
On 9 March 2012 11:47, Paul Larson paul.larson@linaro.org wrote:
I wouldn't think for every single command, no. As mentioned earlier, I suspect it could be broken up into tests comprised of one or more commands. The tests are what would go into lava-android-test.
But I'd like to get a better understanding of your perspective. Would it be possible to provide an example of one of these files so I can get a better idea of what you're looking for here?
Looking at:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
To run "monkey" I must define this:
# Copyright (c) 2011 Linaro
# Author: Linaro Validation Team linaro-dev@lists.linaro.org
#
# This file is part of LAVA Android Test.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see http://www.gnu.org/licenses/.

import os
import lava_android_test.testdef
from lava_android_test.config import get_config

test_name = 'monkey'
config = get_config()
curdir = os.path.realpath(os.path.dirname(__file__))
monkey_sh_name = 'monkey.sh'
monkey_sh_path = os.path.join(curdir, 'monkey', monkey_sh_name)
monkey_sh_android_path = os.path.join(config.installdir_android,
                                      test_name, monkey_sh_name)

INSTALL_STEPS_ADB_PRE = ['push %s %s ' % (monkey_sh_path, monkey_sh_android_path),
                         'shell chmod 777 %s' % monkey_sh_android_path]

ADB_SHELL_STEPS = [monkey_sh_android_path]
#PATTERN = "^(?P<test_case_id>\w+):\W+(?P<measurement>\d+.\d+)"
PATTERN = "## Network stats: elapsed time=(?P<measurement>\d+)ms"
FAILURE_PATTERNS = []
#FAILURE_PATTERNS = ['** Monkey aborted due to error.',
#                    '** System appears to have crashed']

inst = lava_android_test.testdef.AndroidTestInstaller(
    steps_adb_pre=INSTALL_STEPS_ADB_PRE)
run = lava_android_test.testdef.AndroidTestRunner(
    adbshell_steps=ADB_SHELL_STEPS)
parser = lava_android_test.testdef.AndroidTestParser(PATTERN,
    appendall={'units': 'ms'}, failure_patterns=FAILURE_PATTERNS)
testobj = lava_android_test.testdef.AndroidTest(testname=test_name,
    installer=inst, runner=run, parser=parser)

Which calls:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
#!/system/bin/sh
#monkey_cmd="monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 2147483647"
monkey_cmd="monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 500"
echo execute command=${monkey_cmd}
${monkey_cmd}
echo MONKEY_RET_CODE=$?
...so to run a specific instance of monkey:
monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 2147483647
I must write 20 lines of code.
Let's say I want to run monkey N different ways; I can create:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... ... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
...or I could write something that allows me to pass the command line I want for monkey through the JSON binding.
Let's say I want to run monkey in N ways for target A, M ways for target B, and so on...
I won't want to encode all of these as
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... ... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
I really just want to pass a file in that says:
monkey1
monkey2
monkey3
monkey4
...and have LAVA execute that.
So that's generally doable. I'll write my 20 lines to call a test entry point that passes a known-good file.
But each one of the commands I passed in will crash the unit. I want to run them back to back because I want to see for each command how long the unit ran.
Now I could just have the command file that I run, that I wrote the 20 lines for, simply end on a test that I think will crash, but I'll never know for sure. To save me from guessing, the "tester" simply picks back up where it left off.
From a LAVA perspective this would probably boil down to:
program_the_unit
current_command = 0

boot_or_reboot:
    current_command = next_command
    running_test = 1
    while (running_test and current_command) {
        run_command(current_command)
        current_command = next_command
    }
    running_test = 0
So we can write the linaro-android-test that runs a command file that's resident in the build, but I'm still stuck, because if one of the commands hangs the test will be marked "over" and the unit will be reset, reprogrammed, and the next linaro-android-test run. So I'll need to write something that collects up all the runs and does the analysis - or LAVA could execute a test script, collect all the logs, and send them back. It could even accept a parser that would parse all the logs from all the runs and produce lots of nice statistics.
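The post-processing piece could stay small, too. For example (purely illustrative; the "### <command> -> rc=<n>" header it looks for is an assumed log format, not something LAVA emits today):

import re
import sys

# Hypothetical parser for the collected logs from all runs: report which
# commands passed and which failed, assuming each command's output was
# saved under a '### <command> -> rc=<n>' header.
HEADER = re.compile(r'^### (?P<cmd>.+) -> rc=(?P<rc>-?\d+)$')

def summarize(log_path):
    passed, failed = [], []
    with open(log_path) as f:
        for line in f:
            m = HEADER.match(line.strip())
            if m:
                (passed if m.group('rc') == '0' else failed).append(m.group('cmd'))
    return passed, failed

if __name__ == '__main__':
    ok, bad = summarize(sys.argv[1])
    print('%d passed, %d failed' % (len(ok), len(bad)))
    for cmd in bad:
        print('FAIL: %s' % cmd)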
In the end, we will need to self-host all the tests we run, as per George's request. A set of command files and parsers included with each build allows me to satisfy that requirement.
Thanks, Paul Larson
On Mar 9, 2012 11:40 AM, "Zach Pfeffer" zach.pfeffer@linaro.org wrote:
On 9 March 2012 07:33, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 7:20 AM, Alexander Sack asac@linaro.org wrote:
On Fri, Mar 9, 2012 at 8:05 AM, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 12:16 AM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Hmm...
Here's what I'd like to do.
I'd like to pass a simple command file to LAVA. I'd like LAVA to execute each test and if that test hangs or causes a reboot I'd like LAVA to pick back up at the next test case. I'd like LAVA to collect all the logs and send them either to LAVA or back to us for post processing.
I'd like to do this, because I'd like to be able to include these command files in our builds so that people can run them themselves and include the post processing commands for people to see what passed and failed.
The text file also gives me a very easy way to add and remove tests on a per-build basis since it goes along with the build.
I get that, but I don't see why you can't have it both ways. If you want a simple set of instructions for someone to manually type at a command line to run through the tests, there's nothing preventing you from including that in a readme, script, or however you want to do that. But those tests described in there would be really nice to convert into reusable tests in lava. Once you have those building blocks, you can arrange them in future builds however you like.
I think what Zach is saying here is that he wants to use a single file to maintain the list of tests to run for a build. That file is then sourced both by a tool used by developers for local testing as well as by LAVA.
He most likely also would want the local tool used by devs to be a standard solution provided by the lava test framework...
I know, but what I'm seeing here doesn't look like something that's possible right now, and I think might take some pretty serious effort and re-engineering of how lava works around this particular use case to make it work. What I'm trying to show is that there's more than one way to think about this, and that with a slight change it goes from something that we need to discuss at the connect and get some effort behind making happen, to something that he can have today. And at the same time, doing it in this alternative way builds up a reusable set of android tests that can be applied to future builds.
Paul,
Your proposal is to create a linaro-android-test for each command I would put in a command file right?
Thanks, Paul Larson
-- Zach Pfeffer Android Platform Team Lead, Linaro Platform Teams Linaro.org | Open source software for ARM SoCs Follow Linaro: http://www.facebook.com/pages/Linaro http://twitter.com/#%21/linaroorg - http://www.linaro.org/linaro-blog
On 09.03.2012 19:21, Zach Pfeffer wrote:
On 9 March 2012 11:47, Paul Larsonpaul.larson@linaro.org wrote:
I wouldn't think for every single command, no. As mentioned earlier, I suspect it could be broken up into tests comprised of one or more commands. The tests are what would go into lava-android-test.
But I'd like to get a better understanding of your perspective. Would it be possible to provide an example of one of these files so I can get a better idea of what you're looking for here?
Looking at:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
To run "monkey" I must define this:
# Copyright (c) 2011 Linaro
# Author: Linaro Validation Team linaro-dev@lists.linaro.org
#
# This file is part of LAVA Android Test.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see http://www.gnu.org/licenses/.

import os
import lava_android_test.testdef
from lava_android_test.config import get_config

test_name = 'monkey'
config = get_config()
curdir = os.path.realpath(os.path.dirname(__file__))
monkey_sh_name = 'monkey.sh'
monkey_sh_path = os.path.join(curdir, 'monkey', monkey_sh_name)
monkey_sh_android_path = os.path.join(config.installdir_android,
                                      test_name, monkey_sh_name)

INSTALL_STEPS_ADB_PRE = ['push %s %s ' % (monkey_sh_path, monkey_sh_android_path),
                         'shell chmod 777 %s' % monkey_sh_android_path]

ADB_SHELL_STEPS = [monkey_sh_android_path]
#PATTERN = "^(?P<test_case_id>\w+):\W+(?P<measurement>\d+.\d+)"
PATTERN = "## Network stats: elapsed time=(?P<measurement>\d+)ms"
FAILURE_PATTERNS = []
#FAILURE_PATTERNS = ['** Monkey aborted due to error.',
#                    '** System appears to have crashed']

inst = lava_android_test.testdef.AndroidTestInstaller(
    steps_adb_pre=INSTALL_STEPS_ADB_PRE)
run = lava_android_test.testdef.AndroidTestRunner(
    adbshell_steps=ADB_SHELL_STEPS)
parser = lava_android_test.testdef.AndroidTestParser(PATTERN,
    appendall={'units': 'ms'}, failure_patterns=FAILURE_PATTERNS)
testobj = lava_android_test.testdef.AndroidTest(testname=test_name,
    installer=inst, runner=run, parser=parser)

Which calls:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
#!/system/bin/sh
#monkey_cmd="monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 2147483647"
monkey_cmd="monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 500"
echo execute command=${monkey_cmd}
${monkey_cmd}
echo MONKEY_RET_CODE=$?
...so to run a specific instance of monkey:
monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 2147483647
I must write 20 lines of code.
Lets say I want to run monkey N different ways, I can create:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... ... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
...or I could write something that allows me to pass the command line I want to monkey through the JSON binding.
Lets say I want to run monkey in N ways for target A and M ways for target B and etc...
I won't want to encode all of these as
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... ... http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
I really just want to pass a file in that says:
monkey1 monkey2 monkey3 monkey4
Then all you need is one monkey with all the parameters exposed. Everything else will be specified by the dispatcher job. You can specify targets and any parameters.
You are attempting to solve a non-issue. We have everything needed to support your use case now.
Thanks ZK
...and have LAVA execute that.
So that's generally doable. I'll write my 20 lines to call a test entiry point that passes a known good file.
But each one of the commands I passed in will crash the unit. I want to run them back to back because I want to see for each command how long the unit ran.
Now I could just have the command file that I run, that I wrote the 20 lines for, simply end on a test that I think will crash, but I'll never know for sure. To save me from guessing, the "tester" simply picks back up where it left off.
From a LAVA perspective this would probably be down to:
program_the_unit
current_command = 0

boot_or_reboot:
    current_command = next_command
    running_test = 1
    while (running_test and current_command) {
        run_command(current_command)
        current_command = next_command
    }
    running_test = 0
So we can write the linaro-android-test that runs a command file that's resident in the build, but I'm still stuck because if one of the commands hangs the test will be marked "over" and the unit will be reset, reprogramed and the next linaro-android-test run. So I'll need to write something the collects up all the runs and does the analysis
- or LAVA could execute a test script, collect all the logs, and send
them back. It could even accept a parser that would be able to parse all the logs from all the runs and take lots of nice statistics,
In the end, we will need to self host all the tests we run as per George's request. A set of command files and parsers included with each build allow me to satisfy that requirement.
Thanks, Paul Larson
On Mar 9, 2012 11:40 AM, "Zach Pfeffer"zach.pfeffer@linaro.org wrote:
On 9 March 2012 07:33, Paul Larsonpaul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 7:20 AM, Alexander Sackasac@linaro.org wrote:
On Fri, Mar 9, 2012 at 8:05 AM, Paul Larsonpaul.larson@linaro.org wrote:
I get that, but I don't see why you can't have it both ways. If you want a simple set of instructions for someone to manually type at a command line to run through the tests, there's nothing preventing you from including that in a readme, script, or however you want to do that. But those tests described in there would be really nice to convert into reusable tests in lava. Once you have those building blocks, you can arrange them in future builds however you like.
I think what Zach is saying here is that he wants to use a single file to maintain the list of tests to run for a build. That file is then sourced both by a tool used by developers for local testing as well as by LAVA.
He most likely also would want the local tool used by devs to be a standard solution provided by the lava test framework...
I know, but what I'm seeing here doesn't look like something that's possible right now, and I think might take some pretty serious effort and re-engineering of how lava works around this particular use case to make it work. What I'm trying to show is that there's more than one way to think about this, and that with a slight change it goes from something that we need to discuss at the connect and get some effort behind making happen, to something that he can have today. And at the same time, doing it in this alternative way builds up a reusable set of android tests that can be applied to future builds.
Paul,
Your proposal is to create a linaro-android-test for each command I would put in a command file right?
Thanks, Paul Larson
-- Zach Pfeffer Android Platform Team Lead, Linaro Platform Teams Linaro.org | Open source software for ARM SoCs Follow Linaro: http://www.facebook.com/pages/Linaro http://twitter.com/#%21/linaroorg - http://www.linaro.org/linaro-blog
On 03/09/2012 12:39 PM, Zygmunt Krynicki wrote:
I really just want to pass a file in that says:
monkey1 monkey2 monkey3 monkey4
Then all you need is one monkey with all the parameters exposed. Everything else will be specified by the dispatcher job. You can specify targets and any parameters.
I'm doing something along these lines now for my proposal with the coremark benchmark. In this case a parameter tells LAVA *where* to download the actual benchmark, but the same idea could be used to tell LAVA what monkey commands you'd like to run. My method uses the "install_options" in lava-android-test.
On Fri, Mar 9, 2012 at 1:07 PM, Andy Doan andy.doan@linaro.org wrote:
... I'm doing something along these lines now for my proposal with the coremark benchmark. In this case a parameter tells LAVA *where* to download the actual benchmark, but the same idea could be used to tell LAVA what monkey commands you'd like to run. My method uses the "install_options" in lava-android-test.
Exactly. If this is just monkey with different params, we could either do something like that, or make sure that lava-android-test has the capability to support a default command line option that can be overridden, and use that to call monkey with different params. In the case you describe here though, Zach, I can see why it's a nice thing to provide different monkey "presets" for your users to be able to run conveniently.
I think this can probably be done with a relatively minor modification to the monkey test wrapper.
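For example (a sketch of the idea only, not the actual change; it assumes monkey.sh is modified to pass its arguments straight through to monkey, falling back to a default when none are given):

# Sketch: parameterize the existing monkey wrapper instead of cloning it N times.
# It reuses the classes from the test definition quoted earlier in the thread;
# make_monkey_test() and DEFAULT_ARGS are illustrative names, and passing the
# argument string into monkey.sh is an assumption, not the current code.
import os
import lava_android_test.testdef
from lava_android_test.config import get_config

config = get_config()
curdir = os.path.realpath(os.path.dirname(__file__))
monkey_sh_path = os.path.join(curdir, 'monkey', 'monkey.sh')
monkey_sh_android = os.path.join(config.installdir_android, 'monkey', 'monkey.sh')

DEFAULT_ARGS = ('-s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 '
                '--pct-majornav 30 --pct-appswitch 20 --throttle 500 500')

def make_monkey_test(monkey_args=DEFAULT_ARGS, test_name='monkey'):
    # Push the wrapper script, then run it with the chosen argument string.
    inst = lava_android_test.testdef.AndroidTestInstaller(
        steps_adb_pre=['push %s %s' % (monkey_sh_path, monkey_sh_android),
                       'shell chmod 777 %s' % monkey_sh_android])
    run = lava_android_test.testdef.AndroidTestRunner(
        adbshell_steps=['%s %s' % (monkey_sh_android, monkey_args)])
    parser = lava_android_test.testdef.AndroidTestParser(
        "## Network stats: elapsed time=(?P<measurement>\\d+)ms",
        appendall={'units': 'ms'})
    return lava_android_test.testdef.AndroidTest(
        testname=test_name, installer=inst, runner=run, parser=parser)

testobj = make_monkey_test()  # default preset
# A stress preset would just be another call, e.g.:
# testobj = make_monkey_test(DEFAULT_ARGS.replace('500 500', '500 2147483647'),
#                            'monkey-stress')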
Thanks, Paul Larson
On 9 March 2012 12:39, Zygmunt Krynicki zygmunt.krynicki@linaro.org wrote:
On 09.03.2012 19:21, Zach Pfeffer wrote:
On 9 March 2012 11:47, Paul Larsonpaul.larson@linaro.org wrote:
I wouldn't think for every single command, no. As mentioned earlier, I suspect it could be broken up into tests comprised of one or more commands. The tests are what would go into lava-android-test.
But I'd like to get a better understanding of your perspective. Would it be possible to provide an example of one of these files so I can get a better idea of what you're looking for here?
Looking at:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
To run "monkey" I must define this:
# Copyright (c) 2011 Linaro
# Author: Linaro Validation Team linaro-dev@lists.linaro.org
#
# This file is part of LAVA Android Test.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see http://www.gnu.org/licenses/.

import os
import lava_android_test.testdef
from lava_android_test.config import get_config

test_name = 'monkey'
config = get_config()
curdir = os.path.realpath(os.path.dirname(__file__))
monkey_sh_name = 'monkey.sh'
monkey_sh_path = os.path.join(curdir, 'monkey', monkey_sh_name)
monkey_sh_android_path = os.path.join(config.installdir_android,
                                      test_name, monkey_sh_name)

INSTALL_STEPS_ADB_PRE = ['push %s %s ' % (monkey_sh_path, monkey_sh_android_path),
                         'shell chmod 777 %s' % monkey_sh_android_path]

ADB_SHELL_STEPS = [monkey_sh_android_path]
#PATTERN = "^(?P<test_case_id>\w+):\W+(?P<measurement>\d+.\d+)"
PATTERN = "## Network stats: elapsed time=(?P<measurement>\d+)ms"
FAILURE_PATTERNS = []
#FAILURE_PATTERNS = ['** Monkey aborted due to error.',
#                    '** System appears to have crashed']

inst = lava_android_test.testdef.AndroidTestInstaller(
    steps_adb_pre=INSTALL_STEPS_ADB_PRE)
run = lava_android_test.testdef.AndroidTestRunner(
    adbshell_steps=ADB_SHELL_STEPS)
parser = lava_android_test.testdef.AndroidTestParser(PATTERN,
    appendall={'units': 'ms'}, failure_patterns=FAILURE_PATTERNS)
testobj = lava_android_test.testdef.AndroidTest(testname=test_name,
    installer=inst, runner=run, parser=parser)

Which calls:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
#!/system/bin/sh
#monkey_cmd="monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 2147483647"
monkey_cmd="monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 500"
echo execute command=${monkey_cmd}
${monkey_cmd}
echo MONKEY_RET_CODE=$?
...so to run a specific instance of monkey:
monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 2147483647
I must write 20 lines of code.
Lets say I want to run monkey N different ways, I can create:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... ...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
...or I could write something that allows me to pass the command line I want to monkey through the JSON binding.
Lets say I want to run monkey in N ways for target A and M ways for target B and etc...
I won't want to encode all of these as
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... ...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
I really just want to pass a file in that says:
monkey1 monkey2 monkey3 monkey4
Then all you need is one monkey with all the parameters exposed. Everything else will be specified by the dispatcher job. You can specify targets and any parameters.
monkey is just an example of a command. The commands could be anything in the command file:
command1 param1
command2 param2
etc...
You are attempting to solve a non-issue. We have everything needed to support your use case now.
I don't think so, based on the feedback Paul's given me. My issue is: run through a set of commands that may individually fail; if one fails, reboot the unit and keep going with the next command, recording the output from all runs.
Thanks ZK
...and have LAVA execute that.
So that's generally doable. I'll write my 20 lines to call a test entry point that passes a known-good file.
But each one of the commands I passed in will crash the unit. I want to run them back to back because I want to see for each command how long the unit ran.
Now I could just have the command file that I run, that I wrote the 20 lines for, simply end on a test that I think will crash, but I'll never know for sure. To save me from guessing, the "tester" simply picks back up where it left off.
From a LAVA perspective this would probably come down to:
program_the_unit
current_command = 0

boot_or_reboot:
    current_command = next_command
    running_test = 1
    while (running_test and current_command) {
        run_command(current_command)
        current_command = next_command
    }
    running_test = 0
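Slightly more concretely, assuming the dispatcher checkpointed which command it was on so it can resume after a hang or reboot (the checkpoint file and the helper names below are assumptions, not existing LAVA code):

# Sketch: resume a command file after the unit hangs or reboots.
# run_adb_command() and reboot_unit() stand in for whatever the dispatcher
# actually uses to drive the target.
import os

CHECKPOINT = '/tmp/lava-command-index'

def load_index():
    return int(open(CHECKPOINT).read()) if os.path.exists(CHECKPOINT) else 0

def save_index(i):
    with open(CHECKPOINT, 'w') as f:
        f.write(str(i))

def run_command_file(commands, run_adb_command, reboot_unit, log):
    for i, cmd in enumerate(commands):
        if i < load_index():
            continue                   # already attempted before the last reboot
        save_index(i + 1)              # checkpoint before running the risky command
        try:
            log.append((cmd, run_adb_command(cmd)))
        except Exception as err:       # hang/timeout/crash reported by the harness
            log.append((cmd, 'FAILED: %s' % err))
            reboot_unit()              # boot back into the same image and carry on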
So we can write the linaro-android-test that runs a command file that's resident in the build, but I'm still stuck, because if one of the commands hangs, the test will be marked "over" and the unit will be reset, reprogrammed, and the next linaro-android-test run. So I'll need to write something that collects up all the runs and does the analysis
- or LAVA could execute a test script, collect all the logs, and send them back. It could even accept a parser that would be able to parse all the logs from all the runs and produce lots of nice statistics.
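As a rough illustration of that kind of post-processing parser (the log line format below is an assumption; a real parser would match whatever the command file actually prints):

# Sketch: aggregate how long the unit ran for each command across all runs,
# assuming each interesting log line looks like "<command>: elapsed time=<N>ms".
import re
import sys
from collections import defaultdict

LINE = re.compile(r'^(?P<cmd>.+?): elapsed time=(?P<ms>\d+)ms$')

def parse_logs(paths):
    stats = defaultdict(list)
    for path in paths:
        for line in open(path):
            m = LINE.match(line.strip())
            if m:
                stats[m.group('cmd')].append(int(m.group('ms')))
    return stats

if __name__ == '__main__':
    for cmd, times in sorted(parse_logs(sys.argv[1:]).items()):
        print('%s: %d run(s), longest %dms' % (cmd, len(times), max(times)))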
In the end, we will need to self-host all the tests we run, as per George's request. A set of command files and parsers included with each build allows me to satisfy that requirement.
Thanks, Paul Larson
On Mar 9, 2012 11:40 AM, "Zach Pfeffer" zach.pfeffer@linaro.org wrote:
On 9 March 2012 07:33, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 7:20 AM, Alexander Sack asac@linaro.org wrote:
On Fri, Mar 9, 2012 at 8:05 AM, Paul Larsonpaul.larson@linaro.org wrote: > > > On Fri, Mar 9, 2012 at 12:16 AM, Zach Pfeffer > zach.pfeffer@linaro.org > wrote: >> >> >> On 8 March 2012 23:44, Paul Larsonpaul.larson@linaro.org wrote: >>> >>> >>> >>> On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer >>> zach.pfeffer@linaro.org >>> wrote: >>>> >>>> >>>> Right. If a command in the command log causes the unit-under-test >>>> to >>>> do any of those things, then the unit should be rebooted (in the >>>> case >>>> of a hang) or the reboot should be sensed (in case the command >>>> caused >>>> a reboot) and when the unit boots back up, LAVA would continue the >>>> test on the command after the one that caused it to hang, reboot, >>>> freeze, etc. LAVA should save the logs for the entire command file >>>> run, including all hangs and reboots. >>> >>> >>> As mentioned when we talked about this on the phone, I don't think >>> this is >>> the best way to approach the problem. Currently, if a test hangs, >>> times out, >>> reboots improperly (and thus times out because it won't get to the >>> expected >>> place), lava will reboot the system back into the test image. >>> However, at >>> this point, it will mark the previous testsuite it tried to run as >>> a >>> failed >>> run. Then, if there are other tests queued up to run, it will >>> continue >>> running with the next one - NOT try to re-enter the existing test >>> and >>> continue where it left off. This is not a capability currently >>> supported in >>> lava-android-test. >>> >>> The good news is, I think there's a much more straightforward way >>> to >>> do >>> this. >>> I haven't really seen an example of the kinds of things you want to >>> run, but >>> it seems that it's just a list of several tests, possibly with >>> multiple >>> steps in each test. And if it reboots, or has a problem, you want >>> it >>> to >>> fail that one and continue to the next. This is very simple to do, >>> if >>> you >>> simply define each of those tests as lava-android-tests. Then you >>> can >>> run >>> them all in your test job, and I think it will do exactly what you >>> are >>> looking for here. Also, they can then be easily reused in other >>> test >>> jobs. >> >> >> Hmm... >> >> Here's what I'd like to do. >> >> I'd like to pass a simple command file to LAVA. I'd like LAVA to >> execute each test and if that test hangs or causes a reboot I'd like >> LAVA to pick back up at the next test case. I'd like LAVA to collect >> all the logs and send them either to LAVA or back to us for post >> processing. >> >> I'd like to do this, because I'd like to be able to include these >> command files in our builds so that people can run them themselves >> and >> include the post processing commands for people to see what passed >> and >> failed. >> >> The text file also gives me a very easy way to add and remove tests >> on >> a per-build basis since it goes along with the build. > > > I get that, but I don't see why you can't have it both ways. If you > want > a simple set of instructions for someone to manually type at a > command > line > to run through the tests, there's nothing preventing you from > including that > in a readme, script, or however you want to do that. But those tests > described in there would be really nice to convert into reusable > tests > in > lava. Once you have those building blocks, you can arrange them in > future > builds however you like. >
I think what Zach is saying here is that he wants to use a single file to maintain the list of tests to run for a build. That file is then sourced both by a tool used by developers for local testing as well as by LAVA.
He most likely also would want the local tool used by devs to be a standard solution provided by the lava test framework...
I know, but what I'm seeing here doesn't look like something that's possible right now, and I think might take some pretty serious effort and re-engineering of how lava works around this particular use case to make it work. What I'm trying to show is that there's more than one way to think about this, and that with a slight change it goes from something that we need to discuss at the connect and get some effort behind making happen, to something that he can have today. And at the same time, doing it in this alternative way builds up a reusable set of android tests that can be applied to future builds.
Paul,
Your proposal is to create a linaro-android-test for each command I would put in a command file, right?
Thanks, Paul Larson
-- Zach Pfeffer Android Platform Team Lead, Linaro Platform Teams Linaro.org | Open source software for ARM SoCs Follow Linaro: http://www.facebook.com/pages/Linaro http://twitter.com/#%21/linaroorg - http://www.linaro.org/linaro-blog
-- Zygmunt Krynicki Linaro Validation Team
On 03/09/2012 01:59 PM, Zach Pfeffer wrote:
Then all you need is one monkey with all the parameters exposed. Everything else will be specified by the dispatcher job. You can specify targets and any parameters.
monkey is just an example of a command. The commands could be anything in the command file:
command1 param1
command2 param2
etc...
You are attempting to solve a non-issue. We have everything needed to support your use case now.
I don't think so, based on the feedback Paul's given me. My issue is: run through a set of commands that may individually fail, and if one fails, reboot the unit and keep going with the next command, recording the output from all runs.
I believe Paul thinks there's a way to handle reboots (or it could be written) between LAVA tests. So could we compromise with something like this:
Make each test that's known to cause Android to reboot its own lava-android-test. Put the other commands that need to be run, and that don't cause reboots, into one lava-android-test. This probably works best if the majority of what you want to run doesn't cause reboots. So let's pretend we have 4 commands:
* bam
* foo
* bar
* blah
If "bam" is the only one that causes a reboot, we make it its own lava-android-test. We could submit LAVA jobs that call the "bam" test with different parameters. So bam1, bam2, and bam3 tests could be attempted.
We then have one lava-android-test that runs "foo", "bar", and "blah". Again, this could call these with different parameters.
So in the end you could have a LAVA job that could exercise all the tests in any order with any set of parameters. These tests could also still be run independently of LAVA on Android for developers not interested in LAVA.
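A sketch of how such a job could be put together (the action name and "option" parameter are illustrative assumptions; the only point is the split between the reboot-prone test and the combined test):

# Sketch: one job that tries "bam" with several parameter sets, then runs
# the non-rebooting commands as a single combined test.
import json

def run_test(name, option=''):
    return {"command": "lava_android_test_run",     # assumed action name
            "parameters": {"test_name": name, "option": option}}

job = {"actions": [
    run_test("bam", "--variant 1"),   # bam1: may reboot the unit, fails on its own
    run_test("bam", "--variant 2"),   # bam2
    run_test("bam", "--variant 3"),   # bam3
    run_test("foo_bar_blah"),         # foo, bar and blah grouped into one test
]}
print(json.dumps(job, indent=4))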
-andy
On 09.03.2012 20:59, Zach Pfeffer wrote:
On 9 March 2012 12:39, Zygmunt Krynicki zygmunt.krynicki@linaro.org wrote:
On 09.03.2012 19:21, Zach Pfeffer wrote:
On 9 March 2012 11:47, Paul Larson paul.larson@linaro.org wrote:
I wouldn't think for every single command, no. As mentioned earlier, I suspect it could be broken up into tests comprised of one or more commands. The tests are what would go into lava-android-test.
But I'd like to get a better understanding of your perspective. Would it be possible to provide an example of one of these files so I can get a better idea of what you're looking for here?
Looking at:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
To run "monkey" I must define this:
# Copyright (c) 2011 Linaro
# Author: Linaro Validation Team <linaro-dev@lists.linaro.org>
#
# This file is part of LAVA Android Test.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import os
import lava_android_test.testdef
from lava_android_test.config import get_config

test_name = 'monkey'
config = get_config()
curdir = os.path.realpath(os.path.dirname(__file__))
monkey_sh_name = 'monkey.sh'
monkey_sh_path = os.path.join(curdir, 'monkey', monkey_sh_name)
monkey_sh_android_path = os.path.join(config.installdir_android,
                                      test_name, monkey_sh_name)

INSTALL_STEPS_ADB_PRE = ['push %s %s ' % (monkey_sh_path, monkey_sh_android_path),
                         'shell chmod 777 %s' % monkey_sh_android_path]

ADB_SHELL_STEPS = [monkey_sh_android_path]
#PATTERN = "^(?P<test_case_id>\w+):\W+(?P<measurement>\d+.\d+)"
PATTERN = "## Network stats: elapsed time=(?P<measurement>\d+)ms"
FAILURE_PATTERNS = []
#FAILURE_PATTERNS = ['** Monkey aborted due to error.',
#                    '** System appears to have crashed']

inst = lava_android_test.testdef.AndroidTestInstaller(
    steps_adb_pre=INSTALL_STEPS_ADB_PRE)
run = lava_android_test.testdef.AndroidTestRunner(
    adbshell_steps=ADB_SHELL_STEPS)
parser = lava_android_test.testdef.AndroidTestParser(PATTERN,
    appendall={'units': 'ms'},
    failure_patterns=FAILURE_PATTERNS)
testobj = lava_android_test.testdef.AndroidTest(testname=test_name,
    installer=inst, runner=run, parser=parser)

Which calls:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
#!/system/bin/sh
#monkey_cmd="monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 2147483647"
monkey_cmd="monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 500"
echo execute command=${monkey_cmd}
${monkey_cmd}
echo MONKEY_RET_CODE=$?
...so to run a specific instance of monkey:
monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 2147483647
I must write 20 lines of code.
Let's say I want to run monkey N different ways; I can create:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... ...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
...or I could write something that allows me to pass the command line I want for monkey through the JSON binding.
Let's say I want to run monkey in N ways for target A, M ways for target B, and so on...
I won't want to encode all of these as
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... ...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
I really just want to pass a file in that says:
monkey1
monkey2
monkey3
monkey4
Then all you need is one monkey with all the parameters exposed. Everything else will be specified by the dispatcher job. You can specify targets and any parameters.
monkey is just an example of a command. The commands could be anything in the command file:
If you want to run random shell commands on the validation servers themselves, then think again. We're not ready for that kind of freedom. If you want to run shell commands on your target, then we _can_ support that easily, sanely, and without much fuss.
command1 param1
command2 param2
etc...
You are attempting to solve a non-issue. We have everything needed to support your use case now.
I don't think so, based on the feedback Paul's given me. My issue is: run through a set of commands that may individually fail, and if one fails, reboot the unit and keep going with the next command, recording the output from all runs.
Right, that looks like two things to me:
1) Standard dispatcher job with all the stuff we have now
2) Support for running a shell command on the target with a new flag "reboot_on_failure". That boots you back into the same image.
Everything else is already there: tracking output, storing it, and giving you a nice look into the process.
I don't intend to say "meh, go away, your problems are imaginary". I'm trying to say "sure, this is easy". Based on what you've said, we can support that with a minor extension (I don't track all Android development, so maybe running a random shell command on the target is already supported).
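For the shape of it, a job action with the proposed flag might look like this (purely illustrative -- the "reboot_on_failure" flag is only being proposed here, and the action name is borrowed from elsewhere in this thread):

# Sketch: the proposed "reboot_on_failure" flag on a run-command action.
import json

actions = [
    {"command": "lava-android-test_run_command",    # action name as used later in this thread; assumed
     "parameters": {"command": "command1 param1",
                    "reboot_on_failure": True}},    # proposed flag: reboot into the same image, keep going
    {"command": "lava-android-test_run_command",
     "parameters": {"command": "command2 param2",
                    "reboot_on_failure": True}},
]
print(json.dumps({"actions": actions}, indent=4))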
I hope we can clearly scope this and get it done this cycle. Thanks ZK
Thanks ZK
...and have LAVA execute that.
So that's generally doable. I'll write my 20 lines to call a test entry point that passes a known-good file.
But each one of the commands I passed in will crash the unit. I want to run them back to back because I want to see for each command how long the unit ran.
Now I could just have the command file that I run, that I wrote the 20 lines for, simply end on a test that I think will crash, but I'll never know for sure. To save me from guessing, the "tester" simply picks back up where it left off.
From a LAVA perspective this would probably come down to:
program_the_unit
current_command = 0

boot_or_reboot:
    current_command = next_command
    running_test = 1
    while (running_test and current_command) {
        run_command(current_command)
        current_command = next_command
    }
    running_test = 0
So we can write the linaro-android-test that runs a command file that's resident in the build, but I'm still stuck, because if one of the commands hangs, the test will be marked "over" and the unit will be reset, reprogrammed, and the next linaro-android-test run. So I'll need to write something that collects up all the runs and does the analysis
- or LAVA could execute a test script, collect all the logs, and send them back. It could even accept a parser that would be able to parse all the logs from all the runs and produce lots of nice statistics.
In the end, we will need to self-host all the tests we run, as per George's request. A set of command files and parsers included with each build allows me to satisfy that requirement.
Thanks, Paul Larson
On Mar 9, 2012 11:40 AM, "Zach Pfeffer" zach.pfeffer@linaro.org wrote:
On 9 March 2012 07:33, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 7:20 AM, Alexander Sackasac@linaro.org wrote: > > > On Fri, Mar 9, 2012 at 8:05 AM, Paul Larsonpaul.larson@linaro.org > wrote: >> >> >> On Fri, Mar 9, 2012 at 12:16 AM, Zach Pfeffer >> zach.pfeffer@linaro.org >> wrote: >>> >>> >>> On 8 March 2012 23:44, Paul Larsonpaul.larson@linaro.org wrote: >>>> >>>> >>>> >>>> On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer >>>> zach.pfeffer@linaro.org >>>> wrote: >>>>> >>>>> >>>>> Right. If a command in the command log causes the unit-under-test >>>>> to >>>>> do any of those things, then the unit should be rebooted (in the >>>>> case >>>>> of a hang) or the reboot should be sensed (in case the command >>>>> caused >>>>> a reboot) and when the unit boots back up, LAVA would continue the >>>>> test on the command after the one that caused it to hang, reboot, >>>>> freeze, etc. LAVA should save the logs for the entire command file >>>>> run, including all hangs and reboots. >>>> >>>> >>>> As mentioned when we talked about this on the phone, I don't think >>>> this is >>>> the best way to approach the problem. Currently, if a test hangs, >>>> times out, >>>> reboots improperly (and thus times out because it won't get to the >>>> expected >>>> place), lava will reboot the system back into the test image. >>>> However, at >>>> this point, it will mark the previous testsuite it tried to run as >>>> a >>>> failed >>>> run. Then, if there are other tests queued up to run, it will >>>> continue >>>> running with the next one - NOT try to re-enter the existing test >>>> and >>>> continue where it left off. This is not a capability currently >>>> supported in >>>> lava-android-test. >>>> >>>> The good news is, I think there's a much more straightforward way >>>> to >>>> do >>>> this. >>>> I haven't really seen an example of the kinds of things you want to >>>> run, but >>>> it seems that it's just a list of several tests, possibly with >>>> multiple >>>> steps in each test. And if it reboots, or has a problem, you want >>>> it >>>> to >>>> fail that one and continue to the next. This is very simple to do, >>>> if >>>> you >>>> simply define each of those tests as lava-android-tests. Then you >>>> can >>>> run >>>> them all in your test job, and I think it will do exactly what you >>>> are >>>> looking for here. Also, they can then be easily reused in other >>>> test >>>> jobs. >>> >>> >>> Hmm... >>> >>> Here's what I'd like to do. >>> >>> I'd like to pass a simple command file to LAVA. I'd like LAVA to >>> execute each test and if that test hangs or causes a reboot I'd like >>> LAVA to pick back up at the next test case. I'd like LAVA to collect >>> all the logs and send them either to LAVA or back to us for post >>> processing. >>> >>> I'd like to do this, because I'd like to be able to include these >>> command files in our builds so that people can run them themselves >>> and >>> include the post processing commands for people to see what passed >>> and >>> failed. >>> >>> The text file also gives me a very easy way to add and remove tests >>> on >>> a per-build basis since it goes along with the build. >> >> >> I get that, but I don't see why you can't have it both ways. If you >> want >> a simple set of instructions for someone to manually type at a >> command >> line >> to run through the tests, there's nothing preventing you from >> including that >> in a readme, script, or however you want to do that. But those tests >> described in there would be really nice to convert into reusable >> tests >> in >> lava. 
Once you have those building blocks, you can arrange them in >> future >> builds however you like. >> > > I think what Zach is saying here is that he wants to use a single file > to > maintain the list of tests to run for a build. That file is then > sourced > both by a tool used by developers for local testing as well as by > LAVA. > > He most likely also would want the local tool used by devs to be a > standard solution provided by the lava test framework... > I know, but what I'm seeing here doesn't look like something that's possible right now, and I think might take some pretty serious effort and re-engineering of how lava works around this particular use case to make it work. What I'm trying to show is that there's more than one way to think about this, and that with a slight change it goes from something that we need to discuss at the connect and get some effort behind making happen, to something that he can have today. And at the same time, doing it in this alternative way builds up a reusable set of android tests that can be applied to future builds.
Paul,
Your proposal is to create a linaro-android-test for each command I would put in a command file, right?
Thanks, Paul Larson
-- Zach Pfeffer Android Platform Team Lead, Linaro Platform Teams Linaro.org | Open source software for ARM SoCs Follow Linaro: http://www.facebook.com/pages/Linaro http://twitter.com/#%21/linaroorg - http://www.linaro.org/linaro-blog
-- Zygmunt Krynicki Linaro Validation Team
Hi, Zach
monkey is just an example of a command. The commands could be anything in the command file:
command1 param1
command2 param2
etc...
We can do the above like this:
1. Your input commands file:

   command1 param1
   command2 param2
2. Turn your commands file into a job file (this can be done by android-build):

   ....
   "actions": [
       {
           "command": "lava-android-test_run_command",
           "parameters": {
               "command": "command1 param1"
           }
       },
       {
           "command": "lava-android-test_run_command",
           "parameters": {
               "command": "command2 param2"
           }
       }
       ....
   ]
   ....
3. Submit the job file generated in step 2 to the LAVA server.
Then, if "command1 param1" fails, reboots the unit, or hangs, the result of that command will be recorded as a failure, but it will not affect the next command; "command2 param2" should still run as expected.
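A minimal sketch of step 2, assuming a plain-text commands file with one command per line (the real android-build integration may differ, and the rest of the job -- device type, image, submission -- is left out):

# Sketch: turn a one-command-per-line commands file into the job fragment above.
import json
import sys

def commands_to_actions(path):
    actions = []
    for line in open(path):
        cmd = line.strip()
        if cmd and not cmd.startswith('#'):     # skip blanks and comments
            actions.append({
                "command": "lava-android-test_run_command",
                "parameters": {"command": cmd},
            })
    return actions

if __name__ == '__main__':
    print(json.dumps({"actions": commands_to_actions(sys.argv[1])}, indent=4))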
Does this meet your requirement?
Thanks, Yongqin Liu
On 9 March 2012 07:20, Alexander Sack asac@linaro.org wrote:
On Fri, Mar 9, 2012 at 8:05 AM, Paul Larson paul.larson@linaro.org wrote:
On Fri, Mar 9, 2012 at 12:16 AM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
On 8 March 2012 23:44, Paul Larson paul.larson@linaro.org wrote:
On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer zach.pfeffer@linaro.org wrote:
Right. If a command in the command log causes the unit-under-test to do any of those things, then the unit should be rebooted (in the case of a hang) or the reboot should be sensed (in case the command caused a reboot) and when the unit boots back up, LAVA would continue the test on the command after the one that caused it to hang, reboot, freeze, etc. LAVA should save the logs for the entire command file run, including all hangs and reboots.
As mentioned when we talked about this on the phone, I don't think this is the best way to approach the problem. Currently, if a test hangs, times out, reboots improperly (and thus times out because it won't get to the expected place), lava will reboot the system back into the test image. However, at this point, it will mark the previous testsuite it tried to run as a failed run. Then, if there are other tests queued up to run, it will continue running with the next one - NOT try to re-enter the existing test and continue where it left off. This is not a capability currently supported in lava-android-test.
The good news is, I think there's a much more straightforward way to do this. I haven't really seen an example of the kinds of things you want to run, but it seems that it's just a list of several tests, possibly with multiple steps in each test. And if it reboots, or has a problem, you want it to fail that one and continue to the next. This is very simple to do, if you simply define each of those tests as lava-android-tests. Then you can run them all in your test job, and I think it will do exactly what you are looking for here. Also, they can then be easily reused in other test jobs.
Hmm...
Here's what I'd like to do.
I'd like to pass a simple command file to LAVA. I'd like LAVA to execute each test and if that test hangs or causes a reboot I'd like LAVA to pick back up at the next test case. I'd like LAVA to collect all the logs and send them either to LAVA or back to us for post processing.
I'd like to do this, because I'd like to be able to include these command files in our builds so that people can run them themselves and include the post processing commands for people to see what passed and failed.
The text file also gives me a very easy way to add and remove tests on a per-build basis since it goes along with the build.
I get that, but I don't see why you can't have it both ways. If you want a simple set of instructions for someone to manually type at a command line to run through the tests, there's nothing preventing you from including that in a readme, script, or however you want to do that. But those tests described in there would be really nice to convert into reusable tests in lava. Once you have those building blocks, you can arrange them in future builds however you like.
I think what Zach is saying here is that he wants to use a single file to maintain the list of tests to run for a build. That file is then sourced both by a tool used by developers for local testing as well as by LAVA.
Yes.
He most likely also would want the local tool used by devs to be a standard solution provided by the lava test framework...
Yes.
-- Alexander Sack Technical Director, Linaro Platform Teams http://www.linaro.org | Open source software for ARM SoCs
http://twitter.com/#%21/linaroorg - http://www.linaro.org/linaro-blog