On 09.03.2012 20:59, Zach Pfeffer wrote:
On 9 March 2012 12:39, Zygmunt Krynicki <zygmunt.krynicki@linaro.org> wrote:
On 09.03.2012 19:21, Zach Pfeffer wrote:
On 9 March 2012 11:47, Paul Larson <paul.larson@linaro.org> wrote:
I wouldn't think for every single command, no. As mentioned earlier, I suspect it could be broken up into tests made up of one or more commands. The tests are what would go into lava-android-test.
But I'd like to get a better understanding of your perspective. Would it be possible to provide an example of one of these files so I can get a better idea of what you're looking for here?
Looking at:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
To run "monkey" I must define this:
# Copyright (c) 2011 Linaro
# Author: Linaro Validation Team <linaro-dev@lists.linaro.org>
#
# This file is part of LAVA Android Test.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import os

import lava_android_test.testdef
from lava_android_test.config import get_config

test_name = 'monkey'
config = get_config()
curdir = os.path.realpath(os.path.dirname(__file__))
monkey_sh_name = 'monkey.sh'
monkey_sh_path = os.path.join(curdir, 'monkey', monkey_sh_name)
monkey_sh_android_path = os.path.join(config.installdir_android,
                                      test_name, monkey_sh_name)

INSTALL_STEPS_ADB_PRE = [
    'push %s %s ' % (monkey_sh_path, monkey_sh_android_path),
    'shell chmod 777 %s' % monkey_sh_android_path]

ADB_SHELL_STEPS = [monkey_sh_android_path]
#PATTERN = "^(?P<test_case_id>\w+):\W+(?P<measurement>\d+.\d+)"
PATTERN = "## Network stats: elapsed time=(?P<measurement>\d+)ms"
FAILURE_PATTERNS = []
#FAILURE_PATTERNS = ['** Monkey aborted due to error.',
#                    '** System appears to have crashed']

inst = lava_android_test.testdef.AndroidTestInstaller(
    steps_adb_pre=INSTALL_STEPS_ADB_PRE)
run = lava_android_test.testdef.AndroidTestRunner(
    adbshell_steps=ADB_SHELL_STEPS)
parser = lava_android_test.testdef.AndroidTestParser(
    PATTERN,
    appendall={'units': 'ms'},
    failure_patterns=FAILURE_PATTERNS)
testobj = lava_android_test.testdef.AndroidTest(
    testname=test_name, installer=inst, runner=run, parser=parser)

Which calls:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
#!/system/bin/sh
#monkey_cmd="monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 2147483647"
monkey_cmd="monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 500"
echo execute command=${monkey_cmd}
${monkey_cmd}
echo MONKEY_RET_CODE=$?
...so to run a specific instance of monkey:
monkey -s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 --pct-majornav 30 --pct-appswitch 20 --throttle 500 2147483647
I must write 20 lines of code.
Let's say I want to run monkey N different ways; I can create:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... ...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
...or I could write something that allows me to pass the command line I want for monkey through the JSON binding.
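Something shaped like this, say (the "option" field is imaginary - it's the part I'd have to write):

{
    "command": "lava_android_test_run",
    "parameters": {
        "test_name": "monkey",
        "option": "-s 1 --pct-touch 10 --pct-appswitch 20 --throttle 500 500"
    }
}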
Let's say I want to run monkey in N ways for target A, M ways for target B, and so on...
I won't want to encode all of these as:
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/... ...
http://bazaar.launchpad.net/~linaro-validation/lava-android-test/trunk/view/...
I really just want to pass a file in that says:
monkey1
monkey2
monkey3
monkey4
Then all you need is one monkey with all the parameters exposed. Everything else will be specified by the dispatcher job. You can specify targets and any parameters.
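For example, a single parameterized definition, roughly like this (MONKEY_OPTS is a made-up mechanism - however the dispatcher ends up handing parameters to the test, the shape would be the same; installer details elided):

import os
import lava_android_test.testdef

test_name = 'monkey'
# All monkey parameters exposed in one place; the job supplies them.
# The current hard-coded values stay as the default.
monkey_opts = os.environ.get(
    'MONKEY_OPTS',
    '-s 1 --pct-touch 10 --pct-motion 20 --pct-nav 20 '
    '--pct-majornav 30 --pct-appswitch 20 --throttle 500 500')

ADB_SHELL_STEPS = ['monkey %s' % monkey_opts]
PATTERN = "## Network stats: elapsed time=(?P<measurement>\d+)ms"

run = lava_android_test.testdef.AndroidTestRunner(
    adbshell_steps=ADB_SHELL_STEPS)
parser = lava_android_test.testdef.AndroidTestParser(
    PATTERN, appendall={'units': 'ms'})
testobj = lava_android_test.testdef.AndroidTest(
    testname=test_name, runner=run, parser=parser)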
monkey is just an example of a command. The commands could be anything in the command file:
If you want to run random shell commands on the validation servers then, really, think again. We're not ready for that kind of freedom. If you want to run shell commands on your target then we _can_ support that easily, sanely and without much fuss.
command1 param1
command2 param2
etc...
You are attempting to solve a non-issue. We have everything needed to support your use case now.
I don't think so, based on the feedback Paul's given me. My issue is: run through a set of commands that may individually fail, and if one fails, reboot the unit and keep going with the next command, recording the output from all runs.
Right, that looks like two things to me:
1) Standard dispatcher job with all the stuff we have now
2) Support for running a shell command on the target with a new flag,
"reboot_on_failure", that boots you back into the same image.
Everything else is already there: tracking output, storing it, giving you a nice look into the process.
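As a sketch of what that job fragment could look like (every name here except "reboot_on_failure" is invented, we'd have to pick the real ones):

{
    "command": "run_target_shell_commands",
    "parameters": {
        "commands_file": "device/linaro/commands.txt",
        "reboot_on_failure": true
    }
}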
I don't intend to say "meh, go away, your problems are imaginary". I'm trying to say "sure, this is easy". Based on what you've said, we can support that with a minor extension (I don't track all Android development, so maybe running arbitrary shell commands on the target is already supported).
I hope we can clearly scope this and get it done this cycle. Thanks ZK
...and have LAVA execute that.
So that's generally doable. I'll write my 20 lines to call a test entry point that passes a known good file.
But each one of the commands I pass in will crash the unit. I want to run them back-to-back because I want to see, for each command, how long the unit ran.
Now I could just have the command file that I run, that I wrote the 20 lines for, simply end on a test that I think will crash, but I'll never know for sure. To save me from guessing, the "tester" simply picks back up where it left off.
From a LAVA perspective this would probably come down to:
program_the_unit
current_command = 0

boot_or_reboot:
    current_command = next_command
    running_test = 1
    while (running_test and current_command) {
        run_command(current_command)
        current_command = next_command
    }
    running_test = 0
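In Python terms, roughly (all the names here are invented; the one real point is that the current index has to live somewhere that survives a reboot):

import os

STATE_FILE = '/data/local/tmp/current_command'  # persists across reboots

def run_command_file(commands):
    # Resume from the index recorded before the hang/reboot, if any.
    start = 0
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            start = int(f.read().strip())
    for index in range(start, len(commands)):
        # Record the *next* index first, so a hang or reboot during this
        # command makes us resume one past it, not retry it forever.
        with open(STATE_FILE, 'w') as f:
            f.write(str(index + 1))
        os.system(commands[index])  # stand-in for run_command()
    if os.path.exists(STATE_FILE):
        os.remove(STATE_FILE)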
So we can write the lava-android-test that runs a command file that's resident in the build, but I'm still stuck, because if one of the commands hangs, the test will be marked "over" and the unit will be reset, reprogrammed, and the next lava-android-test run. So I'll need to write something that collects up all the runs and does the analysis - or LAVA could execute a test script, collect all the logs, and send them back. It could even accept a parser that would be able to parse all the logs from all the runs and produce lots of nice statistics.
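Something along these lines, say (the file layout is invented; the regex is the one from the monkey test definition above):

import re
from glob import glob

PATTERN = re.compile(r"## Network stats: elapsed time=(?P<measurement>\d+)ms")

def elapsed_times(log_dir):
    # Collect the elapsed-time measurement from every run's log.
    times = []
    for path in glob('%s/*.log' % log_dir):
        with open(path) as f:
            for line in f:
                match = PATTERN.search(line)
                if match:
                    times.append(int(match.group('measurement')))
    return times

times = elapsed_times('logs')
if times:
    print('runs: %d, total: %dms, mean: %dms' % (
        len(times), sum(times), sum(times) / len(times)))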
In the end, we will need to self-host all the tests we run, as per George's request. A set of command files and parsers included with each build would allow me to satisfy that requirement.
Thanks, Paul Larson
On Mar 9, 2012 11:40 AM, "Zach Pfeffer" <zach.pfeffer@linaro.org> wrote:
On 9 March 2012 07:33, Paul Larson <paul.larson@linaro.org> wrote:
On Fri, Mar 9, 2012 at 7:20 AM, Alexander Sack <asac@linaro.org> wrote:
>
> On Fri, Mar 9, 2012 at 8:05 AM, Paul Larson <paul.larson@linaro.org> wrote:
>>
>> On Fri, Mar 9, 2012 at 12:16 AM, Zach Pfeffer <zach.pfeffer@linaro.org> wrote:
>>>
>>> On 8 March 2012 23:44, Paul Larson <paul.larson@linaro.org> wrote:
>>>>
>>>> On Thu, Mar 8, 2012 at 4:07 PM, Zach Pfeffer <zach.pfeffer@linaro.org> wrote:
>>>>>
>>>>> Right. If a command in the command log causes the unit-under-test to
>>>>> do any of those things, then the unit should be rebooted (in the case
>>>>> of a hang) or the reboot should be sensed (in case the command caused
>>>>> a reboot) and when the unit boots back up, LAVA would continue the
>>>>> test on the command after the one that caused it to hang, reboot,
>>>>> freeze, etc. LAVA should save the logs for the entire command file
>>>>> run, including all hangs and reboots.
>>>>
>>>> As mentioned when we talked about this on the phone, I don't think
>>>> this is the best way to approach the problem. Currently, if a test
>>>> hangs, times out, or reboots improperly (and thus times out because it
>>>> won't get to the expected place), lava will reboot the system back
>>>> into the test image. However, at this point, it will mark the previous
>>>> testsuite it tried to run as a failed run. Then, if there are other
>>>> tests queued up to run, it will continue running with the next one -
>>>> NOT try to re-enter the existing test and continue where it left off.
>>>> This is not a capability currently supported in lava-android-test.
>>>>
>>>> The good news is, I think there's a much more straightforward way to
>>>> do this. I haven't really seen an example of the kinds of things you
>>>> want to run, but it seems that it's just a list of several tests,
>>>> possibly with multiple steps in each test. And if it reboots, or has a
>>>> problem, you want it to fail that one and continue to the next. This
>>>> is very simple to do, if you simply define each of those tests as
>>>> lava-android-tests. Then you can run them all in your test job, and I
>>>> think it will do exactly what you are looking for here. Also, they can
>>>> then be easily reused in other test jobs.
>>>
>>> Hmm...
>>>
>>> Here's what I'd like to do.
>>>
>>> I'd like to pass a simple command file to LAVA. I'd like LAVA to
>>> execute each test and if that test hangs or causes a reboot I'd like
>>> LAVA to pick back up at the next test case. I'd like LAVA to collect
>>> all the logs and send them either to LAVA or back to us for post
>>> processing.
>>>
>>> I'd like to do this, because I'd like to be able to include these
>>> command files in our builds so that people can run them themselves and
>>> include the post-processing commands for people to see what passed and
>>> failed.
>>>
>>> The text file also gives me a very easy way to add and remove tests on
>>> a per-build basis since it goes along with the build.
>>
>> I get that, but I don't see why you can't have it both ways. If you
>> want a simple set of instructions for someone to manually type at a
>> command line to run through the tests, there's nothing preventing you
>> from including that in a readme, script, or however you want to do
>> that. But those tests described in there would be really nice to
>> convert into reusable tests in lava. Once you have those building
>> blocks, you can arrange them in future builds however you like.
>
> I think what Zach is saying here is that he wants to use a single file
> to maintain the list of tests to run for a build. That file is then
> sourced both by a tool used by developers for local testing as well as
> by LAVA.
>
> He most likely also would want the local tool used by devs to be a
> standard solution provided by the lava test framework...

I know, but what I'm seeing here doesn't look like something that's possible right now, and I think it might take some pretty serious effort and re-engineering of how lava works around this particular use case to make it work. What I'm trying to show is that there's more than one way to think about this, and that with a slight change it goes from something that we need to discuss at the connect and get some effort behind making happen, to something that he can have today. And at the same time, doing it in this alternative way builds up a reusable set of android tests that can be applied to future builds.
Paul,
Your proposal is to create a lava-android-test for each command I would put in a command file, right?
Thanks, Paul Larson
--
Zach Pfeffer
Android Platform Team Lead, Linaro Platform Teams
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro -
http://twitter.com/#%21/linaroorg - http://www.linaro.org/linaro-blog
--
Zygmunt Krynicki
Linaro Validation Team