I was recently making some additions to the lava-test-shell code and in the process more or less had to write some documentation on how things work. Here is what I wrote (actually what I wrote includes stuff about test result attachments but that's not in trunk yet):
# LAVA Test Shell implementation details
# ======================================
#
# The idea of lava-test-shell is that a YAML test definition is "compiled"
# into a job that is run when the device under test boots; the output of
# this job is then retrieved, analyzed and turned into a bundle of results.
#
# In practice, this means a hierarchy of directories and files is created
# during test installation, a sub-hierarchy is created during execution to
# hold the results, and this whole lot is poked at on the host during
# analysis.
#
# On Ubuntu and OpenEmbedded, the hierarchy is rooted at /lava. / is mounted
# read-only on Android, so there we root the hierarchy at /data/lava. I'll
# assume Ubuntu paths from here for simplicity.
#
# The directory tree that is created during installation looks like this:
#
# /lava/
#    bin/                      This directory is put on the path when the
#                              test code is running -- these binaries can
#                              be viewed as a sort of device-side "API"
#                              for test authors.
#       lava-test-runner       The job that runs the tests on boot.
#       lava-test-shell        A helper to run a test suite.
#    tests/
#       ${IDX}_${TEST_ID}/     One directory per test to be executed.
#          testdef.yml         The test definition.
#          install.sh          The install steps.
#          run.sh              The run steps.
#          [repos]             The test definition can specify bzr or git
#                              repositories to clone into this directory.
#
# In addition, a file /etc/lava-test-runner.conf is created containing the
# names of the directories in /lava/tests/ to execute.
#
# During execution, the following files are created:
#
# /lava/
#    results/
#       cpuinfo.txt            Hardware info.
#       meminfo.txt            Ditto.
#       build.txt              Software info.
#       pkgs.txt               Ditto.
#       ${IDX}_${TEST_ID}-${TIMESTAMP}/
#          testdef.yml         Attached to the test run in the bundle for
#                              archival purposes.
#          install.sh          Ditto.
#          run.sh              Ditto.
#          stdout.log          The standard output of run.sh.
#          stderr.log          The standard error of run.sh (actually not
#                              created currently).
#          return_code         The exit code of run.sh.
#
# After the test run has completed, the /lava/results directory is pulled over
# to the host and turned into a bundle for submission to the dashboard.
Now clearly the work Senthil is doing on getting testdefs from a repo is going to make some changes to the files that get created during installation (I had envisioned that the repo that the job specifies would be cloned to /lava/tests/${IDX}_${TEST_ID} but that might not be completely safe -- if the repo contains a file called run.sh we don't want to clobber that! Not sure what to do here).
But the tweaks I want to propose are more to do with what goes into the /lava/bin and /lava/results directories. For a start, I don't think it's especially useful that the tests run with lava-test-runner or lava-test-shell on $PATH -- there's no reason for a test author to call either of those functions! However I want to add helpers -- called lava-test-case and lava-test-case-attach in my brain currently:
* lava-test-case will send the test case started signal, run a shell command,
  interpret its exit code as a test result and send the test case stopped
  signal
* lava-test-case-attach arranges to attach a file to the test result for a
  particular test case id
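A rough sketch of what the first helper could look like as a POSIX shell function (the `<LAVA_SIGNAL_...>` marker strings and the exact behaviour are assumptions for illustration, not the actual implementation):

```shell
# Hypothetical sketch of lava-test-case: emit a start marker, run the
# given command, turn its exit status into a pass/fail result, then
# emit a stop marker. The marker strings are invented here -- the real
# wire format would be whatever the host-side dispatcher agrees to
# parse out of the serial stream.
lava_test_case() {
    tc_id="$1"
    shift
    echo "<LAVA_SIGNAL_STARTTC $tc_id>"
    if "$@"; then
        result=pass
    else
        result=fail
    fi
    echo "<LAVA_SIGNAL_ENDTC $tc_id>"
    # Report the outcome on stdout; a fuller version would also write
    # it into the results directory.
    echo "$tc_id: $result"
}
```

So `lava_test_case sine-wave aplay generated.wav` would report pass or fail according to aplay's exit code, bracketed by the start/stop signals.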
So you could imagine some run steps for an audio capture test:
run:
    steps:
        - lava-fft -t 3 > generated.wav
        - lava-test-case-attach sine-wave generated.wav audio/wav
        - lava-test-case sine-wave aplay generated.wav
and appropriate hooks on the host side:
* test case start/stop hooks that would capture audio
* an "analysis hook" that would compare the captured sample with the attached
  wav file (and attach the captured wav)
Semi-relatedly, I would like to (at least optionally) store the test result data more explicitly in the /lava/results/${IDX}_${TEST_ID}-${TIMESTAMP} directory. Maybe something like this (in the above style):
# /lava/
#    results/
#       ...                       As before.
#       ${IDX}_${TEST_ID}-${TIMESTAMP}/
#          ...                    All the stuff we had before.
#          cases/
#             ${TEST_CASE_ID}/
#                result           This would contain pass/fail/skip/unknown.
#                units            Mb/s, V, W, Hz, whatever.
#                measurement      Self explanatory I expect.
#                attachments/
#                   ${FILENAME}           The content of the attachment.
#                   ${FILENAME}.mimetype  The mime-type for the attachment.
#                attributes/
#                   ${KEY}        The content of the file would be the
#                                 value of the attribute.
#                ... other things you can stick on test results ...
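A device-side helper could then record a result just by writing small files into that layout. A minimal sketch, assuming the tree above (the helper name and argument order are made up):

```shell
# Hypothetical helper that records one test case in the proposed
# on-disk layout. $1 is the run's results directory (i.e. something
# like /lava/results/${IDX}_${TEST_ID}-${TIMESTAMP}), passed in
# explicitly so the sketch is self-contained.
record_test_case() {
    result_dir="$1"; tc_id="$2"; result="$3"
    measurement="$4"; units="$5"
    case_dir="$result_dir/cases/$tc_id"
    mkdir -p "$case_dir"
    echo "$result" > "$case_dir/result"
    # measurement and units are optional, matching the layout where
    # a plain pass/fail case has only the result file.
    if [ -n "$measurement" ]; then
        echo "$measurement" > "$case_dir/measurement"
    fi
    if [ -n "$units" ]; then
        echo "$units" > "$case_dir/units"
    fi
}
```

e.g. `record_test_case "$RESULT_DIR" net-throughput pass 42.5 Mb/s` would leave result, measurement and units files under cases/net-throughput/.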
Basically this would be defining a third representation for test results -- on the file system -- in addition to the existing two: as JSON and in a Postgres DB.
The reason for doing this is twofold: 1) it's more amenable than JSON to being incrementally built up by a bunch of shell scripts as a lava-test-shell test runs, and 2) this directory could be presented to an "analysis hook" written in shell (earlier today I told Andy Doan that I thought writing hooks in shell would be impractical; now I'm not so sure). Also: 3) (no one expects the Spanish Inquisition!) it would allow us to write a lava-test-shell test that does not depend on parsing stdout.log.
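To make point 3 concrete: instead of parsing stdout.log, a host-side analysis step could just walk the cases/ tree. A sketch under the layout proposed above (none of this is existing LAVA code; the function name is invented):

```shell
# Hypothetical host-side sketch: summarize a test run by walking the
# proposed cases/ directory rather than parsing stdout.log. Prints
# one line per test case, with measurement and units when present.
summarize_cases() {
    cases_dir="$1"
    for case_dir in "$cases_dir"/*/; do
        tc_id=$(basename "$case_dir")
        result=$(cat "$case_dir/result")
        if [ -f "$case_dir/measurement" ]; then
            measurement=$(cat "$case_dir/measurement")
            units=$(cat "$case_dir/units" 2>/dev/null || true)
            echo "$tc_id: $result ($measurement $units)"
        else
            echo "$tc_id: $result"
        fi
    done
}
```

A real analysis hook would of course do more than print, but the point is that every value is already a file, so shell can get at it directly.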
Apologies for the long ramble!
Cheers, mwh
On 11/15/2012 08:37 PM, Michael Hudson-Doyle wrote:
I was recently making some additions to the lava-test-shell code and in the process more or less had to write some documentation on how things work. Here is what I wrote (actually what I wrote includes stuff about test result attachments but that's not in trunk yet):
[snip] I'm embarrassed for not doing that in my original commit. Thanks.
But the tweaks I want to propose are more to do with what goes into the /lava/bin and /lava/results directories. For a start, I don't think it's especially useful that the tests run with lava-test-runner or lava-test-shell on $PATH -- there's no reason for a test author to call
It's mostly harmless. I didn't intend developers to use those two scripts. I just thought it might make it easier to not have to call out the full path to them in our own code.
either of those functions! However I want to add helpers -- called lava-test-case and lava-test-case-attach in my brain currently:
- lava-test-case will send the test case started signal, run a shell command, interpret its exit code as a test result and send the test case stopped signal
- lava-test-case-attach arranges to attach a file to the test result for a particular test case id
So you could imagine some run steps for an audio capture test:
run:
    steps:
        - lava-fft -t 3 > generated.wav
        - lava-test-case-attach sine-wave generated.wav audio/wav
        - lava-test-case sine-wave aplay generated.wav
and appropriate hooks on the host side:
- test case start/stop hooks that would capture audio
- an "analysis hook" that would compare the captured sample with the attached wav file (and attach the captured wav)
+1 - and I liked your branch that does the attachment logic.
Semi-relatedly, I would like to (at least optionally) store the test result data more explicitly in the /lava/results/${IDX}_${TEST_ID}-${TIMESTAMP} directory. Maybe something like this (in the above style):
# /lava/
#    results/
#       ...                       As before.
#       ${IDX}_${TEST_ID}-${TIMESTAMP}/
#          ...                    All the stuff we had before.
#          cases/
#             ${TEST_CASE_ID}/
#                result           This would contain pass/fail/skip/unknown.
#                units            Mb/s, V, W, Hz, whatever.
#                measurement      Self explanatory I expect.
#                attachments/
#                   ${FILENAME}           The content of the attachment.
#                   ${FILENAME}.mimetype  The mime-type for the attachment.
#                attributes/
#                   ${KEY}        The content of the file would be the
#                                 value of the attribute.
#                ... other things you can stick on test results ...
Basically this would be defining a third representation for test results -- on the file system -- in addition to the existing two: as JSON and in a Postgres DB.
The reason for doing this is twofold: 1) it's more amenable than JSON to being incrementally built up by a bunch of shell scripts as a lava-test-shell test runs, and 2) this directory could be presented to an "analysis hook" written in shell (earlier today I told Andy Doan that I thought writing hooks in shell would be impractical; now I'm not so sure). Also: 3) (no one expects the Spanish Inquisition!) it would allow us to write a lava-test-shell test that does not depend on parsing stdout.log.
This sounds good, but I worry how it plays out. Could you elaborate a little on how you think a person would write such a test? i.e. it feels like we are on the path to becoming not only a test harness but also a test framework. I guess with the implication of signals and such, we have to become more of a framework, so my worry might be unavoidable.
Andy Doan andy.doan@linaro.org writes:
On 11/15/2012 08:37 PM, Michael Hudson-Doyle wrote:
I was recently making some additions to the lava-test-shell code and in the process more or less had to write some documentation on how things work. Here is what I wrote (actually what I wrote includes stuff about test result attachments but that's not in trunk yet):
[snip] I'm embarrassed for not doing that in my original commit. Thanks.
But the tweaks I want to propose are more to do with what goes into the /lava/bin and /lava/results directories. For a start, I don't think it's especially useful that the tests run with lava-test-runner or lava-test-shell on $PATH -- there's no reason for a test author to call
It's mostly harmless. I didn't intend developers to use those two scripts. I just thought it might make it easier to not have to call out the full path to them in our own code.
Yeah true. I should probably just not worry about it!
either of those functions! However I want to add helpers -- called lava-test-case and lava-test-case-attach in my brain currently:
- lava-test-case will send the test case started signal, run a shell command, interpret its exit code as a test result and send the test case stopped signal
- lava-test-case-attach arranges to attach a file to the test result for a particular test case id
So you could imagine some run steps for an audio capture test:
run:
    steps:
        - lava-fft -t 3 > generated.wav
        - lava-test-case-attach sine-wave generated.wav audio/wav
        - lava-test-case sine-wave aplay generated.wav
and appropriate hooks on the host side:
- test case start/stop hooks that would capture audio
- an "analysis hook" that would compare the captured sample with the attached wav file (and attach the captured wav)
+1 - and I liked your branch that does the attachment logic.
Cool.
Semi-relatedly, I would like to (at least optionally) store the test result data more explicitly in the /lava/results/${IDX}_${TEST_ID}-${TIMESTAMP} directory. Maybe something like this (in the above style):
# /lava/
#    results/
#       ...                       As before.
#       ${IDX}_${TEST_ID}-${TIMESTAMP}/
#          ...                    All the stuff we had before.
#          cases/
#             ${TEST_CASE_ID}/
#                result           This would contain pass/fail/skip/unknown.
#                units            Mb/s, V, W, Hz, whatever.
#                measurement      Self explanatory I expect.
#                attachments/
#                   ${FILENAME}           The content of the attachment.
#                   ${FILENAME}.mimetype  The mime-type for the attachment.
#                attributes/
#                   ${KEY}        The content of the file would be the
#                                 value of the attribute.
#                ... other things you can stick on test results ...
Basically this would be defining a third representation for test results -- on the file system -- in addition to the existing two: as JSON and in a Postgres DB.
The reason for doing this is twofold: 1) it's more amenable than JSON to being incrementally built up by a bunch of shell scripts as a lava-test-shell test runs, and 2) this directory could be presented to an "analysis hook" written in shell (earlier today I told Andy Doan that I thought writing hooks in shell would be impractical; now I'm not so sure). Also: 3) (no one expects the Spanish Inquisition!) it would allow us to write a lava-test-shell test that does not depend on parsing stdout.log.
This sounds good, but I worry how it plays out. Could you elaborate a little on how you think a person would write such a test?
OK, how about
http://people.linaro.org/~mwh/lava-test-case-doc/lava_test_shell.html#writin...
? This is built from a branch I've been working on today (the name is probably not quite right any more):
https://code.launchpad.net/~mwhudson/lava-dispatcher/more-obvious-json-disk-...
The idea (likely obvious to you, but probably not to anyone else) is that lava-test-case --shell will send start/stop test case signals to the host. I'd hoped to get a version of that done today, but it's getting a bit late now...
i.e. it feels like we are on the path to becoming not only a test harness but also a test framework. I guess with the implication of signals and such, we have to become more of a framework, so my worry might be unavoidable.
Yes, I think so. We can't rely on parsing output after the fact to signal to the host when a test case starts and stops...
Cheers, mwh
Michael Hudson-Doyle michael.hudson@linaro.org writes:
This sounds good, but I worry how it plays out. Could you elaborate a little on how you think a person would write such a test?
OK, how about
http://people.linaro.org/~mwh/lava-test-case-doc/lava_test_shell.html#writing-a-test-for-lava-test-shell
? This is built from a branch I've been working on today (the name is probably not quite right any more):
https://code.launchpad.net/~mwhudson/lava-dispatcher/more-obvious-json-disk-bundle-equivalence
The idea (likely obvious to you, but probably not to anyone else) is that lava-test-case --shell will send start/stop test case signals to the host. I'd hoped to get a version of that done today, but it's getting a bit late now...
This is what I expect a signals-using test to look like fwiw: http://bazaar.launchpad.net/~mwhudson/+junk/audio-test/files
Tomorrow I shall be trying to make this actually work :-)
Cheers, mwh
On 11/18/2012 09:28 PM, Michael Hudson-Doyle wrote:
Michael Hudson-Doyle michael.hudson@linaro.org writes:
This sounds good, but I worry how it plays out. Could you elaborate a little on how you think a person would write such a test?
OK, how about
http://people.linaro.org/~mwh/lava-test-case-doc/lava_test_shell.html#writing-a-test-for-lava-test-shell
? This is built from a branch I've been working on today (the name is probably not quite right any more):
https://code.launchpad.net/~mwhudson/lava-dispatcher/more-obvious-json-disk-bundle-equivalence
The idea (likely obvious to you, but probably not to anyone else) is that lava-test-case --shell will send start/stop test case signals to the host. I'd hoped to get a version of that done today, but it's getting a bit late now...
This is what I expect a signals-using test to look like fwiw: http://bazaar.launchpad.net/~mwhudson/+junk/audio-test/files
Tomorrow I shall be trying to make this actually work :-)
I haven't looked at the code, but the mechanics of this in the YAML/test-repo seem sensible to me.