Zygmunt Krynicki zygmunt.krynicki@linaro.org writes:
On 18.10.2012 03:45, Michael Hudson-Doyle wrote:
Zygmunt Krynicki zygmunt.krynicki@linaro.org writes:
On 17.10.2012 19:11, Andy Doan wrote:
On 10/16/2012 07:47 PM, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It feels good just to see this all listed concisely.
I think we probably need one other section in the page like "testdef organization". This would describe any rules we have for filenames or directory hierarchies needed by the git/bzr testdef repo, so that we'll know how to import things.
Incidentally that's something we may collaborate on.
Yeah, so how does checkbox deal with this? I guess it doesn't quite have the concept of remote users submitting requests that jobs be run? (i.e. checkbox is more dispatcher than scheduler in LAVA terminology).
We have largely the same problem but in different context (there are different internal users).
Checkbox has the concept of "whitelists" which basically specify the test scenario. Each item in the whitelist is a "job" (full test definition) that can use various checkbox "plugins" (like shell, manual and many others that I'm not familiar with). Checkbox then transforms the whitelist (resolving dependencies and things like that) and executes the tests much like dispatcher would.
I see.
There are several use cases that are currently broken
Such as?
(as downstream users use checkbox to do their specialized testing) and we're looking for solutions that could help us solve that. We have not really made up our minds, as the problem is as much social as technical and no patch will magically fix both sides.
Sure.
One of the proposals would be to build a pypi-like directory of tests and use that as a base for namespacing (first-come first-served name allocation). I'm not entirely sure this would help to solve the problem but it's something that, if available, could give us another vector.
Hm. This is definitely an interesting idea. I had actually already thought that using user specified distutils- or debian-style versioning would make sense -- you would get the latest version by the chosen algorithm by default, but could still upload revisions of old versions if you wanted to.
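For illustration only, that kind of "latest version by the chosen algorithm" ordering could be sketched as below. The chunking rule is a deliberate simplification of distutils-style comparison (the name parse_version is mine, not from any of the tools discussed), and it does not handle pre-release-vs-final ordering such as 1.0b3 sorting before a plain 1.0:

```python
import re

def parse_version(v):
    """Split a version like '1.0b3' into a comparable tuple.

    Numeric chunks compare numerically; alphabetic chunks (like the
    'b' in a beta tag) are made to sort before numbers by a type tag.
    Simplified sketch only -- '1.0' vs '1.0b3' is not handled correctly.
    """
    parts = []
    for chunk in re.findall(r"\d+|[a-z]+", v.lower()):
        if chunk.isdigit():
            parts.append((1, int(chunk)))
        else:
            parts.append((0, chunk))
    return tuple(parts)
```

The point is just that 1.0b3 sorts before 1.0b4, so "give me the latest" stays well defined while old versions remain uploadable.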
Part of this would be a command line tool for fetching / publishing test definitions I guess. In fact this could almost be the main thing: it depends whether you want to produce (and host, I guess) a single site which is the centrepoint of the test definition world (like pypi.python.org is for Python stuff) or just the tools / protocols people use to run and work with their own repositories (testdef.validation.linaro.org or testdef.qa.ubuntu.com or whatever).
I think that, as with pypi, even if there is a "single centrepoint of the test definition world", we should expect that sites will have local test repositories for one reason and another (as they do with pypi).
Another way to handle namespacing is to include the name of the user / group that can update a resource in its name, ala branches on LP or repos on github (or bundle streams in LAVA). Not sure if that's a good idea for our use case or not.
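A minimal sketch of what such owner-qualified names might look like, assuming an LP-style "~owner/name" spelling; the regex, the function name, and the allowed character set are all illustrative assumptions, not an agreed format:

```python
import re

# Hypothetical "~owner/name" convention borrowed from Launchpad branches.
OWNED_NAME = re.compile(
    r"^~(?P<owner>[a-z0-9][a-z0-9-]*)/(?P<name>[a-z0-9][a-z0-9-]*)$"
)

def parse_owned_name(s):
    """Split '~mwh/stream' into ('mwh', 'stream'); reject anything else."""
    m = OWNED_NAME.match(s)
    if m is None:
        raise ValueError(f"not an owner-qualified name: {s!r}")
    return m.group("owner"), m.group("name")
```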
I wonder if checkbox's rfc822ish format would be better than JSON for test interchange...
Probably, although it's still imperfect and suffers from poor support for binary data.
What I'd like to see in practice is a free-for-all web service that can hold test metadata. I believe that as we go, test metadata will formalize, and at some point it may become possible to run a lava-test test from checkbox and a checkbox job in LAVA (given appropriate adapters on both sides) merely by specifying the name of the test.
So that's an argument for aiming for a single site? Maybe. Maybe you'd just give a URL of a testdef rather than the name of a test, so http://testdef.validation.linaro.org/stream rather than just 'stream'.
Initially it could be a simple RESTful interface based on a dumb HTTP server serving files from a tree structure.
And then it could grow wiki-like features? :-)
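The client side of such a dumb tree-serving repository is tiny; a sketch assuming a /name/version URL layout (the layout, the base URL, and the function name are all assumptions, not an agreed protocol):

```python
# Client for a "dumb HTTP server serving files from a tree structure".
# Assumed (not agreed) layout: <base_url>/<name>/<version> holds the testdef.
import urllib.request

def fetch_testdef(base_url, name, version):
    """GET one testdef file from a tree-structured repository."""
    with urllib.request.urlopen(f"{base_url}/{name}/{version}") as resp:
        return resp.read().decode("utf-8")
```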
One of the user stories we have is "which tests are available to run on board X with Y deployed to it?" -- if we use test repositories that are entirely disconnected from the LAVA database I think this becomes a bit harder to answer. Although one could make searching a required feature of a test repository...
This would allow us to try moving some of the experimental meta-data there and build the client parts. If the idea gains traction it could grow from there.
Some considerations:
- Some tests have to be private. I don't know how to solve that with
namespaces. One idea that comes to mind is a .private. namespace that is explicitly non-global and can be provided by a local "test definition repository"
That would work, I think.
- It should probably be schema-free, serving simple rfc822 files with
Python-like classifiers (Test::Platform::Android, anyone?), as this will allow free experimentation
FWIW, I think they're pedantically called "trove classifiers" :-)
I guess there would be two mandatory fields: name and version. And maybe format? So you could have
Name: stream
Version: 1.0b3
Format: LAVA testdef version 1.3
...
and everything else would only need to make sense to LAVA.
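For what it's worth, the stdlib already parses this rfc822-ish shape; a sketch using the mandatory fields above (the Description line is my own illustration of an "everything else" field, not part of the proposal):

```python
# rfc822-style testdefs parse with the stdlib email parser.
from email import message_from_string

TESTDEF = """\
Name: stream
Version: 1.0b3
Format: LAVA testdef version 1.3
Description: hypothetical extra field, only meaningful to LAVA
"""

testdef = message_from_string(TESTDEF)
```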
Then you would say client side:
$ testdef-get lava-stream
Fetched lava-stream version 1.0b3
$ vi lava-stream.txt   # update stuff
$ testdef-push lava-stream.txt
ERROR: lava-stream version 1.0b3 already exists on server
$ vi lava-stream.txt   # Oops, update version
$ testdef-push lava-stream.txt
Uploaded lava-stream version 1.0b4
- It should (must?) have pypi-like version support so that a test can
be updated but the old definition is never lost.
Must, imho. I guess support for explicitly removing a version would be good, but the default should be append-only.
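The append-only rule is small enough to sketch server-side; the on-disk name/version layout and every name here are assumptions for illustration, not LAVA's actual storage:

```python
# Append-only testdef store: pushing an existing version is an error,
# so old definitions are never silently overwritten or lost.
import os

class VersionExists(Exception):
    pass

def push_testdef(root, name, version, content):
    """Store a testdef under <root>/<name>/<version>, refusing overwrites."""
    path = os.path.join(root, name, version)
    if os.path.exists(path):
        raise VersionExists(f"{name} version {version} already exists on server")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(content)
```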
- It probably does not have to be the download server as anyone can
host tests themselves. Just meta-data would be kept there.
By metadata you mean the key-value data as listed above, right?
(For small tests that may be enough but I can envision tests with external code and resources)
Yeah, the way lava-test tests can specify URLs and bzr and git repos to be fetched needs to stay I think.
Cheers,
mwh