We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
Cheers, mwh
On 10/16/2012 07:47 PM, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It feels good just to see this all listed concisely.
I think we probably need one other section in the page like "testdef organization". This would describe any rules we have for filenames or directory hierarchies needed by the git/bzr testdef repo so that we'll know how to import things.
On 17.10.2012 19:11, Andy Doan wrote:
On 10/16/2012 07:47 PM, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It feels good just to see this all listed concisely.
I think we probably need one other section in the page like "testdef organization". This would describe any rules we have for filenames or directory hierarchies needed by the git/bzr testdef repo so that we'll know how to import things.
Incidentally that's something we may collaborate on. How can I participate in that meeting?
Thanks Zygmunt
On 10/17/2012 12:57 PM, Zygmunt Krynicki wrote:
On 17.10.2012 19:11, Andy Doan wrote:
On 10/16/2012 07:47 PM, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It feels good just to see this all listed concisely.
I think we probably need one other section in the page like "testdef organization". This would describe any rules we have for filenames or directory hierarchies needed by the git/bzr testdef repo so that we'll know how to import things.
Incidentally that's something we may collaborate on. How can I participate in that meeting?
http://summit.linaro.org/lce12/meeting/21203/test-case-management/
Zygmunt Krynicki zygmunt.krynicki@linaro.org writes:
On 17.10.2012 19:11, Andy Doan wrote:
On 10/16/2012 07:47 PM, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It feels good just to see this all listed concisely.
I think we probably need one other section in the page like "testdef organization". This would describe any rules we have for filenames or directory hierarchies needed by the git/bzr testdef repo so that we'll know how to import things.
Incidentally that's something we may collaborate on.
Yeah, so how does checkbox deal with this? I guess it doesn't quite have the concept of remote users submitting requests for jobs to be run? (i.e. checkbox is more dispatcher than scheduler in lava terminology).
I wonder if checkbox's rfc822ish format would be better than JSON for test interchange...
How can I participate in that meeting?
I imagine the google+ arrangements will be similar.
Cheers, mwh
On 18.10.2012 03:45, Michael Hudson-Doyle wrote:
Zygmunt Krynicki zygmunt.krynicki@linaro.org writes:
On 17.10.2012 19:11, Andy Doan wrote:
On 10/16/2012 07:47 PM, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It feels good just to see this all listed concisely.
I think we probably need one other section in the page like "testdef organization". This would describe any rules we have for filenames or directory hierarchies needed by the git/bzr testdef repo so that we'll know how to import things.
Incidentally that's something we may collaborate on.
Yeah, so how does checkbox deal with this? I guess it doesn't quite have the concept of remote users submitting requests for jobs to be run? (i.e. checkbox is more dispatcher than scheduler in lava terminology).
We have largely the same problem but in a different context (there are different internal users).
Checkbox has the concept of "whitelists" which basically specify the test scenario. Each item in the whitelist is a "job" (full test definition) that can use various checkbox "plugins" (like shell, manual and many others that I'm not familiar with). Checkbox then transforms the whitelist (resolving dependencies and things like that) and executes the tests much like the dispatcher would.
There are several use cases that are currently broken (as downstream users use checkbox to do their specialized testing) and we're looking for solutions that could help us solve that. We have not really made up our minds, as the problem is as much technical as social and no patch will magically fix both sides.
One of the proposals would be to build a pypi-like directory of tests and use that as a base for namespacing (first-come first-served name allocation). I'm not entirely sure this would help to solve the problem but it's something that, if available, could give us another vector.
I wonder if checkbox's rfc822ish format would be better than JSON for test interchange...
Probably although it's still imperfect and suffers from binary deficiency.
What I'd like to see in practice is a web service that is free-for-all that can hold test meta data. I believe that as we go test meta data will formalize and at some point it may become possible to run lava-test test from checkbox and checkbox job in lava (given appropriate adapters on both sides) merely by specifying the name of the test.
Initially it could be a simple RESTful interface based on a dumb HTTP server serving files from a tree structure. This would allow us to try moving some of the experimental meta-data there and build the client parts. If the idea gains traction it could grow from there.
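To make that concrete, here is a minimal client-side sketch of what fetching a definition from such a dumb, tree-structured HTTP server could look like; the host name and the /testdefs/<name>/<version> layout are assumptions made up for illustration, not an agreed interface.

    # Minimal sketch only; the base URL and path layout are assumptions for
    # illustration, not part of any existing service.
    import urllib.request

    BASE = "http://testdef.example.org/testdefs"

    def fetch_testdef(name, version):
        # e.g. http://testdef.example.org/testdefs/stream/1.0b3
        url = "%s/%s/%s" % (BASE, name, version)
        with urllib.request.urlopen(url) as response:
            return response.read().decode("utf-8")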
Some considerations:
1) Some tests have to be private. I don't know how to solve that in namespaces. One idea that comes to mind is a .private. namespace that is explicitly non-global and can be provided by a local "test definition repository"
2) It should probably be schema free, serving simple rfc822 files with python-like classifiers (Test::Platform::Android anyone?) as this will allow free experimentation
3) It should (must?) have pypi-like version support so that a test can be updated but the old definition is never lost.
4) It probably does not have to be the download server as anyone can host tests themselves. Just meta-data would be kept there. (For small tests that may be enough but I can envision tests with external code and resources)
How can I participate in that meeting?
I imagine the google+ arrangements will be similar.
I've subscribed to the relevant blueprint now.
Thanks ZK
Zygmunt Krynicki zygmunt.krynicki@linaro.org writes:
On 18.10.2012 03:45, Michael Hudson-Doyle wrote:
Zygmunt Krynicki zygmunt.krynicki@linaro.org writes:
On 17.10.2012 19:11, Andy Doan wrote:
On 10/16/2012 07:47 PM, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It feels good just to see this all listed concisely.
I think we probably need one other section in the page like "testdef organization". This would describe any rules we have for filenames or directory hierarchies needed by the git/bzr testdef repo so that we'll know how to import things.
Incidentally that's something we may collaborate on.
Yeah, so how does checkbox deal with this? I guess it doesn't quite have the concept of remote users submitting requests for jobs to be run? (i.e. checkbox is more dispatcher than scheduler in lava terminology).
We have largely the same problem but in a different context (there are different internal users).
Checkbox has the concept of "whitelists" which basically specify the test scenario. Each item in the whitelist is a "job" (full test definition) that can use various checkbox "plugins" (like shell, manual and many others that I'm not familiar with). Checkbox then transforms the whitelist (resolving dependencies and things like that) and executes the tests much like the dispatcher would.
I see.
There are several use cases that are currently broken
Such as?
(as downstream users use checkbox to do their specialized testing) and we're looking for solutions that could help us solve that. We have not really made up our minds, as the problem is as much technical as social and no patch will magically fix both sides.
Sure.
One of the proposals would be to build a pypi-like directory of tests and use that as a base for namespacing (first-come first-served name allocation). I'm not entirely sure this would help to solve the problem but it's something that, if available, could give us another vector.
Hm. This is definitely an interesting idea. I had actually already thought that using user specified distutils- or debian-style versioning would make sense -- you would get the latest version by the chosen algorithm by default, but could still upload revisions of old versions if you wanted to.
Part of this would be a command line tool for fetching / publishing test definitions I guess. In fact this could almost be the main thing: it depends whether you want to produce (and host, I guess) a single site which is the centrepoint of the test definition world (like pypi.python.org is for Python stuff) or just the tools / protocols people use to run and work with their own repositories (testdef.validation.linaro.org or testdef.qa.ubuntu.com or whatever).
I think that, as with pypi, even if there is a "single centrepoint of the test definition world", we should expect that sites will have local test repositories for one reason and another (as they do with pypi).
Another way to handle namespacing is to include the name of the user / group that can update a resource in its name, ala branches on LP or repos on github (or bundle streams in LAVA). Not sure if that's a good idea for our use case or not.
I wonder if checkbox's rfc822ish format would be better than JSON for test interchange...
Probably although it's still imperfect and suffers from binary deficiency.
What I'd like to see in practice is a web service that is free-for-all that can hold test meta data. I believe that as we go test meta data will formalize and at some point it may become possible to run lava-test test from checkbox and checkbox job in lava (given appropriate adapters on both sides) merely by specifying the name of the test.
So that's an argument for aiming for a single site? Maybe. Maybe you'd just give a URL of a testdef rather than the name of a test, so http://testdef.validation.linaro.org/stream rather than just 'stream'.
Initially it could be a simple RESTful interface based on a dumb HTTP server serving files from a tree structure.
And then could grow wiki like features? :-)
One of the user stories we have is "which tests are available to run on board X with Y deployed to it?" -- if we use test repositories that are entirely disconnected from the LAVA database I think this becomes a bit harder to answer. Although one could make searching a required feature of a test repository...
This would allow us to try moving some of the experimental meta-data there and build the client parts. If the idea gains traction it could grow from there.
Some considerations:
- Some tests have to be private. I don't know how to solve that in namespaces. One idea that comes to mind is a .private. namespace that is explicitly non-global and can be provided by a local "test definition repository"
That would work, I think.
- It should probably be schema free, serving simple rfc822 files with python-like classifiers (Test::Platform::Android anyone?) as this will allow free experimentation
FWIW, I think they're pedantically called "trove classifiers" :-)
I guess there would be two mandatory fields: name and version. And maybe format? So you could have
Name: stream
Version: 1.0b3
Format: LAVA testdef version 1.3
...
and everything else would only need to make sense to LAVA.
Then you would say client side:
$ testdef-get lava-stream
Fetched lava-stream version 1.0b3
$ vi lava-stream.txt # update stuff
$ testdef-push lava-stream.txt
ERROR: lava-stream version 1.0b3 already exists on server
$ vi lava-stream.txt # Oops, update version
$ testdef-push lava-stream.txt
Uploaded lava-stream version 1.0b4
- It should (must?) have pypi-like version support so that a test can be updated but the old definition is never lost.
Must, imho. I guess support for explicitly removing a version would be good, but the default should be append-only.
- It probably does not have to be the download server as anyone can host tests themselves. Just meta-data would be kept there.
By metadata you mean the key-value data as listed above, right?
(For small tests that may be enough but I can envision tests with external code and resources)
Yeah, the way lava-test tests can specify URLs and bzr and git repos to be fetched needs to stay I think.
Cheers, mwh
On 19.10.2012 01:36, Michael Hudson-Doyle wrote:
Incidentally that's something we may collaborate on.
Yeah, so how does checkbox deal with this? I guess it doesn't quite have the concept of remote users submitting requests for jobs to be run? (i.e. checkbox is more dispatcher than scheduler in lava terminology).
We have largely the same problem but in a different context (there are different internal users).
Checkbox has the concept of "whitelists" which basically specify the test scenario. Each item in the whitelist is a "job" (full test definition) that can use various checkbox "plugins" (like shell, manual and many others that I'm not familiar with). Checkbox then transforms the whitelist (resolving dependencies and things like that) and executes the tests much like the dispatcher would.
I see.
There are several use cases that are currently broken
Such as?
From what I recall, mostly around the way upstream/downstream (and sometimes side-stream) relationships work. The actual details are specific to Canonical (I would gladly explain in a private channel if you wish to know more), but the general idea is that without some API stability (and we offer none today) and script stability (which you can think of as another level of API), our downstream users (which are NOT just consumers) have a hard time following our releases.
The second issue, which this addresses more directly, is that tests flow poorly from team to team; to get "stability", people prefer to keep similar/identical tests to themselves (not as in secret, but as in not easily collaborated upon).
One of the proposals would be to build a pypi-like directory of tests and use that as a base for namespacing (first-come first-served name allocation). I'm not entirely sure this would help to solve the problem but it's something that, if available, could give us another vector.
Hm. This is definitely an interesting idea. I had actually already thought that using user specified distutils- or debian-style versioning would make sense -- you would get the latest version by the chosen algorithm by default, but could still upload revisions of old versions if you wanted to.
I'd rather avoid debian-style versions in favor of a strict, constant-length version system. Let's not have a custom postgresql function for comparing versions again ;)
Part of this would be a command line tool for fetching / publishing test definitions I guess. In fact this could almost be the main thing: it depends whether you want to produce (and host, I guess) a single site which is the centrepoint of the test definition world (like pypi.python.org is for Python stuff) or just the tools / protocols people use to run and work with their own repositories (testdef.validation.linaro.org or testdef.qa.ubuntu.com or whatever).
I think that there _should_ be a central repository simply because it means fewer fractures early on. From what I know people don't deploy their own pypi just to host their pet project. They only do that if they depend on the protocols and tools around pypi and want to keep the code private.
I think that, as with pypi, even if there is a "single centrepoint of the test definition world", we should expect that sites will have local test repositories for one reason and another (as they do with pypi).
Having said what I did above, nothing can prevent others from re-implementing the same protocols or deploying their own archive but I think we should encourage working in the common pool as this will improve the ecosystem IMHO (look at easy_install, pip or even crate.io, they would not have happened if there was a competing group of pypi-like systems that have no dominance over others). In other words the value of pypi is the data that is stored there.
Another way to handle namespacing is to include the name of the user / group that can update a resource in its name, ala branches on LP or repos on github (or bundle streams in LAVA). Not sure if that's a good idea for our use case or not.
I thought about one thing that would warrant a ~user/project approach. Both pypi and launchpad are product-centric -- you go to shop for solutions looking for the product name. GitHub on the other hand is developer-centric, as $product can have any number of forks that are equally exposed.
I think for our goals we should focus on product-centric views. The actual code, wherever it exists, should be managed with other tools. I would not like to outgrow this concept to a DVCS or a code hosting tool.
I wonder if checkbox's rfc822ish format would be better than JSON for test interchange...
Probably although it's still imperfect and suffers from binary deficiency.
What I'd like to see in practice is a web service that is free-for-all that can hold test meta data. I believe that as we go test meta data will formalize and at some point it may become possible to run lava-test test from checkbox and checkbox job in lava (given appropriate adapters on both sides) merely by specifying the name of the test.
So that's an argument for aiming for a single site? Maybe. Maybe you'd just give a URL of a testdef rather than the name of a test, so http://testdef.validation.linaro.org/stream rather than just 'stream'.
Imagine pip installing that each time. IMO it's better to stick to names rather than URLs, if we can. People already know how to manage names, whereas URLs are something we can only google for.
The full URL could be usable for some kind of "packages" but that's not the primary scope of the proposal, I think. Packages are more complicated and secondary and the directory should merely point you at something that you can install with an absolute URL.
Initially it could be a simple RESTful interface based on a dumb HTTP server serving files from a tree structure.
And then could grow wiki like features? :-)
I'd rather not go there. IMHO it should only have search and CRUD actions on the content. Anything beyond that works better elsewhere (readthedocs / crate.io). Remember that it's not the 'appstore' experience that we are after here. The goal is to introduce a common component that people can converge and thrive on. This alone may give us better code re-usability as we gain partial visibility to other developers _and_ we fix the release process for test definitions so that people can depend on them indefinitely.
One of the user stories we have is "which tests are available to run on board X with Y deployed to it?" -- if we use test repositories that are entirely disconnected from the LAVA database I think this becomes a bit harder to answer. Although one could make searching a required feature of a test repository...
I think that's something to do in stage 2 as we get a better understanding of what we have. In the end the perfect solution, for LAVA, might be LAVA-specific and we should not sacrifice the generic useful aspects in the quest for something this narrow.
Some simple classifiers that might help there:
Environment::Hardware::SoC::OMAP35xx
Environment::Hardware::Board::Panda Board ES
Environment::Hardware::Add-Ons::Linaro::ABCDXYZ-Power-Probe
Environment::Software::Linaro::Ubuntu Desktop
Environment::Software::Ubuntu::Ubuntu Desktop
But this requires building a sensible taxonomy which is something I don't want to require in the first stage. The important part is to be _able_ to build one as the meta-data format won't constrain you. As we go we can release "official" meta-data spec releases that standardize what certain things mean. This could then be used as a basis for reliable (as in no false positives) and advanced search tools.
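As a toy illustration (my own, not an existing LAVA or checkbox feature) of how free-form classifiers could still back the "which tests run on board X?" question without a fixed taxonomy, something like this would already work over plain metadata:

    # Toy sketch: a made-up in-memory catalogue of testdefs and their
    # classifiers, queried by classifier prefix.
    TESTDEFS = {
        "stream": [
            "Environment::Hardware::Board::Panda Board ES",
            "Environment::Software::Ubuntu::Ubuntu Desktop",
        ],
        "power-probe-smoke": [
            "Environment::Hardware::SoC::OMAP35xx",
            "Environment::Hardware::Add-Ons::Linaro::ABCDXYZ-Power-Probe",
        ],
    }

    def tests_matching(prefix):
        """Return names of tests carrying at least one classifier under prefix."""
        return sorted(
            name
            for name, classifiers in TESTDEFS.items()
            if any(c.startswith(prefix) for c in classifiers)
        )

    print(tests_matching("Environment::Hardware::Board::Panda"))  # ['stream']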
This would allow us to try moving some of the experimental meta-data there and build the client parts. If the idea gains traction it could grow from there.
Some considerations:
- Some tests have to be private. I don't know how to solve that in namespaces. One idea that comes to mind is a .private. namespace that is explicitly non-global and can be provided by a local "test definition repository"
That would work, I think.
- It should probably be schema free, serving simple rfc822 files with python-like classifiers (Test::Platform::Android anyone?) as this will allow free experimentation
FWIW, I think they're pedantically called "trove classifiers" :-)
Right, thanks!
I guess there would be two mandatory fields: name and version. And maybe format? So you could have
Yeah, name and version is a good start. Obviously each test definition will have a maintainer / owner but that's not something that has to be visible here (and it certainly won't be a part of what gets published "to the archive" if we go that far).
Name: stream
Version: 1.0b3
Format: LAVA testdef version 1.3
We could also prefix all non-standard (non-standardized) headers with the vendor string (-Linaro, -Canonical), or have a standard custom extension header prefix, as with X-foo in HTTP.
...
and everything else would only need to make sense to LAVA.
Then you would say client side:
$ testdef-get lava-stream
We definitely need a catchy name
But seriously. I'm not entirely sure that the command line tool will be a part of the "standard issue". The same way you use pip to install python stuff from pypi, you'd use lava to install test definitions into lava. I can't imagine how a generic tool could know how to interact with lava and checkbox in a way that would still be useful. While your example is not strictly about running tests (it's about defining them), I think it's important to emphasize -- the protocols, and maybe the common repo, matter more than the tools, as those may be more domain-specific for a while.
Fetched lava-stream version 1.0b3
$ vi lava-stream.txt # update stuff
$ testdef-push lava-stream.txt
ERROR: lava-stream version 1.0b3 already exists on server
$ vi lava-stream.txt # Oops, update version
$ testdef-push lava-stream.txt
Uploaded lava-stream version 1.0b4
I wonder if we could actually cheat and use pypi to prototype this. I don't suppose they have a staging instance where I can register 20 tiny projects with oddball meta-data?
- It should (must?) have pypi-like version support so that a test can be updated but the old definition is never lost.
Must, imho. I guess support for explicitly removing a version would be good, but the default should be append-only.
No disagreement here
- It probably does not have to be the download server as anyone can host tests themselves. Just meta-data would be kept there.
By metadata you mean the key-value data as listed above, right?
Yes
(For small tests that may be enough but I can envision tests with external code and resources)
Yeah, the way lava-test tests can specify URLs and bzr and git repos to be fetched needs to stay I think.
That's the part I hate the most about the current LAVA setup. I think that going forward they should go away and be converted into test definitions that describe the very same code you'd git clone or bzr branch. The reason I believe that is that it will allow you to do reliable releases. It's the same distinction as if pypi had no tarballs, just git URLs; I think that would defeat the long-term purpose of the directory. Remember that both the test "wrapper" / definition and the test code are something that gets consumed by users/testers, so _both_ should be released in the same, reliable way.
In addition to that, having "downloads" makes offline use easier. I'm not entirely sure how that would work with very high level tests that, say, apt-get install something from the archive and then run some arbitrary commands. One might be tempted to create a reproducible test environment where all the downloads are kept offline and versioned, but perhaps that kind of test needs to be explicitly marked as non-idempotent and that's the actual value it provides.
Thanks ZK
Zygmunt Krynicki zygmunt.krynicki@linaro.org writes:
On 19.10.2012 01:36, Michael Hudson-Doyle wrote:
Incidentally that's something we may collaborate on.
Yeah, so how does checkbox deal with this? I guess it doesn't quite have the concept of remote users submitting requests for jobs to be run? (i.e. checkbox is more dispatcher than scheduler in lava terminology).
We have largely the same problem but in a different context (there are different internal users).
Checkbox has the concept of "whitelists" which basically specify the test scenario. Each item in the whitelist is a "job" (full test definition) that can use various checkbox "plugins" (like shell, manual and many others that I'm not familiar with). Checkbox then transforms the whitelist (resolving dependencies and things like that) and executes the tests much like the dispatcher would.
I see.
There are several use cases that are currently broken
Such as?
From what I recall, mostly around the way upstream/downstream (and sometimes side-stream) relationships work. The actual details are specific to Canonical (I would gladly explain in a private channel if you wish to know more), but the general idea is that without some API stability (and we offer none today) and script stability (which you can think of as another level of API), our downstream users (which are NOT just consumers) have a hard time following our releases.
The second issue, which this addresses more directly, is that tests flow poorly from team to team; to get "stability", people prefer to keep similar/identical tests to themselves (not as in secret, but as in not easily collaborated upon).
Ah. Now I understand your interest in this topic :-)
One of the proposals would be to build a pypi-like directory of tests and use that as a base for namespacing (first-come first-served name allocation). I'm not entirely sure this would help to solve the problem but it's something that, if available, could give us another vector.
Hm. This is definitely an interesting idea. I had actually already thought that using user specified distutils- or debian-style versioning would make sense -- you would get the latest version by the chosen algorithm by default, but could still upload revisions of old versions if you wanted to.
I'd rather avoid debian-style versions in favor of a strict, constant-length version system. Let's not have a custom postgresql function for comparing versions again ;)
Well, maybe debian-style is overkill. What I really want is this property: for any two versions A and B with A < B it is possible to construct a version C such that A < C < B. No constant length version system can satisfy this, but it doesn't necessarily imply anything as complicated as debian.
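A toy sketch of that property (mine, not anything from LAVA or checkbox): comparing dotted versions as integer tuples, a new version can slot between two existing ones, which is exactly what a fixed-width MAJOR.MINOR scheme cannot offer.

    # Toy illustration: "1.0.5" sorts between "1.0" and "1.1" when dotted
    # versions are compared as integer tuples, whereas a fixed two-component
    # scheme has nothing between 1.0 and 1.1.
    def as_tuple(version):
        return tuple(int(part) for part in version.split("."))

    assert as_tuple("1.0") < as_tuple("1.0.5") < as_tuple("1.1")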
Part of this would be a command line tool for fetching / publishing test definitions I guess. In fact this could almost be the main thing: it depends whether you want to produce (and host, I guess) a single site which is the centrepoint of the test definition world (like pypi.python.org is for Python stuff) or just the tools / protocols people use to run and work with their own repositories (testdef.validation.linaro.org or testdef.qa.ubuntu.com or whatever).
I think that there _should_ be a central repository simply because it means fewer fractures early on. From what I know people don't deploy their own pypi just to host their pet project. They only do that if they depend on the protocols and tools around pypi and want to keep the code private.
I guess I am a little skeptical of the amount of test reuse that's going to be possible between different users. I mean, if a testdef includes device tags that must be present on a device for a test run to be possible -- as power measurement tests might well do -- that sort of ties the testdef to *our lab*, never mind LAVA in general, unless the concept of and specific names of device tags becomes more widespread than I really expect at the moment.
I think that, as with pypi, even if there is a "single centrepoint of the test definition world", we should expect that sites will have local test repositories for one reason and another (as they do with pypi).
Having said what I did above, nothing can prevent others from re-implementing the same protocols or deploying their own archive but I think we should encourage working in the common pool as this will improve the ecosystem IMHO (look at easy_install, pip or even crate.io,
What is crate.io btw? Pypi with a prettier skin?
they would not have happened if there was a competing group of pypi-like systems that have no dominance over others). In other words the value of pypi is the data that is stored there.
Well sure. But there are lots and lots and lots of sites that use, say, Django. How many are going to use Linaro's big.LITTLE tests? I think there is actually a difference here. I get the impression that your situation is somewhat different -- that you have checkbox users who really should be collaborating on test definitions but have no way of doing so right now.
Another way to handle namespacing is to include the name of the user / group that can update a resource in its name, ala branches on LP or repos on github (or bundle streams in LAVA). Not sure if that's a good idea for our use case or not.
I thought about one thing that would warrant a ~user/project approach. Both pypi and launchpad are product-centric -- you go to shop for solutions looking for the product name. GitHub on the other hand is developer-centric, as $product can have any number of forks that are equally exposed.
I think for our goals we should focus on product-centric views. The actual code, wherever it exists, should be managed with other tools.
I /think/ I agree here...
I would not like to outgrow this concept to a DVCS or a code hosting tool.
I wonder if checkbox's rfc822ish format would be better than JSON for test interchange...
Probably although it's still imperfect and suffers from binary deficiency.
Having slept on this (a few times) I think I'm leaning towards .ini files, somewhat like the stuff you came up with for lava-test.
[metadata]
name: stream
version: 1.0
format: lava-test v1.0

[install]
url: http://www.cs.virginia.edu/stream/FTP/Code/stream.c
steps: cc stream.c -O2 -fopenmp -o stream

[run]
steps: ./stream

[parse]
pattern: ^(?P<test_case_id>\w+):\W+(?P<measurement>\d+.\d+)

[parse:appendall]
units: MB/s
result: pass
for example. In my mind, the metadata section and the keys it has here are mandatory, all else is free-form (although there will be something somewhere that knows that this format string gives a meaning to the keys and values present in this example).
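For what it's worth, a rough sketch (not existing lava-test code) of how such a file could be read and the mandatory [metadata] keys checked with Python 3's configparser, leaving every other section free-form:

    # Rough sketch only: validate the mandatory metadata keys and pass the
    # remaining sections through untouched.
    import configparser

    REQUIRED_METADATA = ("name", "version", "format")

    def load_testdef(path):
        parser = configparser.ConfigParser(interpolation=None)
        parser.read(path)
        if "metadata" not in parser:
            raise ValueError("testdef has no [metadata] section")
        missing = [key for key in REQUIRED_METADATA if key not in parser["metadata"]]
        if missing:
            raise ValueError("missing mandatory metadata keys: %s" % ", ".join(missing))
        # Everything else only needs to make sense to the tool named by 'format'.
        return {section: dict(parser[section]) for section in parser.sections()}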
What I'd like to see in practice is a web service that is free-for-all that can hold test meta data. I believe that as we go test meta data will formalize and at some point it may become possible to run lava-test test from checkbox and checkbox job in lava (given appropriate adapters on both sides) merely by specifying the name of the test.
So that's an argument for aiming for a single site? Maybe. Maybe you'd just give a URL of a testdef rather than the name of a test, so http://testdef.validation.linaro.org/stream rather than just 'stream'.
Imagine pip installing that each time. IMO it's better to stick to names rather than URLs, if we can.
Yes, this part is a good point. The other option would be to have per user/site configuration for a default site, but well. Less configuration -> better.
People already know how to manage names, whereas URLs are something we can only google for.
The full URL could be usable for some kind of "packages" but that's not the primary scope of the proposal, I think. Packages are more complicated and secondary and the directory should merely point you at something that you can install with an absolute URL.
I don't think I understand what you mean here.
Initially it could be a simple RESTful interface based on a dumb HTTP server serving files from a tree structure.
And then could grow wiki like features? :-)
I'd rather not go there. IMHO it should only have search and CRUD actions on the content. Anything beyond that works better elsewhere (readthedocs / crate.io). Remember that it's not the 'appstore' experience that we are after here. The goal is to introduce a common component that people can converge and thrive on. This alone may give us better code re-usability as we gain partial visibility to other developers _and_ we fix the release process for test definitions so that people can depend on them indefinitely.
One of the user stories we have is "which tests are available to run on board X with Y deployed to it?" -- if we use test repositories that are entirely disconnected from the LAVA database I think this becomes a bit harder to answer. Although one could make searching a required feature of a test repository...
I think that's something to do in stage 2 as we get a better understanding of what we have.
Hm. Not sure -- it really is something we want ASAP in lava.
In the end the perfect solution, for LAVA, might be LAVA-specific and we should not sacrifice the generic useful aspects in the quest for something this narrow.
Some simple classifiers that might help there:
Environment::Hardware::SoC::OMAP35xx
Environment::Hardware::Board::Panda Board ES
Environment::Hardware::Add-Ons::Linaro::ABCDXYZ-Power-Probe
Environment::Software::Linaro::Ubuntu Desktop
Environment::Software::Ubuntu::Ubuntu Desktop
I think KISS will need to be applied here.
But this requires building a sensible taxonomy which is something I don't want to require in the first stage. The important part is to be _able_ to build one as the meta-data format won't constrain you. As we go we can release "official" meta-data spec releases that standardize what certain things mean. This could then be used as a basis for reliable (as in no false positives) and advanced search tools.
Right, let's just solve the problem in my face now in a way that doesn't flagrantly prevent more general solutions later.
This would allow us to try moving some of the experimental meta-data there and build the client parts. If the idea gains traction it could grow from there.
Some considerations:
- Some tests have to be private. I don't know how to solve that in namespaces. One idea that comes to mind is a .private. namespace that is explicitly non-global and can be provided by a local "test definition repository"
That would work, I think.
- It should probably be schema free, serving simple rfc822 files with python-like classifiers (Test::Platform::Android anyone?) as this will allow free experimentation
FWIW, I think they're pedantically called "trove classifiers" :-)
Right, thanks!
I guess there would be two mandatory fields: name and version. And maybe format? So you could have
Yeah, name and version is a good start. Obviously each test definition will have a maintainer / owner but that's not something that has to be visible here (and it certainly won't be a part of what gets published "to the archive" if we go that far).
Name: stream
Version: 1.0b3
Format: LAVA testdef version 1.3
We could also prefix all non-standard (non-standardized) headers with the vendor string (-Linaro, -Canonical), or have a standard custom extension header prefix, as with X-foo in HTTP.
Blah. No thanks.
...
and everything else would only need to make sense to LAVA.
Then you would say client side:
$ testdef-get lava-stream
We definitely need a catchy name
But seriously. I'm not entirely sure that the command line tool will be a part of the "standard issue". The same way you use pip to install python stuff from pypi, you'd use lava to install test definitions into lava. I can't imagine how a generic tool could know how to interact with lava and checkbox in a way that would still be useful. While your example is not strictly about running tests (it's about defining them), I think it's important to emphasize -- the protocols, and maybe the common repo, matter more than the tools, as those may be more domain-specific for a while.
You've lost me here, I'm afraid.
I really do care mostly about the experience of the people maintaining tests. *I* am (or at least, the LAVA team is) going to be writing the test execution stuff, so that more falls in the "do it once, forget about it" category.
Fetched lava-stream version 1.0b3
$ vi lava-stream.txt # update stuff
$ testdef-push lava-stream.txt
ERROR: lava-stream version 1.0b3 already exists on server
$ vi lava-stream.txt # Oops, update version
$ testdef-push lava-stream.txt
Uploaded lava-stream version 1.0b4
I wonder if we could actually cheat and use pypi to prototype this.
Interesting idea.
I don't suppose they have a staging instance where I can register 20 tiny projects with oddball meta-data?
There is, it turns out: http://testpypi.python.org/pypi
http://wiki.python.org/moin/PyPiImplementations is also relevant if we want to just run our own instance of PyPI-like software.
- It probably does not have to be the download server as anyone can host tests themselves. Just meta-data would be kept there.
By metadata you mean the key-value data as listed above, right?
Yes
(For small tests that may be enough but I can envision tests with external code and resources)
Yeah, the way lava-test tests can specify URLs and bzr and git repos to be fetched needs to stay I think.
That's the part I hate the most about the current LAVA setup. I think that going forward they should go away and be converted into test definitions that describe the very same code you'd git clone or bzr branch.
I agree, but I don't know how practical this is. I need to work through some moderately complicated existing tests -- if a test includes a few C files that need compilation, I don't think squeezing that into a testdef file is really practical. But maybe if the testdef server hosts more than just the testdef file (so more like PyPI, in fact) this becomes bearable again... not sure how this relates to some of your comments around just hosting metadata.
The reason I believe that is that it will allow you to do reliable releases. It's the same distinction as if pypi had no tarballs, just git URLs; I think that would defeat the long-term purpose of the directory. Remember that both the test "wrapper" / definition and the test code are something that gets consumed by users/testers, so _both_ should be released in the same, reliable way.
In addition to that, having "downloads" makes offline use easier. I'm not entirely sure how that would work with very high level tests that, say, apt-get install something from the archive and then run some arbitrary commands. One might be tempted to create a reproducible test environment where all the downloads are kept offline and versioned, but perhaps that kind of test needs to be explicitly marked as non-idempotent and that's the actual value it provides.
Well. For things that are sufficiently declarative (specifying a git repo to clone or a package to install with apt-get) we can capture the versions that are obtained. For an arbitrary URL one could checksum the download and issue a warning if it has changed. But if the test code itself runs "apt-get update/install", well, then it's just not a test that supports being replayed on this level. As we don't have any support at all for re-running tests, I'm not sure I want to spend /too/ long worrying about this aspect...
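For the arbitrary-URL case, a sketch of what that checksumming could look like (a hypothetical helper, not something the dispatcher does today):

    # Sketch: fetch a URL, compute its sha256, and warn if a later run sees
    # content different from the recorded digest.
    import hashlib
    import urllib.request

    def fetch_and_check(url, expected_sha256=None):
        data = urllib.request.urlopen(url).read()
        digest = hashlib.sha256(data).hexdigest()
        if expected_sha256 is not None and digest != expected_sha256:
            print("WARNING: %s has changed since the recorded run" % url)
        return data, digest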
Cheers, mwh
Michael Hudson-Doyle michael.hudson@linaro.org writes:
I wonder if checkbox's rfc822ish format would be better than JSON for test interchange...
Probably although it's still imperfect and suffers from binary deficiency.
Having slept on this (a few times) I think I'm leaning towards .ini files
Actually, now I'm leaning towards YAML. It has its crazy corners but we don't need to advertise those and the core isn't too bad. It's also already used in Linaro to describe hwpacks.
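To make that concrete, here is one possible YAML rendering of the stream testdef sketched earlier, loaded with PyYAML; none of the section or field names are a settled format, it just illustrates the non-flat structure and basic data types.

    # Hypothetical YAML testdef mirroring the earlier .ini sketch; the field
    # names are illustrative only.
    import textwrap
    import yaml

    TESTDEF = r"""
    metadata:
      name: stream
      version: "1.0"
      format: lava-test v1.0
    install:
      url: http://www.cs.virginia.edu/stream/FTP/Code/stream.c
      steps:
        - cc stream.c -O2 -fopenmp -o stream
    run:
      steps:
        - ./stream
    parse:
      pattern: '^(?P<test_case_id>\w+):\W+(?P<measurement>\d+\.\d+)'
      units: MB/s
      result: pass
    """

    testdef = yaml.safe_load(textwrap.dedent(TESTDEF))
    assert testdef["metadata"]["name"] == "stream"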
Cheers, mwh
On 25.10.2012 03:13, Michael Hudson-Doyle wrote:
Michael Hudson-Doyle michael.hudson@linaro.org writes:
I wonder if checkbox's rfc822ish format would be better than JSON for test interchange...
Probably although it's still imperfect and suffers from binary deficiency.
Having slept on this (a few times) I think I'm leaning towards .ini files
Actually, now I'm leaning towards YAML. It has its crazy corners but we don't need to advertise those and the core isn't too bad. It's also already used in Linaro to describe hwpacks.
Despite my dislike of YAML and the crazy stuff it supports, I'm +1 on the non-flat structure and basic data types.
+1 from me
Thanks ZK
PS: I'll reply to the rest of the thread later today as I had a busy day yesterday
Hey everyone
I've registered testdef on pypi and pushed a skeleton to github. I'll be pushing some code examples and proposals here for review; if there are no objections I'll land them to trunk.
You are encouraged to fork and send pull requests as well. If anyone wants to co-own the pypi name I'm glad to share it.
Thanks ZK
Andy Doan andy.doan@linaro.org writes:
On 10/16/2012 07:47 PM, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It feels good just to see this all listed concisely.
I think we probably need one other section in the page like "testdef organization". This would describe any rules we have for filenames or directory hierarchies needed by the git/bzr testdef repo so that we'll know how to import things.
Yeah. I guess there is also an issue around namespacing if we allow pulling from multiple branches -- if you pull from two places and both define the test called "stream", what happens?
Another thought -- do we want to have private tests?
Cheers, mwh
On 10/17/2012 02:55 PM, Michael Hudson-Doyle wrote:
Andy Doan andy.doan@linaro.org writes:
On 10/16/2012 07:47 PM, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It feels good just to see this all listed concisely.
I think we probably need one other section in the page like "testdef organization". This would describe any rules we have for filenames or directory hierarchies needed by the git/bzr testdef repo so that we'll know how to import things.
Yeah. I guess there is also an issue around namespacing if we allow pulling from multiple branches -- if you pull from two places and both define the test called "stream", what happens?
good point.
Another thought -- do we want to have private tests?
I think we have to assume yes considering these test defs will/should contain descriptions of tests. However, a private repo doesn't feel like much of a difference other than the URL given to us to pull from. I'd think you'd just want to use a white-listing approach like we currently do for private builds.
Andy Doan andy.doan@linaro.org writes:
On 10/17/2012 02:55 PM, Michael Hudson-Doyle wrote:
Andy Doan andy.doan@linaro.org writes:
On 10/16/2012 07:47 PM, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It feels good just to see this all listed concisely.
I think we probably need one other section in the page like "testdef organization". This would describe any rules we have for filenames or directory hierarchies needed by the git/bzr testdef repo so that we'll know how to import things.
Yeah. I guess there is also an issue around namespacing if we allow pulling from multiple branches -- if you pull from two places and both define the test called "stream", what happens?
good point.
Also, suppose you use the web UI to update a test and then an update from a branch pulls in an update to the same test? (This is starting to feel a bit like the issues a dvcs-backed wiki like ikiwiki faces...)
Another thought -- do we want to have private tests?
I think we have to assume yes considering these test defs will/should contain descriptions of tests. However, a private repo doesn't feel like much of a difference other than the URL given to us to pull from. I'd think you'd just want to use a white-listing approach like we currently do for private builds.
I was thinking more like a test where only privileged people can submit a job that runs the test and where the results of the test can only be submitted to a stream that has the same visibility as the test, or things like that.
Cheers, mwh
Hello,
Sorry for my delay. I have been busy with other things.
On Wed, 2012-10-17 at 13:47 +1300, Michael Hudson-Doyle wrote:
We're going to be talking about test case management in LAVA at the Connect. I've brain-dumped some of my thoughts here:
https://wiki.linaro.org/Platform/LAVA/Specs/TestCaseManagement
Comments welcome. But if all you do is read it before coming to the session, that's enough for me :-)
It's good to see people discussing it. I'm glad.
Here are some comments about this initial spec:
[Terminologies]
As a section of the page says, it's not going to change the test/testcase/testrun/testresult terminology for now. It would be good to have each of these terms well explained somewhere. It might be crystal clear to you guys, but for other people it might be confusing when attending the Linaro Connect session, since these same terms are commonly used to represent other QA artifacts.
[Test case concept]
The first thing that passed through my head when I started thinking about TC (sorry, I don't know what to call it considering LAVA naming) inside LAVA was that it would be a different object/artifact/file, and not really represented by the test definition concept/file LAVA already has today.
Is that what you guys are planning to do? To consider the test def that we have today as the TC entity? If yes, I probably have some questions about it.
Cheers,
-Abner
Cheers, mwh
Hi All:
When I try to install the latest lava-dispatcher from https://launchpad.net/lava-dispatcher using ./setup.py install, I get this error:
error: Setup script exited with error in lava-utils-interface setup command: Unable to import 'lava.utils.interface': No module named utils.interface
How do I fix it? I use Ubuntu 12.04.
I tried installing python-utils, but that does not seem to help.
Thanks and best regards Elen song
Hello,
On Tue, Dec 25, 2012 at 08:22:38AM +0000, Song, Elen wrote:
Hi All:
When I try to install the latest lava-dispatcher from https://launchpad.net/lava-dispatcher using ./setup.py install, I get this error:
error: Setup script exited with error in lava-utils-interface setup command: Unable to import 'lava.utils.interface': No module named utils.interface
How do I fix it? I use Ubuntu 12.04.
I tried installing python-utils, but that does not seem to help.
Try following the installation instructions at http://lava-dispatcher.readthedocs.org/
Hi Antonio:
Thanks for your support last time. I have another question, about bundle streams. When I finish a job and go to find the result in bundle streams, I see this error:
Cause
null value in column "_order" violates not-null constraint
Deserialization failure traceback
Traceback (most recent call last):
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/models.py", line 497, in deserialize
    self._do_deserialize(prefer_evolution)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/models.py", line 518, in _do_deserialize
    helper.deserialize(self, prefer_evolution)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 801, in deserialize
    importer().import_document(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 64, in import_document
    self._import_document_with_transaction(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/django/db/transaction.py", line 209, in inner
    return func(*args, **kwargs)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 106, in _import_document_with_transaction
    self._import_document(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 110, in _import_document
    self._import_test_run(c_test_run, s_bundle)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 152, in _import_test_run
    self._import_test_results(c_test_run, s_test_run)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 391, in _import_test_results
    self._import_test_results_pgsql(c_test_results, s_test_run)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 365, in _import_test_results_pgsql
    """ % (s_test_run.id, s_test_run.test.id))
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 52, in execute
    return self.cursor.execute(query, args)
IntegrityError: null value in column "_order" violates not-null constraint
I don't know how this occurs. I have made some changes in lava-dispatcher to adapt it to my own board, and I use lava-deployment-tool-2012-06. The requirements are:
Lava-server < 0.16
Lava-tool < 0.6
Lava-scheduler < 0.2
Lava-scheduler-tool
Lava-dashboard < 0.18
Lava-dashboard-tool
Lava-dispatcher < 0.15
Simplejson < 2.5
Keyring
I cannot use the latest version because it has changed too much for me to merge my lava-dispatcher changes into it. Is there any suggestion as to where the problem may be?
Thanks and best regards Elen song
"Song, Elen" Elen.Song@atmel.com writes:
Hi Antonio:
Thanks for your support last time. I have another question, about bundle streams. When I finish a job and go to find the result in bundle streams, I see this error:
Cause
null value in column "_order" violates not-null constraint
Deserialization failure traceback
Traceback (most recent call last):
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/models.py", line 497, in deserialize
    self._do_deserialize(prefer_evolution)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/models.py", line 518, in _do_deserialize
    helper.deserialize(self, prefer_evolution)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 801, in deserialize
    importer().import_document(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 64, in import_document
    self._import_document_with_transaction(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/django/db/transaction.py", line 209, in inner
    return func(*args, **kwargs)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 106, in _import_document_with_transaction
    self._import_document(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 110, in _import_document
    self._import_test_run(c_test_run, s_bundle)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 152, in _import_test_run
    self._import_test_results(c_test_run, s_test_run)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 391, in _import_test_results
    self._import_test_results_pgsql(c_test_results, s_test_run)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 365, in _import_test_results_pgsql
    """ % (s_test_run.id, s_test_run.test.id))
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 52, in execute
    return self.cursor.execute(query, args)
IntegrityError: null value in column "_order" violates not-null constraint
I don't know how this occurs. I have made some changes in lava-dispatcher to adapt it to my own board, and I use lava-deployment-tool-2012-06. The requirements are:
Lava-server < 0.16
Lava-tool < 0.6
Lava-scheduler < 0.2
Lava-scheduler-tool
Lava-dashboard < 0.18
Lava-dashboard-tool
Lava-dispatcher < 0.15
Simplejson < 2.5
Keyring
I cannot use the latest version because it has changed too much for me to merge my lava-dispatcher changes into it. Is there any suggestion as to where the problem may be?
It's hard to help support such an old version of the code (maybe you should try to contribute support of your board to trunk, so it can be maintained going forwards?) but for me, _order defaults to 0 in postgres:
lava-dev=# \d dashboard_app_testresult
          Table "public.dashboard_app_testresult"
     Column     |           Type           |      Modifiers
----------------+--------------------------+---------------------
 test_run_id    | integer                  | not null
 _order         | integer                  | not null default 0
 ...
Does it not for you?
(I also don't know what the _order field is for, it seems to be something added by Django...)
Cheers, mwh
-----Original Message-----
From: Michael Hudson-Doyle [mailto:michael.hudson@linaro.org]
Sent: 7 January 2013 5:14
To: Song, Elen; Antonio Terceiro
Cc: Linaro Validation; Spring Zhang
Subject: Re: [Linaro-validation] [bundle streams] Deserialization error
"Song, Elen" Elen.Song@atmel.com writes:
Hi Antonio:
Thanks for your support last time. I have another question, about bundle streams. When I finish a job and go to find the result in bundle streams, I see this error:
Cause
null value in column "_order" violates not-null constraint
Deserialization failure traceback
Traceback (most recent call last):
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/models.py", line 497, in deserialize
    self._do_deserialize(prefer_evolution)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/models.py", line 518, in _do_deserialize
    helper.deserialize(self, prefer_evolution)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 801, in deserialize
    importer().import_document(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 64, in import_document
    self._import_document_with_transaction(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/django/db/transaction.py", line 209, in inner
    return func(*args, **kwargs)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 106, in _import_document_with_transaction
    self._import_document(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 110, in _import_document
    self._import_test_run(c_test_run, s_bundle)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 152, in _import_test_run
    self._import_test_results(c_test_run, s_test_run)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 391, in _import_test_results
    self._import_test_results_pgsql(c_test_results, s_test_run)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 365, in _import_test_results_pgsql
    """ % (s_test_run.id, s_test_run.test.id))
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 52, in execute
    return self.cursor.execute(query, args)
IntegrityError: null value in column "_order" violates not-null constraint
I don't know how this occurs. I have made some changes in lava-dispatcher to adapt it to my own board, and I use lava-deployment-tool-2012-06. The requirements are:
In fact, it is lava-deployment-tool version 0.4. This is the only version I found where I can configure the lava-server version.
Lava-server < 0.16
Lava-tool < 0.6
Lava-scheduler < 0.2
Lava-scheduler-tool
Lava-dashboard < 0.18
Lava-dashboard-tool
Lava-dispatcher < 0.15
Simplejson < 2.5
Keyring
I cannot use the latest version because it has changed too much for me to merge my lava-dispatcher changes into it. Is there any suggestion as to where the problem may be?
It's hard to help support such an old version of the code (maybe you should try to contribute support of your board to trunk, so it can be maintained going forwards?) but for me, _order defaults to 0 in postgres:
lava-dev=# \d dashboard_app_testresult
          Table "public.dashboard_app_testresult"
     Column     |           Type           |      Modifiers
----------------+--------------------------+---------------------
 test_run_id    | integer                  | not null
 _order         | integer                  | not null default 0
 ...
Does it not for you?
Sorry, where is this postgres database located? I can't find it in my /srv/lava/instance/testinstance/. There is no dashboard_app_testresult there. Am I missing something during the installation?
(I also don't know what the _order field is for, it seems to be something added by Django...)
Cheers, mwh
Hi Antonio:
More of the traceback, which may make it easier to locate the problem:
Traceback (most recent call last):
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/models.py", line 497, in deserialize
    self._do_deserialize(prefer_evolution)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/models.py", line 518, in _do_deserialize
    helper.deserialize(self, prefer_evolution)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 801, in deserialize
    importer().import_document(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 64, in import_document
    self._import_document_with_transaction(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/django/db/transaction.py", line 209, in inner
    return func(*args, **kwargs)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 106, in _import_document_with_transaction
    self._import_document(s_bundle, doc)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 110, in _import_document
    self._import_test_run(c_test_run, s_bundle)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 152, in _import_test_run
    self._import_test_results(c_test_run, s_test_run)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 391, in _import_test_results
    self._import_test_results_pgsql(c_test_results, s_test_run)
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/dashboard_app/helpers.py", line 365, in _import_test_results_pgsql
    """ % (s_test_run.id, s_test_run.test.id))
  File "/srv/lava/instances/testinstance/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 52, in execute
    return self.cursor.execute(query, args)
IntegrityError: null value in column "_order" violates not-null constraint
Best regards Elen song
Hi Antonio:
Can I use lava-deployment-tool to install an old LAVA version, like lava-server 2012.06 and the other packages?
Best regards Elen song
Hi Elen,
On Wed, Jan 09, 2013 at 06:17:01AM +0000, Song, Elen wrote:
Hi Antonio:
Can I use lava-deployment-tool to install an old LAVA version, like lava-server 2012.06 and the other packages?
In theory: yes, you should be able to do it by setting the LAVA_MANIFEST_BRANCH environment variable to a bzr URL of a manifest branch. The default is lp:lava-manifest, which we keep up to date to always install the latest lava, so you could branch off that and revert to the state of lava-server 2012.06, and then try:
$ LAVA_MANIFEST_BRANCH=lp:~you/lava-manifest/lava-2012.06 ./lava-deployment-tool [ARGS]
In practice: we don't test doing that, so I have no idea what problems you might find. Also, lava-deployment-tool has evolved since then and it might be making assumptions that weren't true for older lava code.
Hi Antonio:
-----Original Message----- From: Antonio Terceiro [mailto:antonio.terceiro@linaro.org] Sent: 2013年1月9日 21:44 To: Song, Elen Cc: Michael Hudson-Doyle; Linaro Validation Subject: Re: [Linaro-validation] lava-deployment-tool
Hi Elen,
On Wed, Jan 09, 2013 at 06:17:01AM +0000, Song, Elen wrote:
Hi Antonio:
Can I use lava-deployment-tool to install an old LAVA version, like lava-server 2012.06 and the other packages?
In theory: yes, you should be able to do it by setting the LAVA_MANIFEST_BRANCH environment variable to a bzr URL of a manifest branch. The default is lp:lava-manifest, which we keep up to date to always install the latest lava, so you could branch off that and revert to the state of lava-server 2012.06, and then try:
$ LAVA_MANIFEST_BRANCH=lp:~you/lava-manifest/lava-2012.06 ./lava-deployment-tool [ARGS]
Yes, that could probably work. And I found that buildout.cfg has to be changed to buildout-production.cfg in lava-deployment-tool so that it will download the relevant lava stuff.
In practice: we don't test doing that, so I have no idea what problems you might find. Also, lava-deployment-tool has evolved since then and it might be making assumptions that weren't true for older lava code.
--
Antonio Terceiro
Software Engineer - Linaro
http://www.linaro.org
Best regards Elen Song