On 19 June 2015 at 11:46, Luis Araujo luis.araujo@collabora.co.uk wrote:
Hello Milosz,
On 06/18/2015 09:52 PM, Milosz Wasilewski wrote:
Luis,
I'm now doing a similar thing. The only difference is that the target is a web application rather than a command line tool. Looking at your code, it seems there are some common parts. You can check the data polling code from here: https://git.linaro.org/people/milosz.wasilewski/dataminer.git
This is really interesting. I am thinking of adding similar DB support to lqa to allow some of the query options you have there.
I already have a DB. It should be rolled out to qa-reports this week or next (still fixing a few bugs).
On 17 June 2015 at 16:35, Luis Araujo luis.araujo@collabora.co.uk wrote:
Hello everyone,
Collabora has been working on `lqa', a tool to submit and manage LAVA jobs, which helps get many LAVA job administration and monitoring tasks done conveniently from the command line.
`lqa' brings a new API, the lqa_api Python module: a complete set of classes for interacting easily with LAVA, which at the same time offers a clean API on top of which further applications can be built (like `lqa' itself).
It has a templating system (using the jinja2 package) that allows the use of variables in JSON job files (in the future it could be expanded to support YAML). Variable values can be specified either in a profile file or directly on the command line, making dynamic assignment of template variables possible during `lqa' command execution. The templating mechanism can handle groups of jobs, which makes it easier to submit jobs in bulk.
`lqa' also features a flexible profile system (in YAML) which allows specifying a 'main-profile' from which further sub-profiles can inherit values, avoiding duplication of information between similar profiles.
Other current features include:
- Test report generation with the 'analyse' subcommand.
I'm not sure if _find_missing_tests [1] works properly for you. The tests in the JSON job definition are identified using a git repository URL and a YAML file path. In the result bundle you have the git repository URL, commit ID and test name (which comes from the metadata->name property). So in order to check what is missing, you need to check out the proper commit from the repository, go through all YAML files, find the proper metadata->name and match it to the file name. Since the names in metadata are not guaranteed to be unique, you can't be 100% sure you're hitting the right YAML file :(
The main idea of this method is finding the tests that are specified in the JSON job file but have no available results in the final bundle (maybe a more accurate name would be _find_missing_results).
So far, it has been working fine, properly reporting the missing results.
Sounds strange. The code shouldn't work. I'll try it locally and let you know how that looks.
Maybe your point is more about finding the missing test definitions from the repositories?
no, I'm talking about exactly the same case - find out if the test-shell produced results or not.
milosz
[1] https://git.collabora.com/cgit/singularity/tools/lqa.git/tree/lqa_tool/comma...
- Polling to check for job completion.
- All the operations offer logging capabilities.
- Independent profile and configuration files.
We invite everyone to check out its official git repo at:
https://git.collabora.com/cgit/singularity/tools/lqa.git/
Suggestions and comments are welcome.
--- Luis
linaro-validation mailing list linaro-validation@lists.linaro.org https://lists.linaro.org/mailman/listinfo/linaro-validation