On 06/25/2015 02:59 AM, Milosz Wasilewski wrote:
On 24 June 2015 at 16:00, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On 19 June 2015 at 11:46, Luis Araujo luis.araujo@collabora.co.uk wrote:
Hello Milosz,
On 06/18/2015 09:52 PM, Milosz Wasilewski wrote:
Luis,
I'm now doing a similar thing. The only difference is that the target is a web application rather than a command-line tool. Looking at your code, it seems there are some common parts. You can check the data polling code here: https://git.linaro.org/people/milosz.wasilewski/dataminer.git
This is really interesting. I am thinking of adding similar DB support to lqa to allow some of the query options you have there.
I already have a DB. It should be rolled out to qa-reports this or next week (still fixing a few bugs).
On 17 June 2015 at 16:35, Luis Araujo luis.araujo@collabora.co.uk wrote:
Hello everyone,
Collabora has been working on `lqa', a tool to submit and manage LAVA jobs, which makes it convenient to handle many of the LAVA job administration and monitoring tasks from the command line.
`lqa' brings a new API, the lqa_api Python module: a complete set of classes to interact with LAVA easily, offering at the same time a clean API on top of which further applications can be built (like `lqa' itself).
It has a templating system (using the jinja2 package) that allows variables to be used in JSON job files (in the future this could be expanded to support YAML). Variable values can be specified either in a profile file or directly on the command line, making dynamic assignment of template variables possible during the `lqa' command execution. The templating mechanism can also handle groups of jobs, which makes it easier to submit jobs in bulk.
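For illustration, a templated JSON job fragment could look roughly like this (the variable names are only examples and the fields follow the usual LAVA JSON job format; a real template would carry the full job definition):

    {
        "job_name": "{{ job_name }}",
        "device_type": "{{ device_type }}",
        "timeout": {{ timeout }},
        "actions": [
            {
                "command": "deploy_linaro_image",
                "parameters": { "image": "{{ image_url }}" }
            }
        ]
    }

The values for job_name, device_type, timeout and image_url would then come either from a profile file or from the command line at submission time.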
`lqa' also features a flexible profile system (in YAML) which allows specifying a 'main-profile' from which further sub-profiles can inherit values, avoiding information duplication between similar profiles.
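As a rough illustration, a profile file could be laid out like this (only the 'main-profile' idea is taken from the description above; the other key names are made up for the example and may not match lqa's actual schema):

    main-profile:
        device_type: vexpress64
        timeout: 18000

    daily-tests:
        # sub-profile; inherits device_type and timeout from main-profile
        image_url: http://example.com/images/latest.img
        job_name: daily-tests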
Other current features include:
- Test report generation with the 'analyse' subcommand.
I'm not sure if _find_missing_tests [1] works properly for you. The tests in the JSON job definition are identified using the git repository URL and the YAML file path. In the result bundle you have the git repository URL, the commit ID and the test name (which comes from the metadata->name property). So in order to check what is missing you need to check out the proper commit from the repository, go through all the YAML files, find the proper metadata->name and match it to the file name. Since the names in metadata are not guaranteed to be unique, you can't be 100% sure you're hitting the right YAML file :(
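Roughly, the lookup you would need for each bundle result is something like this (just a sketch; the function name and repository handling are hypothetical, and it assumes a local clone of the test definitions repository):

    import os
    import subprocess
    import yaml

    def find_yaml_for_result(repo_dir, commit_id, metadata_name):
        # Check out the exact commit recorded in the result bundle.
        subprocess.check_call(['git', 'checkout', commit_id], cwd=repo_dir)
        matches = []
        for root, _dirs, files in os.walk(repo_dir):
            for fname in files:
                if not fname.endswith('.yaml'):
                    continue
                path = os.path.join(root, fname)
                with open(path) as f:
                    try:
                        data = yaml.safe_load(f)
                    except yaml.YAMLError:
                        continue
                meta = data.get('metadata', {}) if isinstance(data, dict) else {}
                if meta.get('name') == metadata_name:
                    matches.append(os.path.relpath(path, repo_dir))
        # More than one match means the metadata name is not unique, so the
        # mapping back to the JSON job definition is ambiguous.
        return matches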
The main idea of this method is to find the tests that are specified in the JSON job file but have no results available in the final bundle (maybe a more accurate name would be _find_missing_results).
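Conceptually it is just a set difference between the tests requested in the job file and the tests that actually reported results in the bundle (a hypothetical sketch, not the actual lqa code):

    def find_missing_results(requested_tests, reported_test_runs):
        # requested_tests: test names taken from the JSON job definition
        # reported_test_runs: test names found in the result bundle's test runs
        return sorted(set(requested_tests) - set(reported_test_runs))

    # e.g. find_missing_results(['ltp', 'kselftest-net'], ['kselftest-net']) -> ['ltp']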
So far, it has been working fine, properly reporting the missing results.
Sounds strange. The code shouldn't work. I'll try it locally and let you know how that looks.
Maybe your point is more about finding the missing test definitions from the repositories?
No, I'm talking about exactly the same case: finding out whether the test shell produced results or not.
I checked and it doesn't work (it doesn't detect missing results). Here is an example job: https://validation.linaro.org/scheduler/job/382325 (I'm not sure it's publicly available).
I cannot access it (even logged in with my launchpad account).
Can you send me a publicly available link with this same problem? I really would like to check this out.
There are a couple of LTP test shells with parameters. Results for TST_CMDFILES=fs are missing and lqa doesn't show that. Here is the output I got:
./lqa -c examples/lqa.yaml analyse 382325
Generating lqa report for job(s): 382325

Report for job(s) (Wed Jun 24 19:52:31 2015): 382325
1 test job(s) ran: 1 complete (0 fully successful, 1 wih failures), 0 incomplete

* --- Failed Jobs --- *
Bundles
(F) Jobs with failed tests:
382325: https://ci.linaro.org/jenkins/job/linux-linaro-stable-lsk-v3.14/hwpack=vexpress64,label=build/81/
=========================================================================================================
2075 passed, 44 failed, 2 skipped, 0 unknown
FAILED | kselftest-net:net
FAILED | kselftest-net:psock_fanout test
FAILED | kselftest-net:psock_tpacket test
FAILED | ltp:LTP_admin_tools
FAILED | ltp:su01
FAILED | ltp:LTP_containers
FAILED | ltp:netns_devices
FAILED | ltp:netns_devices2
FAILED | ltp:netns_isolation
FAILED | perf:perf report test
FAILED | perf:perf test - vmlinux symtab matches kallsyms
FAILED | perf:perf test - detect open syscall event
FAILED | perf:perf test - detect open syscall event on all cpus
FAILED | perf:perf test - read samples using the mmap interface
FAILED | perf:perf test - parse events tests
FAILED | perf:perf test - Test breakpoint overflow signal handler
FAILED | perf:perf test - Test breakpoint overflow sampling
FAILED | perf:perf test - Test tracking with sched_switch
FAILED | kselftest-vm:vm
FAILED | kselftest-vm:vm
FAILED | kselftest-vm:hugetlbfstest
FAILED | ltp:LTP_syscalls
FAILED | ltp:accept4_01
FAILED | ltp:connect01
FAILED | ltp:fsync02
FAILED | ltp:ftruncate04
FAILED | ltp:ftruncate04_64
FAILED | ltp:fanotify06
FAILED | ltp:recv01
FAILED | ltp:recvfrom01
FAILED | ltp:recvmsg01
FAILED | ltp:send01
FAILED | ltp:sendfile02
FAILED | ltp:sendfile02_64
FAILED | ltp:sendfile04
FAILED | ltp:sendfile04_64
FAILED | ltp:sendfile05
FAILED | ltp:sendfile05_64
FAILED | ltp:sendfile06
FAILED | ltp:sendfile06_64
FAILED | ltp:sendmsg01
FAILED | lava:wait_for_master_image_boot_msg
FAILED | lava:lava_test_shell
FAILED | lava:wait_for_master_image_boot_msg
Job: https://validation.linaro.org/scheduler/job/382325
Bundle: https://validation.linaro.org/dashboard/permalink/bundle/9d3a579ed6ccb89073bfdd73e112daf3078d355d/
Does the list show the missing test shells? Am I missing some options to show the missing results?
No missing options; it should just work when running the command like that.
milosz