Hi,
I'm trying to get the proper relationship between requested tests and results in LAVA v2. Here is example job: https://validation.linaro.org/scheduler/job/1109234 and results for this job: https://validation.linaro.org/results/1109234
How can I tell: - which result matches which test? - if there are multiple occurrences of the same test with different parameters, how to recognize the results?
In LAVA v1 the matching was a very arduous process. One had to download the job definition, look for lava-test-shell actions, pull the test definition yaml sources and match the yaml ID to the ID found in the result bundle. How does this work in v2?
milosz
On 9 September 2016 at 14:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
I'm trying to get the proper relationship between requested tests and results in LAVA v2. Here is example job: https://validation.linaro.org/scheduler/job/1109234 and results for this job: https://validation.linaro.org/results/1109234
I'll add notes to the docs for the 2016.11 release based on these responses and any feedback on this list.
How can I tell:
- which result matches which test?
There is a chevron in the test case detail page, directly after the test case name, which links to the point in the log where that test was reported. The same URL can also be determined in advance by knowing the job ID, the sequence of test definitions in the test job definition and the name of the test case.
Note: Unlike V1, the test shell does not wait until the test case entry has been created before moving on, so there can be an offset between the point linked from the result (where the test case entry was created) and the point slightly earlier in the log where the test itself was executed. This wait behaviour caused various bugs, as it needed to block at the shell read command, which gets confused by other messages on the serial console. The offset is the consequence of removing this behaviour.
So:
https://validation.linaro.org/results/1109234/1_lamp-test/mysql-show-databas... links to https://validation.linaro.org/scheduler/job/1109234#results_1_lamp-test_mysq...
i.e. once you know the URL of the result, you can generate the URL of the point in the test job log where that result was created.
In the log file this section looks like:
Received signal: <TESTCASE> TEST_CASE_ID=mysql-show-databases RESULT=pass
case: mysql-show-databases
definition: 1_lamp-test
result: pass
So, in this case, there was no offset.
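The URL scheme described above can be sketched in code. This is a minimal sketch: the anchor format (`#results_<suite>_<case>`) is inferred from the truncated example URLs in this thread, and the `server` parameter is just an illustrative argument, not a LAVA API.

```python
# Sketch of the result/log URL pattern described above.
# Assumption: the log anchor is "results_{suite}_{case}", inferred from
# the example links quoted in this thread.

def result_url(server, job_id, suite, case):
    """URL of the test case detail page."""
    return "https://%s/results/%s/%s/%s" % (server, job_id, suite, case)

def log_anchor_url(server, job_id, suite, case):
    """URL of the point in the job log where the result was created."""
    return "https://%s/scheduler/job/%s#results_%s_%s" % (server, job_id, suite, case)

print(log_anchor_url("validation.linaro.org", 1109234,
                     "1_lamp-test", "mysql-show-databases"))
```

Once the job ID, the definition name and the test case name are known, both URLs can be generated without any further server round-trips.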
There is a REST API using the name of the test definition and the name of the test case.
The name of the test definition comes from the test job definition:
- repository: http://git.linaro.org/lava-team/lava-functional-tests.git
  from: git
  path: lava-test-shell/single-node/singlenode03.yaml
  name: singlenode-advanced
The digit comes from the sequence of definitions in the list under the - test: action of the test job definition. So job 154736 on staging has three definitions in the test action: 0_env-dut-inline, 1_smoke_tests and 2_singlenode_advanced.
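The prefixing rule just described can be sketched as a one-liner; the definition names here are the ones from the staging job quoted above.

```python
# Minimal sketch: derive the prefixed result suite names from the order
# of the definitions listed under the test action of the job definition.

def suite_names(definition_names):
    return ["%d_%s" % (i, name) for i, name in enumerate(definition_names)]

# e.g. for the staging job described above:
print(suite_names(["env-dut-inline", "smoke_tests", "singlenode_advanced"]))
# → ['0_env-dut-inline', '1_smoke_tests', '2_singlenode_advanced']
```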
The test case name comes directly from the call to lava-test-case.
When an inline test definition does not report any test cases (by not calling lava-test-case anywhere, just doing setup or diagnostic calls to put data into the logs) then the metadata shows that test definition as "omitted" and it has no entry in the results table.
omitted.0.inline.name: env-dut-inline
In addition, each test job gets a set of LAVA results containing useful information like the commit hash of the test definition when it was cloned for this test job.
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
Multiple occurrences show up in the results table: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced (realpath_check occurs twice with separate results)
In this case, each occurred within the same test definition, so there is one page showing both results: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced/r... (as the test case name is the same, there can only be one URL).
Each one links to a different point in the job log.
In LAVA v1 the matching was a very arduous process.
That should be much simpler now. All the information is available to the test writer once the testjob ID is known.
https://validation.linaro.org/results/1109234
There are CSV and YAML download links for the complete set of results for the job (not just a list of bundles, as it was with V1). The export includes details of the test definition names.
https://validation.linaro.org/results/1109234/csv https://validation.linaro.org/results/1109234/yaml
From the test job submission, there are two entries in the test: definitions: list. The names in the test job submission are smoke-test and lamp-test. This ordering is retained consistently, so the results for smoke-test are 0_smoke-test:
https://validation.linaro.org/results/1109234/0_smoke-test
Now the CSV and YAML links are restricted to just the smoke-test definition.
https://validation.linaro.org/results/1109234/0_smoke-test/csv https://validation.linaro.org/results/1109234/0_smoke-test/yaml
In addition, private test jobs create results which are only visible for certain users. The REST API supports this using ?name=user.name&token=tokenstring in the URL at all export levels.
e.g. ....validation.linaro.org/results/1109234/0_smoke-test/yaml?name=user.name&token=tokenstring
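Putting the export levels and the token query together, the URLs can be assembled like this. A hedged sketch: the `?name=...&token=...` parameters follow the pattern quoted just above, and `server`, `user` and `token` are illustrative arguments.

```python
# Sketch of the CSV/YAML export URLs described above, with the optional
# ?name=...&token=... query for private jobs (pattern quoted in this thread).
from urllib.parse import urlencode

def export_url(server, job_id, fmt, suite=None, user=None, token=None):
    url = "https://%s/results/%s" % (server, job_id)
    if suite:
        url += "/" + suite          # restrict to one test definition
    url += "/" + fmt                # "csv" or "yaml"
    if user and token:
        url += "?" + urlencode({"name": user, "token": token})
    return url

print(export_url("validation.linaro.org", 1109234, "yaml", suite="0_smoke-test"))
```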
One had to download the job definition, look for lava-test-shell actions, pull the test definition yaml sources and match the yaml ID to the ID found in the result bundle. How does this work in v2?
See also the metadata section:
https://staging.validation.linaro.org/results/154736
test.2.name: singlenode-advanced
There are CSV and YAML download links for the results and YAML download links for the metadata.
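The dotted metadata keys shown above can be split into (kind, index, field) for easy grouping. This is only a sketch: the key layout is inferred from the two examples quoted in this thread ("test.2.name" and "omitted.0.inline.name").

```python
# Sketch: group per-definition metadata entries by (kind, index).
# Assumption: keys look like "<kind>.<index>.<field...>", as in the
# "test.2.name" and "omitted.0.inline.name" examples above.

def parse_metadata(metadata):
    entries = {}
    for key, value in metadata.items():
        parts = key.split(".")
        if len(parts) < 3 or not parts[1].isdigit():
            continue  # not a per-definition entry
        kind, index, field = parts[0], int(parts[1]), ".".join(parts[2:])
        entries.setdefault((kind, index), {})[field] = value
    return entries

meta = {
    "test.2.name": "singlenode-advanced",
    "omitted.0.inline.name": "env-dut-inline",
}
print(parse_metadata(meta))
```

Grouping this way makes it straightforward to see which definitions reported results ("test") and which were omitted ("omitted").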
BTW: one limitation with the metadata representation is that it is not possible from the git clone URL, path and filename to consistently build a full URL to the file itself without specialist knowledge of exactly which git WWW frontend is in use for which git clone URL. So LAVA can turn git://git.linaro.org/qa/test-definitions.git into http://git.linaro.org/qa/test-definitions.git but LAVA cannot reliably combine that with ubuntu/smoke-tests-basic.yaml to get a path like https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/ubuntu/smoke-tests... because the "blob" "HEAD:" elements are not knowable from the URL or the git clone operation. git.linaro.org has one frontend, github.com has another and git.debian.org has yet another.
On 12 September 2016 at 08:55, Neil Williams neil.williams@linaro.org wrote:
On 9 September 2016 at 14:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
I'm trying to get the proper relationship between requested tests and results in LAVA v2. Here is example job: https://validation.linaro.org/scheduler/job/1109234 and results for this job: https://validation.linaro.org/results/1109234
I'll add notes to the docs for the 2016.11 release based on these responses and any feedback on this list.
How can I tell:
- which result matches which test?
There is a chevron in the test case detail page, directly after the test case name, which links to the point in the log where that test was reported. The same URL can also be determined in advance by knowing the job ID, the sequence of test definitions in the test job definition and the name of the test case.
The chevron seems to always point to #bottom of log file.
Note: Unlike V1, the test shell does not wait until the test case entry has been created before moving on, so there can be an offset between the point linked from the result (where the test case entry was created) and the point slightly earlier in the log where the test itself was executed. This wait behaviour caused various bugs, as it needed to block at the shell read command, which gets confused by other messages on the serial console. The offset is the consequence of removing this behaviour.
So:
https://validation.linaro.org/results/1109234/1_lamp-test/mysql-show-databas... links to https://validation.linaro.org/scheduler/job/1109234#results_1_lamp-test_mysq...
i.e. once you know the URL of the result, you can generate the URL of the point in the test job log where that result was created.
In the log file this section looks like:
Received signal: <TESTCASE> TEST_CASE_ID=mysql-show-databases RESULT=pass
case: mysql-show-databases
definition: 1_lamp-test
result: pass
So, in this case, there was no offset.
There is a REST API using the name of the test definition and the name of the test case.
My question was about API. Manually it's possible to do the matching even in v1.
The name of the test definition comes from the test job definition:
- repository: http://git.linaro.org/lava-team/lava-functional-tests.git
  from: git
  path: lava-test-shell/single-node/singlenode03.yaml
  name: singlenode-advanced
The digit comes from the sequence of definitions in the list under the - test: action of the test job definition. So job 154736 on staging has three definitions in the test action: 0_env-dut-inline, 1_smoke_tests and 2_singlenode_advanced.
OK. So when I download the job definition and test results I should get the match by order of appearance. Is 'lava' always present in the results?
The test case name comes directly from the call to lava-test-case.
When an inline test definition does not report any test cases (by not calling lava-test-case anywhere, just doing setup or diagnostic calls to put data into the logs) then the metadata shows that test definition as "omitted" and it has no entry in the results table.
omitted.0.inline.name: env-dut-inline
lava-test-case calls are not that interesting yet, as for example the test can return a different number of results based on the parameters passed.
In addition, each test job gets a set of LAVA results containing useful information like the commit hash of the test definition when it was cloned for this test job.
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
Multiple occurrences show up in the results table: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced (realpath_check occurs twice with separate results)
The question was about multiple occurrences of the same test definition. For example we use subsets of LTP. So I would like to test:
- LTP - syscalls
- LTP - math
As I wrote above the test cases will be different, so they're not that interesting.
In this case, each occurred within the same test definition, so there is one page showing both results: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced/r... (as the test case name is the same, there can only be one URL).
Each one links to a different point in the job log.
In LAVA v1 the matching was a very arduous process.
That should be much simpler now. All the information is available to the test writer once the testjob ID is known.
https://validation.linaro.org/results/1109234
There are CSV and YAML download links for the complete set of results for the job (not just a list of bundles, as it was with V1). The export includes details of the test definition names.
https://validation.linaro.org/results/1109234/csv https://validation.linaro.org/results/1109234/yaml
This looks very good, but I'm missing parameters. Smoke test has a defined parameter which is by default set to 'false'. How do I match the results from two subsequent executions of the smoke test - first with 'false', second with 'true' set as the parameter? If the params are not present anywhere in the result I still have to download the git repo with the tests to learn what the default was :(
Example of such jobs: https://validation.linaro.org/results/1107487 (not the best as the names are different) https://validation.linaro.org/scheduler/job/1113188/definition (job failed, so no results, but I'm trying to get this working)
From the test job submission, there are two entries in the test: definitions: list. The names in the test job submission are smoke-test and lamp-test. This ordering is retained consistently, so the results for smoke-test are 0_smoke-test:
https://validation.linaro.org/results/1109234/0_smoke-test
Now the CSV and YAML links are restricted to just the smoke-test definition.
https://validation.linaro.org/results/1109234/0_smoke-test/csv https://validation.linaro.org/results/1109234/0_smoke-test/yaml
In addition, private test jobs create results which are only visible for certain users. The REST API supports this using ?name=user.name&token=tokenstring in the URL at all export levels.
e.g. ....validation.linaro.org/results/1109234/0_smoke-test/yaml?name=user.name&token=tokenstring
One had to download the job definition, look for lava-test-shell actions, pull the test definition yaml sources and match yaml ID and to ID found in result bundle. How does this work in v2?
See also the metadata section:
https://staging.validation.linaro.org/results/154736
test.2.name: singlenode-advanced
There are CSV and YAML download links for the results and YAML download links for the metadata.
BTW: one limitation with the metadata representation is that it is not possible from the git clone URL, path and filename to consistently build a full URL to the file itself without specialist knowledge of exactly which git WWW frontend is in use for which git clone URL. So LAVA can turn git://git.linaro.org/qa/test-definitions.git into http://git.linaro.org/qa/test-definitions.git but LAVA cannot reliably combine that with ubuntu/smoke-tests-basic.yaml to get a path like https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/ubuntu/smoke-tests... because the "blob" "HEAD:" elements are not knowable from the URL or the git clone operation. git.linaro.org has one frontend, github.com has another and git.debian.org has yet another.
That is OK. I don't think it's reasonable to expect that such feature would work for all git frontends.
milosz
--
Neil Williams
neil.williams@linaro.org http://www.linux.codehelp.co.uk/
On 12 September 2016 at 10:32, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On 12 September 2016 at 08:55, Neil Williams neil.williams@linaro.org wrote:
On 9 September 2016 at 14:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
I'm trying to get the proper relationship between requested tests and results in LAVA v2. Here is example job: https://validation.linaro.org/scheduler/job/1109234 and results for this job: https://validation.linaro.org/results/1109234
I'll add notes to the docs for the 2016.11 release based on these responses and any feedback on this list.
How can I tell:
- which result matches which test?
There is a chevron in the test case detail page, directly after the test case name, which links to the point in the log where that test was reported. The same URL can also be determined in advance by knowing the job ID, the sequence of test definitions in the test job definition and the name of the test case.
The chevron seems to always point to #bottom of log file.
That's the double chevron >> on the same line as the Job ID.
Below that, there is the test case name and a single chevron.
https://validation.linaro.org/results/1109234/1_lamp-test/mysql-show-databas...
mysql-show-databases >
Suggestions on making this clearer are welcome...
The URL can also be assembled from the data available in the results, allowing parsers to go directly to that position in the log file.
Note: Unlike V1, the test shell does not wait until the test case entry has been created before moving on, so there can be an offset between the point linked from the result (where the test case entry was created) and the point slightly earlier in the log where the test itself was executed. This wait behaviour caused various bugs, as it needed to block at the shell read command, which gets confused by other messages on the serial console. The offset is the consequence of removing this behaviour.
So:
https://validation.linaro.org/results/1109234/1_lamp-test/mysql-show-databas... links to https://validation.linaro.org/scheduler/job/1109234#results_1_lamp-test_mysq...
i.e. once you know the URL of the result, you can generate the URL of the point in the test job log where that result was created.
In the log file this section looks like:
Received signal: <TESTCASE> TEST_CASE_ID=mysql-show-databases RESULT=pass
case: mysql-show-databases
definition: 1_lamp-test
result: pass
So, in this case, there was no offset.
There is a REST API using the name of the test definition and the name of the test case.
My question was about API. Manually it's possible to do the matching even in v1.
I'm not sure what else you want from a REST API other than having all of the data available to build the URL immediately after completion, without needing to do round-trip lookups to find hashes or other generated strings. A single call to the results for a completed testjob provides all the information you need to build URLs for all test cases including the links to the position within the log file for each test case. There is no "matching" required in V2 and no round-trips back to the server with more API calls. One call gets all the data but the job of a REST API is not to build those URLs for you, it's to provide enough information to predict those URLs in a single call. Are you looking for an API call which returns all the URLs pre-assembled?
The name of the test definition comes from the test job definition:
- repository: http://git.linaro.org/lava-team/lava-functional-tests.git
  from: git
  path: lava-test-shell/single-node/singlenode03.yaml
  name: singlenode-advanced
The digit comes from the sequence of definitions in the list under the - test: action of the test job definition. So job 154736 on staging has three definitions in the test action: 0_env-dut-inline, 1_smoke_tests and 2_singlenode_advanced.
OK. So when I download the job definition and test results I should get the match by order of appearance. Is 'lava' always present in the results?
Yes.
The test case name comes directly from the call to lava-test-case.
When an inline test definition does not report any test cases (by not calling lava-test-case anywhere, just doing setup or diagnostic calls to put data into the logs) then the metadata shows that test definition as "omitted" and it has no entry in the results table.
omitted.0.inline.name: env-dut-inline
lava-test-case calls are not that interesting yet, as for example the test can return a different number of results based on the parameters passed.
However, lava-test-case can also be used to report results for things which are "hidden" within the scripts in the remote git repo. It is also the test-case which provides the link into the position in the job log file.
In addition, each test job gets a set of LAVA results containing useful information like the commit hash of the test definition when it was cloned for this test job.
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
Multiple occurrences show up in the results table: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced (realpath_check occurs twice with separate results)
The question was about multiple occurrences of the same test definition.
Will occur as discrete entries in the results - prefixed with the order.
1_smoke_tests, 2_smoke_tests, etc.
For example we use subsets of LTP. So I would like to test:
- LTP - syscalls
- LTP - math
As I wrote above the test cases will be different, so they're not that interesting.
That is where test-set is useful. I'll be writing up more documentation on that today.
lava-test-set start syscalls
lava-test-case syscalls ...
lava-test-set stop syscalls
lava-test-set start math
lava-test-case math ...
lava-test-set stop math
This adds a set around those test cases by adding the test set to the URL.
/results/JOB_ID/2_smoke-tests/syscalls/syscall_one_test
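The result path with an optional test set can be sketched as below; `JOB_ID` is left as a placeholder, matching the example path above.

```python
# Sketch of the result path described above: the test set, when used,
# is inserted between the suite name and the test case name.

def case_path(job_id, suite, case, test_set=None):
    parts = ["", "results", str(job_id), suite]
    if test_set:
        parts.append(test_set)
    parts.append(case)
    return "/".join(parts)

print(case_path("JOB_ID", "2_smoke-tests", "syscall_one_test",
                test_set="syscalls"))
# → /results/JOB_ID/2_smoke-tests/syscalls/syscall_one_test
```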
In this case, each occurred within the same test definition, so there is one page showing both results: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced/r... (as the test case name is the same, there can only be one URL).
Each one links to a different point in the job log.
In LAVA v1 the matching was a very arduous process.
That should be much simpler now. All the information is available to the test writer once the testjob ID is known.
https://validation.linaro.org/results/1109234
There are CSV and YAML download links for the complete set of results for the job (not just a list of bundles, as it was with V1). The export includes details of the test definition names.
https://validation.linaro.org/results/1109234/csv https://validation.linaro.org/results/1109234/yaml
This looks very good, but I'm missing parameters. Smoke test has a defined parameter which is by default set to 'false'. How do I match the results from two subsequent executions of the smoke test - first with 'false', second with 'true' set as the parameter? If the params are not present anywhere in the result I still have to download the git repo with the tests to learn what the default was :(
Example of such jobs: https://validation.linaro.org/results/1107487 (not the best as the names are different) https://validation.linaro.org/scheduler/job/1113188/definition (job failed, so no results, but I'm trying to get this working)
That needs to be declared to LAVA via the test suite name or a test-set or via the test case names. LAVA cannot introspect into your remote git repo any more easily than you can.
So if the default isn't clear, add a lava-test-case which tests that the default is what you expect - smoke-test-default-true: fail.
On 12 September 2016 at 11:37, Neil Williams neil.williams@linaro.org wrote:
On 12 September 2016 at 10:32, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On 12 September 2016 at 08:55, Neil Williams neil.williams@linaro.org wrote:
On 9 September 2016 at 14:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
I'm trying to get the proper relationship between requested tests and results in LAVA v2. Here is example job: https://validation.linaro.org/scheduler/job/1109234 and results for this job: https://validation.linaro.org/results/1109234
I'll add notes to the docs for the 2016.11 release based on these responses and any feedback on this list.
How can I tell:
- which result matches which test?
There is a chevron in the test case detail page, directly after the test case name, which links to the point in the log where that test was reported. The same URL can also be determined in advance by knowing the job ID, the sequence of test definitions in the test job definition and the name of the test case.
The chevron seems to always point to #bottom of log file.
That's the double chevron >> on the same line as the Job ID.
Below that, there is the test case name and a single chevron.
https://validation.linaro.org/results/1109234/1_lamp-test/mysql-show-databas...
mysql-show-databases >
Suggestions on making this clearer are welcome...
OK, I was looking at the wrong chevron :) This is OK once one knows how to use it.
The URL can also be assembled from the data available in the results, allowing parsers to go directly to that position in the log file.
Note: Unlike V1, the test shell does not wait until the test case entry has been created before moving on, so there can be an offset between the point linked from the result (where the test case entry was created) and the point slightly earlier in the log where the test itself was executed. This wait behaviour caused various bugs, as it needed to block at the shell read command, which gets confused by other messages on the serial console. The offset is the consequence of removing this behaviour.
So:
https://validation.linaro.org/results/1109234/1_lamp-test/mysql-show-databas... links to https://validation.linaro.org/scheduler/job/1109234#results_1_lamp-test_mysq...
i.e. once you know the URL of the result, you can generate the URL of the point in the test job log where that result was created.
In the log file this section looks like:
Received signal: <TESTCASE> TEST_CASE_ID=mysql-show-databases RESULT=pass
case: mysql-show-databases
definition: 1_lamp-test
result: pass
So, in this case, there was no offset.
There is a REST API using the name of the test definition and the name of the test case.
My question was about API. Manually it's possible to do the matching even in v1.
I'm not sure what else you want from a REST API other than having all of the data available to build the URL immediately after completion, without needing to do round-trip lookups to find hashes or other generated strings. A single call to the results for a completed testjob provides all the information you need to build URLs for all test cases including the links to the position within the log file for each test case. There is no "matching" required in V2 and no round-trips back to the server with more API calls. One call gets all the data but the job of a REST API is not to build those URLs for you, it's to provide enough information to predict those URLs in a single call. Are you looking for an API call which returns all the URLs pre-assembled?
I don't need URLs at all. All I need is to know which test results come from which 'tests' in the job definition and if there is anything missing. The important part is to know when some test screws something up and produces no results. What will I have in the 'LAVA Results' then? Will the metadata present such a test as 'omitted'? It's also important to know which results come from which parametrized tests (when more than one parameter is present).
The name of the test definition comes from the test job definition:
- repository: http://git.linaro.org/lava-team/lava-functional-tests.git
  from: git
  path: lava-test-shell/single-node/singlenode03.yaml
  name: singlenode-advanced
The digit comes from the sequence of definitions in the list under the - test: action of the test job definition. So job 154736 on staging has three definitions in the test action: 0_env-dut-inline, 1_smoke_tests and 2_singlenode_advanced.
OK. So when I download the job definition and test results I should get the match by order of appearance. Is 'lava' always present in the results?
Yes.
The test case name comes directly from the call to lava-test-case.
When an inline test definition does not report any test cases (by not calling lava-test-case anywhere, just doing setup or diagnostic calls to put data into the logs) then the metadata shows that test definition as "omitted" and it has no entry in the results table.
omitted.0.inline.name: env-dut-inline
lava-test-case calls are not that interesting yet, as for example the test can return a different number of results based on the parameters passed.
However, lava-test-case can also be used to report results for things which are "hidden" within the scripts in the remote git repo. It is also the test-case which provides the link into the position in the job log file.
This approach ties tests to LAVA, which I don't like, as users requested the ability to run tests 'standalone'. So anything that takes the test in the direction of being 'LAVA specific' can't be used.
In addition, each test job gets a set of LAVA results containing useful information like the commit hash of the test definition when it was cloned for this test job.
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
Multiple occurrences show up in the results table: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced (realpath_check occurs twice with separate results)
The question was about multiple occurrences of the same test definition.
Will occur as discrete entries in the results - prefixed with the order.
1_smoke_tests, 2_smoke_tests, etc.
For example we use subsets of LTP. So I would like to test:
- LTP - syscalls
- LTP - math
As I wrote above the test cases will be different, so they're not that interesting.
That is where test-set is useful. I'll be writing up more documentation on that today.
lava-test-set start syscalls
lava-test-case syscalls ...
lava-test-set stop syscalls
lava-test-set start math
lava-test-case math ...
lava-test-set stop math
This adds a set around those test cases by adding the test set to the URL.
/results/JOB_ID/2_smoke-tests/syscalls/syscall_one_test
This approach ties the test to LAVA, which is a 'no go' from my point of view. Besides that, there are other params which are important to know (see CTS: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/android/cts-host.y... or hackbench: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/ubuntu/hackbench.y...).
[cut]
Example of such jobs: https://validation.linaro.org/results/1107487 (not the best as the names are different) https://validation.linaro.org/scheduler/job/1113188/definition (job failed, so no results, but I'm trying to get this working)
That needs to be declared to LAVA via the test suite name or a test-set or via the test case names. LAVA cannot introspect into your remote git repo any more easily than you can.
Hmm, this approach implies there is only one parameter. How do I know if there is more than one?
So if the default isn't clear, add a lava-test-case which tests that the default is what you expect - smoke-test-default-true: fail.
This looks like a regression from v1, which reported all params in the result bundles (both defaults and those set in the job definition).
milosz
On 12 September 2016 at 12:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On 12 September 2016 at 11:37, Neil Williams neil.williams@linaro.org wrote:
On 12 September 2016 at 10:32, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On 12 September 2016 at 08:55, Neil Williams neil.williams@linaro.org wrote:
On 9 September 2016 at 14:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
I'm trying to get the proper relationship between requested tests and results in LAVA v2. Here is example job: https://validation.linaro.org/scheduler/job/1109234 and results for this job: https://validation.linaro.org/results/1109234
I'll add notes to the docs for the 2016.11 release based on these responses and any feedback on this list.
How can I tell:
- which result matches which test?
There is a chevron in the test case detail page, directly after the test case name, which links to the point in the log where that test was reported. The same URL can also be determined in advance by knowing the job ID, the sequence of test definitions in the test job definition and the name of the test case.
The chevron seems to always point to #bottom of log file.
That's the double chevron >> on the same line as the Job ID.
Below that, there is the test case name and a single chevron.
https://validation.linaro.org/results/1109234/1_lamp-test/mysql-show-databas...
mysql-show-databases >
Suggestions on making this clearer are welcome...
OK, I was looking at the wrong chevron :) This is OK once one knows how to use it.
The URL can also be assembled from the data available in the results, allowing parsers to go directly to that position in the log file.
Note: Unlike V1, the test shell does not wait until the test case entry has been created before moving on, so there can be an offset between the point linked from the result (where the test case entry was created) and the point slightly earlier in the log where the test itself was executed. This wait behaviour caused various bugs, as it needed to block at the shell read command, which gets confused by other messages on the serial console. The offset is the consequence of removing this behaviour.
So:
https://validation.linaro.org/results/1109234/1_lamp-test/mysql-show-databas... links to https://validation.linaro.org/scheduler/job/1109234#results_1_lamp-test_mysq...
i.e. once you know the URL of the result, you can generate the URL of the point in the test job log where that result was created.
In the log file this section looks like:
Received signal: <TESTCASE> TEST_CASE_ID=mysql-show-databases RESULT=pass
case: mysql-show-databases
definition: 1_lamp-test
result: pass
So, in this case, there was no offset.
There is a REST API using the name of the test definition and the name of the test case.
My question was about API. Manually it's possible to do the matching even in v1.
I'm not sure what else you want from a REST API other than having all of the data available to build the URL immediately after completion, without needing to do round-trip lookups to find hashes or other generated strings. A single call to the results for a completed testjob provides all the information you need to build URLs for all test cases including the links to the position within the log file for each test case. There is no "matching" required in V2 and no round-trips back to the server with more API calls. One call gets all the data but the job of a REST API is not to build those URLs for you, it's to provide enough information to predict those URLs in a single call. Are you looking for an API call which returns all the URLs pre-assembled?
I don't need URLs at all. All I need is to know which test results come from which 'tests' in job definition
Test suites come from the job definition, following the name specified by the job definition.
and if there is anything missing.
There is a specific calculation for this on the Results page for the test job. This checks that all test-suites defined in the test definition have provided results - that is why inline definitions show as omitted if no lava-test-case is used.
The important part is to know that some test screws something up and produces no results.
That is outside the control of LAVA *except* in the case where the test runner itself fails and thereby stops a subsequent test suite from executing. So if 1_smoke-tests falls over in a heap such that the job terminates or times out, then the job will be Incomplete. If the test-runner exits early, this will also be picked up as a test runner failure: "lava-test-runner exited with an error".
We need to be careful with terminology here.
test-suite - maps to the test definition in your git repo or the inline definition. If a test suite fails to execute, LAVA will report that.
test-set and test-case - individual lines within a test definition. If any of these fail, there is *nothing* LAVA can do about the rest of the test sets or test cases in that test definition. The reason is simple - lava-test-case can be called from custom scripts in your git repo and those scripts can easily call lava-test-case in loops. Therefore, there is nothing LAVA can do to anticipate whether a test definition is going to call lava-test-case 5 times or 100. Either could be deemed correct, either could be wrong.
In such situations, if the test writer *knows* that this call in their test definition needs to call lava-test-case N times and should not call it N-1 or N+1 times, then the test definition needs to do that calculation itself (in a custom script) and make an explicit test case for that. These custom scripts massively improve the ability of the test writers to run these tests in standalone mode without LAVA. The script does all the work and then, when it is time to report on the work, it can check to see if it needs to output the results for LAVA or for the console or something else.
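A minimal sketch of that kind of explicit count check (everything here is illustrative: the suite output is faked with printf so the sketch is self-contained, and the real lava-test-case call is only shown in a comment):

```shell
#!/bin/sh
# Hypothetical wrapper: the test writer knows the suite should emit
# exactly EXPECTED result lines, so the wrapper checks the count and
# reports that check as an explicit test case of its own.
EXPECTED=3
# Stand-in for running the real suite: fake one RESULT: line per case.
printf 'RESULT: one pass\nRESULT: two pass\nRESULT: three pass\n' > suite.log
ACTUAL=$(grep -c '^RESULT:' suite.log)
if [ "$ACTUAL" -eq "$EXPECTED" ]; then COUNT=pass; else COUNT=fail; fi
# Inside LAVA this would be: lava-test-case suite-case-count --result "$COUNT"
echo "suite-case-count: $COUNT"
```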
The contents of the Lava Test Shell Definition are essentially hidden from LAVA until that content triggers lava-test-case (or lava-test-set) or exits. LAVA cannot introspect inside the Test Shell Definition as this goes down a rabbit hole of endless permutations depending on kernel config, device behaviour and remote source code (like LTP itself).
What will I have in the 'LAVA Results' then? Will the metadata present such test as 'omitted'? It's also important to know which results come from which parametrized tests (when more than 1 parameter is present).
That can only be done by the test writer. When parameters are used, the purpose or meaning of those parameters needs to be declared into the results of that test job if anyone is later to be able to identify that meaning from those results. This can be done with explicit results and/or with specific names of test cases. It doesn't matter whether those results appear in LAVA or on the console when running the test in standalone mode - the parsing and calculation still needs to be done by the standalone script to express the meaning and/or purpose of whichever parameters may be of interest.
The name of the test definition comes from the test job definition:
- repository: http://git.linaro.org/lava-team/lava-functional-tests.git
  from: git
  path: lava-test-shell/single-node/singlenode03.yaml
  name: singlenode-advanced
The digit comes from the sequence of definitions in the list in the - test: action of the test job definition. So job 154736 on staging has three definitions to the test action, 0_env-dut-inline, 1_smoke_tests and 2_singlenode_advanced.
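So a consumer can rebuild the expected suite names from list order alone; a throwaway sketch (the definition names are the ones mentioned for the staging job above, hyphenation assumed):

```shell
#!/bin/sh
# Reproduce the result-suite names from the order of the definitions
# in the test action of the job definition: each suite is named
# <position>_<definition name>.
: > suites.txt
i=0
for name in env-dut-inline smoke-tests singlenode-advanced; do
    echo "${i}_${name}" >> suites.txt
    i=$((i + 1))
done
cat suites.txt
```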
OK. So when I download the job definition and test results I should get the match by order of appearance. Is 'lava' always present in the results?
Yes.
The test case name comes directly from the call to lava-test-case.
When an inline test definition does not report any test cases (by not calling lava-test-case anywhere, just doing setup or diagnostic calls to put data into the logs) then the metadata shows that test definition as "omitted" and it has no entry in the results table.
omitted.0.inline.name: env-dut-inline
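That makes missing suites detectable from the metadata alone. A rough sketch, with the metadata fragment faked inline and the omitted.<n>.inline.name key layout assumed from the example above:

```shell
#!/bin/sh
# Fake a fragment of job metadata so the sketch runs standalone.
cat > metadata.txt <<'EOF'
test.0.definition.name: env-dut-inline
omitted.0.inline.name: env-dut-inline
EOF
# Any key under "omitted." names a definition that produced no results.
grep '^omitted\.' metadata.txt | awk '{print "no results from: " $2}'
```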
lava-test-case calls are not that interesting yet as, for example, the test can return a different number of results based on the parameters passed.
However, lava-test-case can also be used to report results for things which are "hidden" within the scripts in the remote git repo. It is also the test-case which provides the link into the position in the job log file.
This approach ties tests to LAVA which I don't like as users requested to have ability to run tests 'standalone'. So anything that takes the test in the direction of being 'LAVA specific' can't be used.
Then a custom script is going to be needed which does the parsing - including checking whether the correct number of tests have been run - and then produces data which is reported to LAVA (or something else). I do this for the django unit tests with lava-server. We have a single script, ./ci-run, that everyone runs to execute the tests locally (and in gerrit). The custom script sets up the environment, then runs ./ci-run | tee filename and then parses the file. Once it has done the checks it needs, it loops through its own data. At that point, it can check for lava-test-case in $PATH and use that or dump to some other output or call something else. This is what provides the standalone support, with LAVA picking up the results once the standalone script has done the execution and parsing of the data.
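That run-the-work, tee-the-output, then decide-how-to-report flow might look roughly like this (the ./ci-run stage is stood in by a printf so the sketch is self-contained; the $PATH check is the part that keeps it standalone-friendly):

```shell
#!/bin/sh
# Stand-in for "./ci-run | tee filename": fake the run output instead.
printf 'Ran 42 tests ... OK\n' | tee ci-run.log
# Parsing stage: derive a result from the captured output.
if grep -q 'OK$' ci-run.log; then
    RESULT=pass
else
    RESULT=fail
fi
# Reporting stage: use the LAVA helper only when it is on $PATH,
# otherwise print a plain result for standalone runs.
if command -v lava-test-case >/dev/null 2>&1; then
    lava-test-case ci-run --result "$RESULT"
else
    echo "ci-run: $RESULT"
fi
```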
In addition, each test job gets a set of LAVA results containing useful information like the commit hash of the test definition when it was cloned for this test job.
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
Multiple occurrences show up in the results table: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced (realpath_check occurs twice with separate results)
The question was about multiple occurrences of the same test definition.
Will occur as discrete entries in the results - prefixed with the order.
1_smoke_tests 2_smoke_tests etc.
For example we use subsets of LTP. So I would like to test:
- LTP - syscalls
- LTP - math
As I wrote above the test cases will be different, so they're not that interesting.
That is where test-set is useful. I'll be writing up more documentation on that today.
lava-test-set start syscalls
lava-test-case syscalls ...
lava-test-set stop syscalls
lava-test-set start math
lava-test-case math ...
lava-test-set stop math
This adds a set around those test cases by adding the test set to the URL.
/results/JOB_ID/2_smoke-tests/syscalls/syscall_one_test
This approach ties the test to LAVA which is a 'no go' from my point of view. Besides that, there are other params which are important to know (see CTS: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/android/cts-host.y... or hackbench: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/ubuntu/hackbench.y...).
I disagree. You can do all the processing in the standalone script and still call lava-test-set at particular points if that proves to be useful, as part of the reporting stage at the end of the standalone script. The script still needs to output something sensible when run outside LAVA, so it still needs to do all the same checks and parsing. When it chooses to report to LAVA, it is able to use lava-test-set if that is useful or simply put everything through lava-test-case.
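A sketch of such a reporting stage (set and case names mirror the LTP example; lava-test-set and lava-test-case are the real LAVA helpers, everything else here is illustrative):

```shell
#!/bin/sh
# report <set> <case> <result>: wrap each case in a test set when the
# LAVA helpers exist, fall back to plain output when run standalone.
report() {
    if command -v lava-test-set >/dev/null 2>&1; then
        lava-test-set start "$1"
        lava-test-case "$2" --result "$3"
        lava-test-set stop "$1"
    else
        echo "$1/$2: $3"
    fi
}
# Results would normally come from the parsing stage of the script.
report syscalls open01 pass
report math float_bessel pass
```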
Analysis of the data within LAVA will need the relevant elements to be reported to LAVA - we cannot go into the Lava Test Shell Definition and *guess* how many times lava-test-case is meant to be called.
[cut]
Example of such jobs: https://validation.linaro.org/results/1107487 (not the best as the names are different) https://validation.linaro.org/scheduler/job/1113188/definition (job failed, so no results, but I'm trying to get this working)
That needs to be declared to LAVA via the test suite name or a test-set or via the test case names. LAVA cannot introspect into your remote git repo any more easily than you can.
Hmm, this approach implies there is only 1 parameter. How do I know if there is more than one?
That is up to the standalone script that does the parsing.
So if the default isn't clear, add a lava-test-case which tests that the default is what you expect - smoke-test-default-true: fail.
This looks like a regression from v1, which reported all params in result bundles (default and set in the job definition).
It's not a regression, it is a different method. Not all test writers always need / want all parameters to be reported. V2 provides the control that V1 just presumed to take whether the writer wanted that or not.
Doing these tests to support standalone testing means putting nearly all the logic of the test into a single script which can be run both standalone and in LAVA - the script simply has options which determine how that work is reported.
On 12 September 2016 at 12:56, Neil Williams neil.williams@linaro.org wrote:
[cut]
I'm not sure what else you want from a REST API other than having all of the data available to build the URL immediately after completion, without needing to do round-trip lookups to find hashes or other generated strings. A single call to the results for a completed testjob provides all the information you need to build URLs for all test cases including the links to the position within the log file for each test case. There is no "matching" required in V2 and no round-trips back to the server with more API calls. One call gets all the data but the job of a REST API is not to build those URLs for you, it's to provide enough information to predict those URLs in a single call. Are you looking for an API call which returns all the URLs pre-assembled?
I don't need URLs at all. All I need is to know which test results come from which 'tests' in job definition
Test suites come from the job definition, following the name specified by the job definition.
and if there is anything missing.
There is a specific calculation for this on the Results page for the test job. This checks that all test-suites defined in the test definition have provided results - that is why inline definitions show as omitted if no lava-test-case is used.
The important part is to know that some test screws something up and produces no results.
That is outside the control of LAVA *except* in the case where the test runner itself fails and thereby stops a subsequent test suite from executing. So if 1_smoke-tests falls over in a heap such that the job terminates or times out, then the job will be Incomplete. If the test-runner exits early, this will also be picked up as a test runner failure: "lava-test-runner exited with an error".
We need to be careful with terminology here.
test-suite - maps to the test definition in your git repo or the inline definition. If a test suite fails to execute, LAVA will report that.
test-set and test-case - individual lines within a test definition. If
These are the ones I don't care about. I don't need to know whether there are N or N+1 test-cases in the test-suite. All I'm asking is whether the test-suite (translated from inline test in job definition) reported a result or not. So let's stop worrying about test-cases as they're irrelevant in this context.
[cut]
lava-test-case calls are not that interesting yet as, for example, the test can return a different number of results based on the parameters passed.
However, lava-test-case can also be used to report results for things which are "hidden" within the scripts in the remote git repo. It is also the test-case which provides the link into the position in the job log file.
This approach ties tests to LAVA which I don't like as users requested to have ability to run tests 'standalone'. So anything that takes the test in the direction of being 'LAVA specific' can't be used.
Then a custom script is going to be needed which does the parsing - including checking whether the correct number of tests have been run - and then produces data which is reported to LAVA (or something else). I do this for the django unit tests with lava-server. We have a single script, ./ci-run, that everyone runs to execute the tests locally (and in gerrit). The custom script sets up the environment, then runs ./ci-run | tee filename and then parses the file. Once it has done the checks it needs, it loops through its own data. At that point, it can check for lava-test-case in $PATH and use that or dump to some other output or call something else. This is what provides the standalone support, with LAVA picking up the results once the standalone script has done the execution and parsing of the data.
I'm doing a similar thing now.
In addition, each test job gets a set of LAVA results containing useful information like the commit hash of the test definition when it was cloned for this test job.
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
Multiple occurrences show up in the results table: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced (realpath_check occurs twice with separate results)
The question was about multiple occurrences of the same test definition.
Will occur as discrete entries in the results - prefixed with the order.
1_smoke_tests 2_smoke_tests etc.
For example we use subsets of LTP. So I would like to test:
- LTP - syscalls
- LTP - math
As I wrote above the test cases will be different, so they're not that interesting.
That is where test-set is useful. I'll be writing up more documentation on that today.
lava-test-set start syscalls
lava-test-case syscalls ...
lava-test-set stop syscalls
lava-test-set start math
lava-test-case math ...
lava-test-set stop math
This adds a set around those test cases by adding the test set to the URL.
/results/JOB_ID/2_smoke-tests/syscalls/syscall_one_test
This approach ties the test to LAVA which is a 'no go' from my point of view. Besides that, there are other params which are important to know (see CTS: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/android/cts-host.y... or hackbench: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/ubuntu/hackbench.y...).
I disagree. You can do all the processing in the standalone script and still call lava-test-set at particular points if that proves to be useful, as part of the reporting stage at the end of the standalone script. The script still needs to output something sensible when run outside LAVA, so it still needs to do all the same checks and parsing. When it chooses to report to LAVA, it is able to use lava-test-set if that is useful or simply put everything through lava-test-case.
That is very impractical. It means the result format for different tests is different, which increases the effort on test maintenance and increases complexity. In most cases lava-test-set is not useful. When a test is run standalone, the parameters are known. So why not report the parameters in the results (metadata) from LAVA? They are just a part of the job definition, like the git URL or test definition path. These are reported, so why not the parameters (at least when they're set)?
Analysis of the data within LAVA will need the relevant elements to be reported to LAVA - we cannot go into the Lava Test Shell Definition and *guess* how many times lava-test-case is meant to be called.
As I mentioned calling lava-test-case is out of context here. I just need to match inline test definition to test-suite in results. I actually got my example running and IMHO there is a bug somewhere: https://validation.linaro.org/scheduler/job/1113516
There are 2 test-suites in job definition but only one is reported in the results (note the parameters are different for test-suite instances).
[cut]
Example of such jobs: https://validation.linaro.org/results/1107487 (not the best as the names are different) https://validation.linaro.org/scheduler/job/1113188/definition (job failed, so no results, but I'm trying to get this working)
That needs to be declared to LAVA via the test suite name or a test-set or via the test case names. LAVA cannot introspect into your remote git repo any more easily than you can.
Hmm, this approach implies there is only 1 parameter. How do I know if there is more than one?
That is up to the standalone script that does the parsing.
How do I do the parsing if LAVA doesn't give me the parameters back?
So if the default isn't clear, add a lava-test-case which tests that the default is what you expect - smoke-test-default-true: fail.
This looks like a regression from v1, which reported all params in result bundles (default and set in the job definition).
It's not a regression, it is a different method. Not all test writers always need / want all parameters to be reported. V2 provides the control that V1 just presumed to take whether the writer wanted that or not.
Doing these tests to support standalone testing means putting nearly all the logic of the test into a single script which can be run both standalone and in LAVA - the script simply has options which determine how that work is reported.
I disagree here. The parameters come from job definition and should be reported (in metadata?). Other metadata is reported, like git repo URL or test path. Even the default parameters are available to LAVA before the execution as the test shell scripts are prepared by LAVA. So there isn't much difference from v1 in this process.
milosz
On 12 September 2016 at 13:53, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On 12 September 2016 at 12:56, Neil Williams neil.williams@linaro.org wrote:
[cut]
I'm not sure what else you want from a REST API other than having all of the data available to build the URL immediately after completion, without needing to do round-trip lookups to find hashes or other generated strings. A single call to the results for a completed testjob provides all the information you need to build URLs for all test cases including the links to the position within the log file for each test case. There is no "matching" required in V2 and no round-trips back to the server with more API calls. One call gets all the data but the job of a REST API is not to build those URLs for you, it's to provide enough information to predict those URLs in a single call. Are you looking for an API call which returns all the URLs pre-assembled?
I don't need URLs at all. All I need is to know which test results come from which 'tests' in job definition
Test suites come from the job definition, following the name specified by the job definition.
and if there is anything missing.
There is a specific calculation for this on the Results page for the test job. This checks that all test-suites defined in the test definition have provided results - that is why inline definitions show as omitted if no lava-test-case is used.
The important part is to know that some test screws something up and produces no results.
That is outside the control of LAVA *except* in the case where the test runner itself fails and thereby stops a subsequent test suite from executing. So if 1_smoke-tests falls over in a heap such that the job terminates or times out, then the job will be Incomplete. If the test-runner exits early, this will also be picked up as a test runner failure: "lava-test-runner exited with an error".
We need to be careful with terminology here.
test-suite - maps to the test definition in your git repo or the inline definition. If a test suite fails to execute, LAVA will report that.
test-set and test-case - individual lines within a test definition. If
These are the ones I don't care about. I don't need to know whether there are N or N+1 test-cases in the test-suite. All I'm asking is whether the test-suite (translated from inline test in job definition) reported a result or not. So let's stop worrying about test-cases as they're irrelevant in this context.
[cut]
lava-test-case calls are not that interesting yet as, for example, the test can return a different number of results based on the parameters passed.
However, lava-test-case can also be used to report results for things which are "hidden" within the scripts in the remote git repo. It is also the test-case which provides the link into the position in the job log file.
This approach ties tests to LAVA which I don't like as users requested to have ability to run tests 'standalone'. So anything that takes the test in the direction of being 'LAVA specific' can't be used.
Then a custom script is going to be needed which does the parsing - including checking whether the correct number of tests have been run - and then produces data which is reported to LAVA (or something else). I do this for the django unit tests with lava-server. We have a single script, ./ci-run, that everyone runs to execute the tests locally (and in gerrit). The custom script sets up the environment, then runs ./ci-run | tee filename and then parses the file. Once it has done the checks it needs, it loops through its own data. At that point, it can check for lava-test-case in $PATH and use that or dump to some other output or call something else. This is what provides the standalone support, with LAVA picking up the results once the standalone script has done the execution and parsing of the data.
I'm doing a similar thing now.
In addition, each test job gets a set of LAVA results containing useful information like the commit hash of the test definition when it was cloned for this test job.
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
Multiple occurrences show up in the results table: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced (realpath_check occurs twice with separate results)
The question was about multiple occurrences of the same test definition.
Will occur as discrete entries in the results - prefixed with the order.
1_smoke_tests 2_smoke_tests etc.
For example we use subsets of LTP. So I would like to test:
- LTP - syscalls
- LTP - math
As I wrote above the test cases will be different, so they're not that interesting.
That is where test-set is useful. I'll be writing up more documentation on that today.
lava-test-set start syscalls
lava-test-case syscalls ...
lava-test-set stop syscalls
lava-test-set start math
lava-test-case math ...
lava-test-set stop math
This adds a set around those test cases by adding the test set to the URL.
/results/JOB_ID/2_smoke-tests/syscalls/syscall_one_test
This approach ties the test to LAVA which is a 'no go' from my point of view. Besides that, there are other params which are important to know (see CTS: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/android/cts-host.y... or hackbench: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/ubuntu/hackbench.y...).
I disagree. You can do all the processing in the standalone script and still call lava-test-set at particular points if that proves to be useful, as part of the reporting stage at the end of the standalone script. The script still needs to output something sensible when run outside LAVA, so it still needs to do all the same checks and parsing. When it chooses to report to LAVA, it is able to use lava-test-set if that is useful or simply put everything through lava-test-case.
That is very impractical. It means the result format for different tests is different, which increases the effort on test maintenance and increases complexity. In most cases lava-test-set is not useful. When a test is run standalone, the parameters are known. So why not report the parameters in the results (metadata) from LAVA? They are just a part of the job definition, like the git URL or test definition path. These are reported, so why not the parameters (at least when they're set)?
Yes, ok. I can see that. Parameters are part of the test job definition so should be handled by the lava results when preparing the overlay. I've taken that up as https://projects.linaro.org/browse/LAVA-747
Analysis of the data within LAVA will need the relevant elements to be reported to LAVA - we cannot go into the Lava Test Shell Definition and *guess* how many times lava-test-case is meant to be called.
As I mentioned calling lava-test-case is out of context here. I just need to match inline test definition to test-suite in results. I actually got my example running and IMHO there is a bug somewhere:
Yes, there is a problem there. https://projects.linaro.org/browse/LAVA-748
Sorry for any confusion with this, we'll get fixes for those into 2016.11.
https://validation.linaro.org/scheduler/job/1113516
There are 2 test-suites in job definition but only one is reported in the results (note the parameters are different for test-suite instances).
[cut]
Example of such jobs: https://validation.linaro.org/results/1107487 (not the best as the names are different) https://validation.linaro.org/scheduler/job/1113188/definition (job failed, so no results, but I'm trying to get this working)
That needs to be declared to LAVA via the test suite name or a test-set or via the test case names. LAVA cannot introspect into your remote git repo any more easily than you can.
Hmm, this approach implies there is only 1 parameter. How do I know if there is more than one?
That is up to the standalone script that does the parsing.
How do I do the parsing if LAVA doesn't give me the parameters back?
So if the default isn't clear, add a lava-test-case which tests that the default is what you expect - smoke-test-default-true: fail.
This looks like a regression from v1, which reported all params in result bundles (default and set in the job definition).
It's not a regression, it is a different method. Not all test writers always need / want all parameters to be reported. V2 provides the control that V1 just presumed to take whether the writer wanted that or not.
Doing these tests to support standalone testing means putting nearly all the logic of the test into a single script which can be run both standalone and in LAVA - the script simply has options which determine how that work is reported.
I disagree here. The parameters come from job definition and should be reported (in metadata?). Other metadata is reported, like git repo URL or test path. Even the default parameters are available to LAVA before the execution as the test shell scripts are prepared by LAVA. So there isn't much difference from v1 in this process.
milosz
On 12 September 2016 at 14:16, Neil Williams neil.williams@linaro.org wrote:
On 12 September 2016 at 13:53, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On 12 September 2016 at 12:56, Neil Williams neil.williams@linaro.org wrote:
[cut]
I'm not sure what else you want from a REST API other than having all of the data available to build the URL immediately after completion, without needing to do round-trip lookups to find hashes or other generated strings. A single call to the results for a completed testjob provides all the information you need to build URLs for all test cases including the links to the position within the log file for each test case. There is no "matching" required in V2 and no round-trips back to the server with more API calls. One call gets all the data but the job of a REST API is not to build those URLs for you, it's to provide enough information to predict those URLs in a single call. Are you looking for an API call which returns all the URLs pre-assembled?
I don't need URLs at all. All I need is to know which test results come from which 'tests' in job definition
Test suites come from the job definition, following the name specified by the job definition.
and if there is anything missing.
There is a specific calculation for this on the Results page for the test job. This checks that all test-suites defined in the test definition have provided results - that is why inline definitions show as omitted if no lava-test-case is used.
The important part is to know that some test screws something up and produces no results.
That is outside the control of LAVA *except* in the case where the test runner itself fails and thereby stops a subsequent test suite from executing. So if 1_smoke-tests falls over in a heap such that the job terminates or times out, then the job will be Incomplete. If the test-runner exits early, this will also be picked up as a test runner failure: "lava-test-runner exited with an error".
We need to be careful with terminology here.
test-suite - maps to the test definition in your git repo or the inline definition. If a test suite fails to execute, LAVA will report that.
test-set and test-case - individual lines within a test definition. If
These are the ones I don't care about. I don't need to know whether there are N or N+1 test-cases in the test-suite. All I'm asking is whether the test-suite (translated from inline test in job definition) reported a result or not. So let's stop worrying about test-cases as they're irrelevant in this context.
[cut]
lava-test-case calls are not that interesting yet as, for example, the test can return a different number of results based on the parameters passed.
However, lava-test-case can also be used to report results for things which are "hidden" within the scripts in the remote git repo. It is also the test-case which provides the link into the position in the job log file.
This approach ties tests to LAVA which I don't like as users requested to have ability to run tests 'standalone'. So anything that takes the test in the direction of being 'LAVA specific' can't be used.
Then a custom script is going to be needed which does the parsing - including checking whether the correct number of tests have been run - and then produces data which is reported to LAVA (or something else). I do this for the django unit tests with lava-server. We have a single script, ./ci-run, that everyone runs to execute the tests locally (and in gerrit). The custom script sets up the environment, then runs ./ci-run | tee filename and then parses the file. Once it has done the checks it needs, it loops through its own data. At that point, it can check for lava-test-case in $PATH and use that or dump to some other output or call something else. This is what provides the standalone support, with LAVA picking up the results once the standalone script has done the execution and parsing of the data.
I'm doing a similar thing now.
In addition, each test job gets a set of LAVA results containing useful information like the commit hash of the test definition when it was cloned for this test job.
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
Multiple occurrences show up in the results table: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced (realpath_check occurs twice with separate results)
The question was about multiple occurrences of the same test definition.
They will occur as discrete entries in the results, prefixed with the order:

1_smoke_tests
2_smoke_tests
etc.
For example we use subsets of LTP. So I would like to test:
- LTP - syscalls
- LTP - math
As I wrote above, the test cases will be different, so they're not that interesting.
That is where test-set is useful. I'll be writing up more documentation on that today.
lava-test-set start syscalls
lava-test-case syscalls ...
lava-test-set stop syscalls
lava-test-set start math
lava-test-case math ...
lava-test-set stop math
This adds a set around those test cases by adding the test set name to the URL.
/results/JOB_ID/2_smoke-tests/syscalls/syscall_one_test
This approach ties the test to LAVA, which is a 'no go' from my point of view. Besides that, there are other params which are important to know (see CTS: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/android/cts-host.y... or hackbench: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/ubuntu/hackbench.y...).
I disagree. You can do all the processing in the standalone script and still call lava-test-set at particular points if that proves to be useful, as part of the reporting stage at the end of the standalone script. The script still needs to output something sensible when run outside LAVA, so it still needs to do all the same checks and parsing. When it chooses to report to LAVA, it is able to use lava-test-set if that is useful or simply put everything through lava-test-case.
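As a rough sketch of that reporting stage (the in_lava and emit helpers, the set name "syscalls", and the case names are all my invention for illustration):

```shell
# Sketch: the end of a standalone script, which groups its results
# into a LAVA test set only when running inside LAVA.
in_lava() { command -v lava-test-case >/dev/null 2>&1; }

emit() {
    # $1 = set name, $2 = case name, $3 = result
    if in_lava; then
        lava-test-case "$2" --result "$3"
    else
        echo "$1/$2: $3"
    fi
}

if in_lava; then lava-test-set start syscalls; fi
emit syscalls open_test pass
emit syscalls read_test pass
if in_lava; then lava-test-set stop syscalls; fi
```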
That is very impractical. It means the result format differs between tests, which increases the test maintenance effort and the complexity. In most cases lava-test-set is not useful. When the test is run standalone, the parameters are known. So why not report the parameters in the results (metadata) from LAVA? They are just a part of the job definition, like the git URL or the test definition path. Those are reported, so why not the parameters (at least when they're set)?
Yes, ok. I can see that. Parameters are part of the test job definition so should be handled by the lava results when preparing the overlay. I've taken that up as https://projects.linaro.org/browse/LAVA-747
Great, thanks!
Analysis of the data within LAVA will need the relevant elements to be reported to LAVA - we cannot go into the Lava Test Shell Definition and *guess* how many times lava-test-case is meant to be called.
As I mentioned, calling lava-test-case is out of context here. I just need to match the inline test definition to the test-suite in the results. I actually got my example running and IMHO there is a bug somewhere:
Yes, there is a problem there. https://projects.linaro.org/browse/LAVA-748
I think one solution is that the name needs to be unique. This way multiple instances of the same test will differ at least in name, which would be OK from my point of view. I tried that approach and it works: https://validation.linaro.org/scheduler/job/1113611/definition
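For illustration, the unique-name approach in a job definition might look something like this. This is only a sketch: the repository URL, paths, and the TST_CMDFILES parameter are placeholders, not copied from the jobs above.

```yaml
# Two instances of the same test definition, distinguished by name
# so each gets its own test-suite in the results.
- test:
    definitions:
    - repository: https://git.example.org/test-definitions.git
      from: git
      path: automated/linux/ltp/ltp.yaml
      name: ltp-syscalls
      parameters:
        TST_CMDFILES: syscalls
    - repository: https://git.example.org/test-definitions.git
      from: git
      path: automated/linux/ltp/ltp.yaml
      name: ltp-math
      parameters:
        TST_CMDFILES: math
```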
milosz
Sorry for any confusion with this, we'll get fixes for those into 2016.11.
https://validation.linaro.org/scheduler/job/1113516
There are 2 test-suites in the job definition but only one is reported in the results (note the parameters are different for the test-suite instances).
[cut]
Examples of such jobs: https://validation.linaro.org/results/1107487 (not the best example, as the names are different) https://validation.linaro.org/scheduler/job/1113188/definition (the job failed, so there are no results, but I'm trying to get this working)
That needs to be declared to LAVA via the test suite name or a test-set or via the test case names. LAVA cannot introspect into your remote git repo any more easily than you can.
Hmm, this approach implies there is only one parameter. How do I know if there are more than one?
That is up to the standalone script that does the parsing.
How do I do the parsing if LAVA doesn't give me the parameters back?
So if the default isn't clear, add a lava-test-case which tests that the default is what you expect - smoke-test-default-true: fail.
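That check might look like the following. A sketch only: the parameter name SMOKE_TEST_DEFAULT is invented for illustration, as is the fallback echo for standalone runs.

```shell
# Sketch: verify a parameter default and report it as its own test case.
# SMOKE_TEST_DEFAULT is a hypothetical parameter exported into the
# test shell environment; unset means we expect the default "true".
result=pass
[ "${SMOKE_TEST_DEFAULT:-true}" = "true" ] || result=fail

if command -v lava-test-case >/dev/null 2>&1; then
    lava-test-case smoke-test-default-true --result "$result"
else
    echo "smoke-test-default-true: $result"
fi
```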
This looks like a regression from v1, which reported all params in result bundles (both the defaults and those set in the job definition).
It's not a regression, it is a different method. Not all test writers always need / want all parameters to be reported. V2 provides the control that V1 just presumed to take whether the writer wanted that or not.
Doing these tests to support standalone testing means putting nearly all the logic of the test into a single script which can be run both standalone and in LAVA - the script simply has options which determine how that work is reported.
I disagree here. The parameters come from the job definition and should be reported (in metadata?). Other metadata is reported, like the git repo URL or the test path. Even the default parameters are available to LAVA before execution, as the test shell scripts are prepared by LAVA. So there isn't much difference from v1 in this process.
milosz
--
Neil Williams
neil.williams@linaro.org http://www.linux.codehelp.co.uk/