Hi,
I launched util-linux ptest using automated/linux/ptest/ptest.yaml from https://git.linaro.org/qa/test-definitions.git and received the following results: https://pastebin.com/nj9PYQzE
As you can see, some tests failed. However, the util-linux case is marked as passed. It looks like ptest.py only analyzes the return code of the ptest-runner -d <ptest_dir> <ptest_name> command, and since ptest-runner itself finishes correctly, the exit code is 0. Therefore every ptest is always marked as passed, and users never know when some of its tests fail.
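For illustration, the current check_ptest logic amounts to roughly the following (a simplified sketch, not the exact script):

import subprocess

def check_ptest(ptest_dir, ptest_name, output_log):
    # The status is derived only from the ptest-runner exit code.
    status = 'pass'
    try:
        subprocess.check_call('ptest-runner -d %s %s' % (ptest_dir, ptest_name),
                              shell=True, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError:
        status = 'fail'
    # The PASS:/FAIL: lines printed by the tests themselves are never inspected,
    # so a run with failing tests can still be recorded as 'pass'.
    with open(output_log, 'a+') as f:
        f.write("%s %s\n" % (ptest_name, status))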
Maybe it is worth analyzing each test?
Best regards, Alex
On Wed, 22 Aug 2018 at 11:39, Oleksandr Terentiev oterenti@cisco.com wrote:
Talking about each ptest: the result comes from the ptest script in the OE recipe [1]. By convention, if the OE ptest returns 0 it means pass, so this needs to be fixed in the OE ptest [2].
Regarding the LAVA ptest.py script, I mark the run as succeeded if there is no critical error in ptest-runner, and we have a QA-reports tool to analyse pass/fail results in detail for every ptest executed [3].
[1] http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/util-li... [2] https://wiki.yoctoproject.org/wiki/Ptest [3] https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/...
Regards, Anibal
Thank you Anibal for the fast response
On 22.08.18 19:50, Anibal Limon wrote:
Talking about each ptest: the result comes from the ptest script in the OE recipe [1]. By convention, if the OE ptest returns 0 it means pass, so this needs to be fixed in the OE ptest [2].
I’ve read https://wiki.yoctoproject.org/wiki/Ptest carefully a few more times. There are prescriptions about the output format, but I didn’t find any mention of return-code processing or a reference to the convention you mentioned in your answer.
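For reference, the per-test output format prescribed there looks roughly like this (the test names below are only placeholders):

PASS: some-test-name
FAIL: another-test-name
SKIP: skipped-test-name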
I looked through some OE run-ptest scripts. I suspect they don’t verify whether any of their tests failed, and exit with 0 even if all of their tests failed.
http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/util-li...
http://git.openembedded.org/openembedded-core/tree/meta/recipes-support/attr...
http://git.openembedded.org/openembedded-core/tree/meta/recipes-support/attr...
http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/dbus/db...
http://git.openembedded.org/openembedded-core/tree/meta/recipes-devtools/e2f...
http://git.openembedded.org/openembedded-core/tree/meta/recipes-extended/gaw...
Regarding the LAVA ptest.py script, I mark the run as succeeded if there is no critical error in ptest-runner, and we have a QA-reports tool to analyse pass/fail results in detail for every ptest executed [3].
I’ve heard about the QA-reports tool but never used it before, so maybe I missed something. At https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/... I see that all ptests passed. Still, in the log https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/... I found 54 failed tests and wasn’t able to find a report that indicates those failures.
Is there such a report? It would be really useful to know that some tests failed.
Thanks
Best regards, Alex
On Thu, 23 Aug 2018 at 05:54, Oleksandr Terentiev oterenti@cisco.com wrote:
I’ve read https://wiki.yoctoproject.org/wiki/Ptest carefully a few more times. There are prescriptions about the output format, but I didn’t find any mention of return-code processing or a reference to the convention you mentioned in your answer.
I looked through some OE run-ptest scripts. I suspect they don’t verify whether any of their tests failed, and exit with 0 even if all of their tests failed.
http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/util-li...
http://git.openembedded.org/openembedded-core/tree/meta/recipes-support/attr...
http://git.openembedded.org/openembedded-core/tree/meta/recipes-support/attr...
http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/dbus/db...
http://git.openembedded.org/openembedded-core/tree/meta/recipes-devtools/e2f...
http://git.openembedded.org/openembedded-core/tree/meta/recipes-extended/gaw...
Right, it looks like the OEQA test case was updated since I worked on it [1], so now it takes into account the pass/fail of every ptest. So ptest.py needs to implement the same behavior.
Regards, Anibal
[1] http://git.openembedded.org/openembedded-core/tree/meta/lib/oeqa/runtime/cas...
Hi,
I would like to discuss the following question. As was said, we now have to analyze the pass/fail of every ptest. From my point of view there are a couple of options.
First, we can parse the output and mark the ptest as failed if even a single failed test is found.
Second, we can analyze each test within a package and record the corresponding results. I see a few issues here. First of all, there will be a large number of test results, as each ptest can run lots of tests. Another thing is that we need to somehow separate test results between particular packages. As an option, we can use the lava-test-set feature for that: each test within a ptest will be recorded as a test case, and the package name will appear as the test set.
What do you think about that?
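To illustrate the first option, a minimal sketch of the parsing (the function name and details are only illustrative, not the actual ptest.py code):

import re
import subprocess

def run_ptest_and_check(ptest_dir, ptest_name):
    # Capture the full ptest-runner output instead of looking only at its exit code.
    proc = subprocess.Popen('ptest-runner -d %s %s' % (ptest_dir, ptest_name),
                            shell=True, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    output = proc.communicate()[0].decode('utf-8', 'replace')
    failed = re.findall(r'^FAIL:(.+)$', output, re.MULTILINE)
    # The whole ptest fails if even one FAIL: line is present,
    # or if ptest-runner itself returned a non-zero exit code.
    status = 'fail' if failed or proc.returncode != 0 else 'pass'
    return status, failed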
Regards, Alex
Hi,
I was on vacation, that's the reason for the slow response. Comments below.
On Fri, 28 Sep 2018 at 03:59, Oleksandr Terentiev oterenti@cisco.com wrote:
Hi,
I would like to discuss the following question. As was said, we now have to analyze the pass/fail of every ptest. From my point of view there are a couple of options.
First, we can parse the output and mark the ptest as failed if even a single failed test is found.
Right, I would choose this approach. Changes need to be made in the ptest LAVA script [1] so it fails when any of the package tests failed, like [2].
Second, we can analyze each test within a package and record the corresponding results. I see a few issues here. First of all, there will be a large number of test results, as each ptest can run lots of tests.
Right, but that needs to be handled in every OE ptest script. I mean, if you want to fail when a certain test inside a ptest fails, that needs to be done in OE.
Another thing is that we need to somehow separate test results between particular packages.
Currently we use QA reports to see only the package-level test result; for now, if you want to be able to look at the details, you need to read the ptest log.
As an option, we can use the lava-test-set feature for that: each test within a ptest will be recorded as a test case, and the package name will appear as the test set.
What do you think about that?
Maybe lava-test-set is an option.
I would go with the first option and then start to review/implement the idea of using lava-test-set.
Regards, Anibal
[1] https://git.linaro.org/qa/test-definitions.git/tree/automated/linux/ptest/pt... [2] http://git.openembedded.org/openembedded-core/tree/meta/lib/oeqa/runtime/cas...
Hi Anibal,
In our project we need to analyze the total number of passed and failed tests for each package. To distinguish packages we use the lava-test-set feature. In order to implement that, I modified the ptest.py and send-to-lava.sh scripts. Could you please look at the patch and share your opinion? Maybe this code can be added to git.linaro.org/qa/test-definitions.git?
Best regards, Alex
automated/linux/ptest: Analyze each test in package tests
Currently ptest.py analyzes only the exit code of each package test to decide whether it passed or not. However, ptest-runner can return a success code even though some tests failed, so we need to parse the test output and analyze it.
It is also quite useful to see exactly which tests failed, so results are recorded for each individual test, and the lava-test-set feature is used to distinguish packages.
Signed-off-by: Oleksandr Terentiev <oterenti@cisco.com>
diff --git a/automated/linux/ptest/ptest.py b/automated/linux/ptest/ptest.py
index 13feb4d..a28d7f0 100755
--- a/automated/linux/ptest/ptest.py
+++ b/automated/linux/ptest/ptest.py
@@ -84,20 +84,60 @@ def filter_ptests(ptests, requested_ptests, exclude):

     return filter_ptests

+def parse_line(line):
+    test_status_list = {
+        'pass': re.compile("^PASS:(.+)"),
+        'fail': re.compile("^FAIL:(.+)"),
+        'skip': re.compile("^SKIP:(.+)")
+    }
+
+    for test_status, status_regex in test_status_list.items():
+        test_name = status_regex.search(line)
+        if test_name:
+            return [test_name.group(1), test_status]

-def check_ptest(ptest_dir, ptest_name, output_log):
-    status = 'pass'
+    return None

-    try:
-        output = subprocess.check_call('ptest-runner -d %s %s' %
-                                       (ptest_dir, ptest_name), shell=True,
-                                       stderr=subprocess.STDOUT)
-    except subprocess.CalledProcessError:
-        status = 'fail'
+def parse_ptest(log_file):
+    result = []

-    with open(output_log, 'a+') as f:
-        f.write("%s %s\n" % (ptest_name, status))
+    with open(log_file, 'r') as f:
+        for line in f:
+            result_tuple = parse_line(line)
+            if not result_tuple:
+                continue
+            print(result_tuple)
+            result.append(result_tuple)
+            continue

+    return result
+
+def run_command(command, log_file):
+    process = subprocess.Popen(command,
+                               shell=True,
+                               stdout=subprocess.PIPE,
+                               stderr=subprocess.STDOUT)
+    with open(log_file, 'w') as f:
+        while True:
+            output = process.stdout.readline()
+            if output == '' and process.poll() is not None:
+                break
+            if output:
+                print output.strip()
+                f.write("%s\n" % output.strip())
+    rc = process.poll()
+    return rc
+
+def check_ptest(ptest_dir, ptest_name, output_log):
+    log_name = os.path.join(os.getcwd(), '%s.log' % ptest_name)
+    status = run_command('ptest-runner -d %s %s' % (ptest_dir, ptest_name), log_name)
+
+    with open(output_log, 'a+') as f:
+        f.write("lava-test-set start %s\n" % ptest_name)
+        f.write("%s %s\n" % (ptest_name, "pass" if status == 0 else "fail"))
+        for test, test_status in parse_ptest(log_name):
+            f.write("%s %s\n" % (re.sub(r'[^\w-]', '', test), test_status))
+        f.write("lava-test-set stop %s\n" % ptest_name)

 def main():
     parser = argparse.ArgumentParser(description="LAVA/OE ptest script",
diff --git a/automated/utils/send-to-lava.sh b/automated/utils/send-to-lava.sh
index bf2a477..db4442c 100755
--- a/automated/utils/send-to-lava.sh
+++ b/automated/utils/send-to-lava.sh
@@ -4,6 +4,8 @@ RESULT_FILE="$1"

 which lava-test-case > /dev/null 2>&1
 lava_test_case="$?"
+which lava-test-set > /dev/null 2>&1
+lava_test_set="$?"

 if [ -f "${RESULT_FILE}" ]; then
     while read -r line; do
@@ -31,6 +33,18 @@ if [ -f "${RESULT_FILE}" ]; then
             else
                 echo "<TEST_CASE_ID=${test} RESULT=${result} MEASUREMENT=${measurement} UNITS=${units}>"
             fi
+        elif echo "${line}" | egrep -iq "^lava-test-set.*"; then
+            test_set_status="$(echo "${line}" | awk '{print $2}')"
+            test_set_name="$(echo "${line}" | awk '{print $3}')"
+            if [ "${lava_test_set}" -eq 0 ]; then
+                lava-test-set "${test_set_status}" "${test_set_name}"
+            else
+                if [ "${test_set_status}" == "start" ]; then
+                    echo "<LAVA_SIGNAL_TESTSET START ${test_set_name}>"
+                else
+                    echo "<LAVA_SIGNAL_TESTSET STOP>"
+                fi
+            fi
         fi
     done < "${RESULT_FILE}"
 else
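To show what this produces: the result file consumed by send-to-lava.sh would then contain blocks like the following (package and test names here are made up):

lava-test-set start util-linux
util-linux pass
first-test-name pass
second-test-name fail
lava-test-set stop util-linux

The package-level line still reflects the ptest-runner exit code, while the per-test lines carry the parsed PASS/FAIL/SKIP results. send-to-lava.sh then either calls lava-test-set start/stop or emits the corresponding LAVA_SIGNAL_TESTSET markers, and reports each inner line as a regular test case.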
On Tue, 27 Nov 2018 at 08:29, Oleksandr Terentiev oterenti@cisco.com wrote:
Hi Anibal,
In our project we need to analyze the total number of passed and failed tests for each package. To distinguish packages we use the lava-test-set feature. In order to implement that, I modified the ptest.py and send-to-lava.sh scripts. Could you please look at the patch and share your opinion? Maybe this code can be added to git.linaro.org/qa/test-definitions.git?
Hi Oleksandr,
The code looks good. Could you provide an example LAVA test run so we can see the actual results?
Regards, Anibal
Thanks for the reply. Is there any public LAVA instance where I can run the example test? Maybe on staging.validation.linaro.org (https://staging.validation.linaro.org/)? Can I be registered there, and how? Or maybe there is another approach to running it.
On Tue, 27 Nov 2018 at 12:22, Oleksandr Terentiev oterenti@cisco.com wrote:
Thanks for the reply. Is there any public LAVA instance where I can run the example test? Maybe on staging.validation.linaro.org? Can I be registered there, and how? Or maybe there is another approach to running it.
I can run the LAVA job definition if you share it with me.
Cheers, Anibal
On Tue, 27 Nov 2018 20:22:42 +0200 Oleksandr Terentiev oterenti@cisco.com wrote:
Thanks for the reply. Is there any public LAVA instance where I can run the example test? Maybe on staging.validation.linaro.org (https://staging.validation.linaro.org/)? Can I be registered there, and how?
Staging (and other instances hosted by Linaro) share the same authentication support as validation.linaro.org, which is described in the LAVA documentation.
https://staging.validation.linaro.org/static/docs/v2/first_steps.html#linaro...
If you do not have a Linaro LDAP account, you can register at https://register.linaro.org/.
You could use validation.linaro.org for your test.
Or maybe there is another approach to running it.
Once the script is available in a public git repository instead of only as a patch, it can be executed from any suitable LAVA test job. So push the script and associated support to a public git repository somewhere and post the URL here. Someone on this list can put that into a test job and run it. If you have a test job definition ready to go, put it alongside the script and test definition.
Any public git repo will be fine, it doesn't have to be the original one on git.linaro.org.
On 27.11.18 19:41, Anibal Limon wrote:
On Tue, 27 Nov 2018 at 08:29, Oleksandr Terentiev <oterenti@cisco.com mailto:oterenti@cisco.com> wrote:
Hi Abibal, In our project we need to analyze a total number of passed and failed tests for each packet. To distinguish packets we use lava-test-set feature. In order to implement that I modified ptest.py and
send-to-lava.sh scripts. Could you please look at the patch and express your opinion? Maybe this code can be added to git.linaro.org/qa/test-definitions.git http://git.linaro.org/qa/test-definitions.git ?
Hi Oleksandr,
The code looks good, can you have an example of the LAVA test run to see the actual results?
Regards, Anibal
Best regards, Alex automated/linux/ptest: Analyze each test in package tests Currently ptest.py analyze only exit code of each package test to decide if it passed or not. However, ptest-runner can return success code even though some tests failed. So we need to parse test output and analyze it. It also quite useful to see exactly which tests failed. So
results are recorded for each particular test, and lava-test-set feature is used to distinguish packages.
Signed-off-by: Oleksandr Terentiev <oterenti@cisco.com>

diff --git a/automated/linux/ptest/ptest.py b/automated/linux/ptest/ptest.py
index 13feb4d..a28d7f0 100755
--- a/automated/linux/ptest/ptest.py
+++ b/automated/linux/ptest/ptest.py
@@ -84,20 +84,60 @@ def filter_ptests(ptests, requested_ptests, exclude):
     return filter_ptests
+def parse_line(line):
+    test_status_list = {
+        'pass': re.compile("^PASS:(.+)"),
+        'fail': re.compile("^FAIL:(.+)"),
+        'skip': re.compile("^SKIP:(.+)")
+    }
+
+    for test_status, status_regex in test_status_list.items():
+        test_name = status_regex.search(line)
+        if test_name:
+            return [test_name.group(1), test_status]
-def check_ptest(ptest_dir, ptest_name, output_log):
-    status = 'pass'
+    return None
-    try:
-        output = subprocess.check_call('ptest-runner -d %s %s' %
-                                       (ptest_dir, ptest_name), shell=True,
-                                       stderr=subprocess.STDOUT)
-    except subprocess.CalledProcessError:
-        status = 'fail'
+def parse_ptest(log_file):
+    result = []
-    with open(output_log, 'a+') as f:
-        f.write("%s %s\n" % (ptest_name, status))
+    with open(log_file, 'r') as f:
+        for line in f:
+            result_tuple = parse_line(line)
+            if not result_tuple:
+                continue
+            print(result_tuple)
+            result.append(result_tuple)
+            continue
+    return result
+
+def run_command(command, log_file):
+    process = subprocess.Popen(command,
+                               shell=True,
+                               stdout=subprocess.PIPE,
+                               stderr=subprocess.STDOUT)
+    with open(log_file, 'w') as f:
+        while True:
+            output = process.stdout.readline()
+            if output == '' and process.poll() is not None:
+                break
+            if output:
+                print output.strip()
+                f.write("%s\n" % output.strip())
+    rc = process.poll()
+    return rc
+
+def check_ptest(ptest_dir, ptest_name, output_log):
+    log_name = os.path.join(os.getcwd(), '%s.log' % ptest_name)
+    status = run_command('ptest-runner -d %s %s' % (ptest_dir, ptest_name), log_name)
+
+    with open(output_log, 'a+') as f:
+        f.write("lava-test-set start %s\n" % ptest_name)
+        f.write("%s %s\n" % (ptest_name, "pass" if status == 0 else "fail"))
+        for test, test_status in parse_ptest(log_name):
+            f.write("%s %s\n" % (re.sub(r'[^\w-]', '', test), test_status))
+        f.write("lava-test-set stop %s\n" % ptest_name)
 def main():
     parser = argparse.ArgumentParser(description="LAVA/OE ptest script",
diff --git a/automated/utils/send-to-lava.sh b/automated/utils/send-to-lava.sh
index bf2a477..db4442c 100755
--- a/automated/utils/send-to-lava.sh
+++ b/automated/utils/send-to-lava.sh
@@ -4,6 +4,8 @@ RESULT_FILE="$1"

 which lava-test-case > /dev/null 2>&1
 lava_test_case="$?"
+which lava-test-set > /dev/null 2>&1
+lava_test_set="$?"

 if [ -f "${RESULT_FILE}" ]; then
     while read -r line; do
@@ -31,6 +33,18 @@ if [ -f "${RESULT_FILE}" ]; then
             else
                 echo "<TEST_CASE_ID=${test} RESULT=${result} MEASUREMENT=${measurement} UNITS=${units}>"
             fi
+        elif echo "${line}" | egrep -iq "^lava-test-set.*"; then
+            test_set_status="$(echo "${line}" | awk '{print $2}')"
+            test_set_name="$(echo "${line}" | awk '{print $3}')"
+            if [ "${lava_test_set}" -eq 0 ]; then
+                lava-test-set "${test_set_status}" "${test_set_name}"
+            else
+                if [ "${test_set_status}" == "start" ]; then
+                    echo "<LAVA_SIGNAL_TESTSET START ${test_set_name}>"
+                else
+                    echo "<LAVA_SIGNAL_TESTSET STOP>"
+                fi
+            fi
         fi
     done < "${RESULT_FILE}"
 else
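For illustration only, here is a small standalone sketch (not part of the patch) of the result-file lines the patched check_ptest() ends up appending for one package, using the same parse_line() classification; the package name "acl" and the individual test names are made up. send-to-lava.sh then turns the "lava-test-set start/stop" marker lines into lava-test-set calls (or <LAVA_SIGNAL_TESTSET ...> signals) and the remaining lines into ordinary test cases.

import re


def parse_line(line):
    # Same classification as in the patch: PASS:/FAIL:/SKIP: prefixes.
    test_status_list = {
        'pass': re.compile("^PASS:(.+)"),
        'fail': re.compile("^FAIL:(.+)"),
        'skip': re.compile("^SKIP:(.+)"),
    }
    for test_status, status_regex in test_status_list.items():
        test_name = status_regex.search(line)
        if test_name:
            return [test_name.group(1), test_status]
    return None


sample_log = [
    "PASS: test_basic_acl",   # hypothetical test names
    "FAIL: test_acl_mask",
    "SKIP: test_nfs_acl",
    "STOP: ptest-runner",     # non-matching lines are simply ignored
]

# Emulate the result-file lines check_ptest() writes for this package.
print("lava-test-set start acl")
print("acl pass")             # overall package result, taken from ptest-runner's exit code
for log_line in sample_log:
    parsed = parse_line(log_line)
    if parsed:
        name, status = parsed
        print("%s %s" % (re.sub(r'[^\w-]', '', name), status))
print("lava-test-set stop acl")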
On 01.10.18 17:09, Anibal Limon wrote:
Hi, I was on vacation, which is the reason for the slow response; comments below. On Fri, 28 Sep 2018 at 03:59, Oleksandr Terentiev <oterenti@cisco.com> wrote: Hi, I would like to discuss the following question. As was said, we now have to analyze pass/fail for every ptest. From my point of view there are a couple of options.
The first: we can parse the output and mark the ptest as failed if even a single failed test is found. Right, I would choose this approach; changes need to be made in the ptest LAVA script [1] so that it fails when any of the package tests fail, as in [2].
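For clarity, here is a minimal sketch of what that option-1 check could look like in ptest.py. This is only an illustration of the idea, not the code that was eventually merged, and the names (parsed_results, runner_rc) are hypothetical.

def ptest_overall_status(parsed_results, runner_rc):
    # Option 1 sketch: mark the whole ptest as failed if ptest-runner exited
    # non-zero or any individual test line reported FAIL, similar in spirit
    # to what the OEQA runtime ptest case referenced in [2] does.
    if runner_rc != 0:
        return 'fail'
    if any(status == 'fail' for _name, status in parsed_results):
        return 'fail'
    return 'pass'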
The second: we can analyze each test within a package and record the corresponding results. I see a few issues here. First of all, there will be a large number of test results, as each ptest can run lots of tests. Right, but that needs to be handled in every OE ptest script; I mean, if you want to fail when a certain test inside a ptest fails, that needs to be done in OE. Another thing is that we need to somehow separate test results between particular packages. Currently we use QA reports to see only the package test result; if you want to look at the details you need to read the ptest log.
As an option, we can use the lava-test-set feature for that: each test within a ptest will be recorded as a test case, and the package name will appear as a test set.
What do you think about that? Maybe lava-test-set is an option. I would do the 1st option first and then start to review/implement the idea of using the lava-test-set feature. Regards, Anibal
[1] https://git.linaro.org/qa/test-definitions.git/tree/automated/linux/ptest/ptest.py
[2] http://git.openembedded.org/openembedded-core/tree/meta/lib/oeqa/runtime/cases/ptest.py#n87
Regards, Alex
On 23.08.18 16:10, Anibal Limon wrote:
On Thu, 23 Aug 2018 at 05:54, Oleksandr Terentiev <oterenti@cisco.com> wrote: Thank you Anibal for the fast response On 22.08.18 19:50, Anibal Limon wrote:
On Wed, 22 Aug 2018 at 11:39, Oleksandr Terentiev <oterenti@cisco.com>
wrote:
Hi, I launched util-linux ptest using automated/linux/ptest/ptest.yaml from https://git.linaro.org/qa/test-definitions.git
and received the following results: https://pastebin.com/nj9PYQzE
As you can see some tests failed. However, case util-linux marked as passed. It looks like ptest.py only analyze
return code of ptest-runner -d <ptest_dir> <ptest_name> command. And since ptest-runner finishes correctly exit code is 0. Therefore all tests are always marked as passed, and users never know when some of the tests fail.
Maybe it worth to analyze each test? Talking about each ptest the result comes from the ptest script in the OE recipe [1], for convention if the OE ptest returns 0 means pass, so needs to be fixed in the OE ptest [2].
I’ve read https://wiki.yoctoproject.org/wiki/Ptest carefully a few times more. There are prescriptions about output format. But I didn’t find any mention
about return code processing or a reference to the convention you mentioned in the answer.
I looked through some OE run-ptest scripts. I suspect they don't verify if some of their tests failed, and exit with 0 even if all their tests failed.
http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/util-linux/util-linux/run-ptest
http://git.openembedded.org/openembedded-core/tree/meta/recipes-support/attr/acl/run-ptest
http://git.openembedded.org/openembedded-core/tree/meta/recipes-support/attr/files/run-ptest
http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/dbus/dbus/run-ptest
http://git.openembedded.org/openembedded-core/tree/meta/recipes-devtools/e2fsprogs/e2fsprogs/run-ptest
http://git.openembedded.org/openembedded-core/tree/meta/recipes-extended/gawk/gawk/run-ptest
Right, it looks like the OEQA test case was updated since I worked on it [1], so it now takes into account the pass/fail of every ptest. So ptest.py needs to implement the same behavior.
Regards, Anibal [1] http://git.openembedded.org/openembedded-core/tree/meta/lib/oeqa/runtime/cases/ptest.py#n80
Regarding the LAVA ptest.py script, I mark the run as succeed if there is no critical error in the ptest-runner and we have a QA-reports tool to analyse pass/fails in detail for every ptest executed [3].
I heard about the QA-reports tool but I've never used it before, so maybe I missed something. From https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/... I see all ptests passed. Still, in the log https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/... I found 54 failed tests and wasn't able to find a report which indicates those failures. Is there such a report? It would be really useful to know that some tests failed. Thanks
[1] http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/util-linux/util-linux/run-ptest [2] https://wiki.yoctoproject.org/wiki/Ptest [3] https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/1890442/ Regards, Anibal Best regards, Alex
Hi,
I wasn't able to sign up at https://register.linaro.org/ with the following message: "Linaro employees and assignees, and people from Member companies cannot use this form"
So I uploaded my changes to GitHub: https://github.com/oterenti/test-definitions/tree/ptest_modify
If https://validation.linaro.org/scheduler/job/1890442 is a good example of a ptest run, I'd suggest the following test action:
- test:
    namespace: dragonboard-820c
    name: qcomlt-ptest
    timeout:
      minutes: 160
    definitions:
      - repository: https://github.com/oterenti/test-definitions.git
        from: git
        path: automated/linux/ptest/ptest.yaml
        name: linux-ptest
        branch: ptest_modify
        params:
          EXCLUDE: bluez5 libxml2 parted python strace
Regards, Alex
On Wed, 28 Nov 2018 18:26:34 +0200 Oleksandr Terentiev oterenti@cisco.com wrote:
Hi,
I wasn't able to sign up at https://register.linaro.org/ with the following message: "Linaro employees and assignees, and people from Member companies cannot use this form"
In which case, you should be able to sign in to validation.linaro.org yourself using your Linaro LDAP details. If your linaro email address is oleksandr.terentiev@linaro.org, your LDAP username to use with LAVA is oleksandr.terentiev and then your LDAP password. Otherwise, contact Linaro support / your Tech Lead in Linaro.
Once logged in, you just need to ask an admin to give you submission rights. (A JIRA ticket is easiest: https://projects.linaro.org/secure/CreateIssue%21default.jspa and select LAB & System Software (LSS); remember to specify validation.linaro.org as the server you want to access.)
So I uploaded my changes to GitHub: https://github.com/oterenti/test-definitions/tree/ptest_modify
If https://validation.linaro.org/scheduler/job/1890442 is a good example of a ptest run, I'd suggest the following test action:
- test:
    namespace: dragonboard-820c
    name: qcomlt-ptest
    timeout:
      minutes: 160
    definitions:
      - repository: https://github.com/oterenti/test-definitions.git
        from: git
        path: automated/linux/ptest/ptest.yaml
        name: linux-ptest
        branch: ptest_modify
        params:
          EXCLUDE: bluez5 libxml2 parted python strace
I tried a couple of test jobs but the first failed trying to fastboot flash and another kernel panicked: https://lkft-staging.validation.linaro.org/scheduler/job/18795
See if this one completes - it looks hung to me: https://validation.linaro.org/scheduler/job/1899004
+ ./ptest.py -o ./result.txt -t -e bluez5 libxml2 parted python strace
b'1+0 records in'
b'1+0 records out'
b'1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405323 s, 259 MB/s'
b'mke2fs 1.43.8 (1-Jan-2018)'
b''
b'Filesystem too small for a journal'
[ 19.740484] sdd: sdd1 sdd2 sdd3
b'Makefile:20: ../include/builddefs: No such file or directory'
b"make: *** No rule to make target '../include/builddefs'."
b"make: Failed to remake makefile '../include/builddefs'."
Regards, Alex
Neil Williams codehelp@debian.org writes:
I tried a couple of test jobs but the first failed trying to fastboot flash and another kernel panicked: https://lkft-staging.validation.linaro.org/scheduler/job/18795
See if this one completes - it looks hung to me: https://validation.linaro.org/scheduler/job/1899004
That hang looks similar to the results I see. After the first ptest runs (acl), I see the "b'STOP: ptest-runner'" and then no more output[1], then a LAVA timeout.
I'm using the "ptest_modify" branch of the Oleksandr's test-definitions repository, and attempting to run a subset of the ptests[2].
FWIW, this is a Yocto build of Automotive Grade Linux (AGL) for the Raspberry Pi.
Kevin
[1] http://lava.baylibre.com:10080/scheduler/job/58182#bottom [2] http://lava.baylibre.com:10080/scheduler/job/58182/definition#defline89
Kevin Hilman khilman@baylibre.com writes:
That hang looks similar to the results I see. After the first ptest runs (acl), I see the "b'STOP: ptest-runner'" and then no more output[1], then a LAVA timeout.
I'm using the "ptest_modify" branch of the Oleksandr's test-definitions repository, and attempting to run a subset of the ptests[2].
And for reference, even trying to run a single ptest (acl)[1], I run into the same problem.
Kevin
[1] http://lava-baylibre.baylibre:10080/scheduler/job/58187/definition#defline89
On Wed, 28 Nov 2018 at 17:48, Kevin Hilman khilman@baylibre.com wrote:
Kevin Hilman khilman@baylibre.com writes:
That hang looks similar to the results I see. After the first ptest runs (acl), I see the "b'STOP: ptest-runner'" and then no more output[1], then a LAVA timeout.
I'm using the "ptest_modify" branch of the Oleksandr's test-definitions repository, and attempting to run a subset of the ptests[2].
And for reference, even trying to run a single ptest (acl)[1], I run into the same problem.
Same problem here gets stalled after ACL,
https://validation.linaro.org/scheduler/job/1899050
And this continues same run without changes,
https://validation.linaro.org/scheduler/job/1899052
Regards, Anibal
Kevin
[1] http://lava-baylibre.baylibre:10080/scheduler/job/58187/definition#defline89
Anibal Limon anibal.limon@linaro.org writes:
Same problem here gets stalled after ACL,
https://validation.linaro.org/scheduler/job/1899050
And this continues same run without changes,
Continues to run further, but still fails with
"Test error: lava-test-shell timed out after 6410 seconds"
Kevin
On 29.11.18 21:06, Kevin Hilman wrote:
Anibal Limon anibal.limon@linaro.org writes:
On Wed, 28 Nov 2018 at 17:48, Kevin Hilman khilman@baylibre.com wrote:
Kevin Hilman khilman@baylibre.com writes:
Neil Williams codehelp@debian.org writes:
On Wed, 28 Nov 2018 18:26:34 +0200 Oleksandr Terentiev oterenti@cisco.com wrote:
Hi,
I wasn't able to sign up at https://register.linaro.org/ with the following message: "Linaro employees and assignees, and people from Member companies cannot use this form"
In which case, you should be able to sign in to validation.linaro.org yourself using your Linaro LDAP details. If your linaro email address is oleksandr.terentiev@linaro.org, your LDAP username to use with LAVA is oleksandr.terentiev and then your LDAP password. Otherwise, contact Linaro support / your Tech Lead in Linaro.
I raised a ticket on Wednesday, but there are no updates yet: https://support.linaro.org/hc/en-us/requests/2412 So I'm not able to resubmit jobs so far.
Same problem here gets stalled after ACL,
https://validation.linaro.org/scheduler/job/1899050
And this continues same run without changes,
Continues to run further, but still fails with
"Test error: lava-test-shell timed out after 6410 seconds"
Kevin
From https://validation.linaro.org/scheduler/job/1899052 I see that my code didn't cause the problems after the ACL tests, as the job got stalled even with the old ptest.
Alex
Hi, It seems that I'm now able to provide an example of the LAVA test run so you can see the actual results: https://validation.linaro.org/scheduler/job/1900356 However, when we analyze each test, LAVA generates much larger output, and this causes a 502 Proxy Error, as there are 5586 tests. Anyway, you can see the results: https://validation.linaro.org/results/1900356 The final patch is here: https://github.com/oterenti/test-definitions/commit/652b14c7683e9ed2978da1aa...
Regards, Alex
Hi Alex,
I reviewed your patch and the results, and they look good; I will send a PR to the master repo.
Regards, Anibal
On Wed, 12 Dec 2018 at 12:24, Anibal Limon anibal.limon@linaro.org wrote:
Hi Alex,
I reviewed your patch and the results, and they look good; I will send a PR to the master repo.
The patch was integrated,
https://git.linaro.org/qa/test-definitions.git/commit/?id=03e82bcd3d5bb0b00b...
Regards, Anibal
Oleksandr Terentiev oterenti@cisco.com writes:
Hi Anibal,
In our project we need to analyze the total number of passed and failed tests for each package. To distinguish packages we use the lava-test-set feature. In order to implement that I modified the ptest.py and send-to-lava.sh scripts. Could you please look at the patch and share your opinion? Maybe this code can be added to git.linaro.org/qa/test-definitions.git?
Best regards, Alex
automated/linux/ptest: Analyze each test in package tests
Currently ptest.py analyzes only the exit code of each package test to decide whether it passed. However, ptest-runner can return a success code even though some tests failed, so we need to parse the test output and analyze it.
It is also quite useful to see exactly which tests failed, so results are recorded for each individual test, and the lava-test-set feature is used to distinguish packages.
Signed-off-by: Oleksandr Terentiev oterenti@cisco.com
I gave this a quick test and found a minor problem below...
+def run_command(command, log_file):
+    process = subprocess.Popen(command,
+                               shell=True,
+                               stdout=subprocess.PIPE,
+                               stderr=subprocess.STDOUT)
+    with open(log_file, 'w') as f:
+        while True:
+            output = process.stdout.readline()
+            if output == '' and process.poll() is not None:
+                break
+            if output:
+                print output.strip()
This causes a syntax error in python. Don't you need to wrap this in () ?
Kevin
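For reference, a rough Python 3-compatible sketch of the run_command() loop Kevin flagged above; this is not the code that was merged. It assumes process.stdout yields bytes under Python 3, so besides wrapping print in parentheses, the EOF check compares against b'' and the line is decoded before being written to the log.

import subprocess


def run_command(command, log_file):
    # Sketch only: Python 3 variant of the loop quoted above.
    process = subprocess.Popen(command,
                               shell=True,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT)
    with open(log_file, 'w') as f:
        while True:
            output = process.stdout.readline()
            if output == b'' and process.poll() is not None:
                break
            if output:
                line = output.decode('utf-8', errors='replace').strip()
                print(line)
                f.write("%s\n" % line)
    return process.poll()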