Hi,
As you might have noticed, there are a couple of 'new' directories in
the test-definitions.git [1] repository. This is an attempt to refresh
the approach to test execution. There are two main reasons behind it:
- decouple from LAVA helper scripts (as much as possible)
- allow local execution of the scripts outside LAVA
All 'new' tests are now placed in the 'automated/' and 'manual/'
paths. The old layout should now be considered obsolete. This means
that the following directories are no longer updated and will be
deleted:
- android
- common
- fedora
- openembedded
- ubuntu
Please check whether the test you're using is already included in the
'automated/' directory. There are two subdirectories, android and
linux; these contain the tests (each test in a separate directory). If
the test you're currently using isn't there, please reply to this
thread with the details. The plan is to migrate all tests that are in
use. Some of the tests in the above-mentioned directories have been
abandoned for a long time; such tests won't be migrated.
I'm planning to delete the deprecated directories from the repository
by the end of June 2017.
[1] https://git.linaro.org/qa/test-definitions.git
Best Regards,
milosz
Hello,
The LITE team appreciates the bootstrapping of Zephyr-related LAVA
testing done by the LAVA, LAVA Lab, B&B and QA teams. Getting more
involved with LAVA testing had been quite a backlogged task for us,
and hopefully, the time has come ;-).
I've reviewed the current status of on-device testing for Zephyr CI
jobs and see the following picture (feel free to correct me if
something is wrong or missing): "zephyr-upstream" and
"zephyr-upstream-arm" (https://ci.linaro.org/view/lite-iot-ci/) CI jobs
submit a number of tests to LAVA (via https://qa-reports.linaro.org/)
for the following boards: arduino_101, frdm_k64f, frdm_kw41z,
qemu_cortex_m3. Here's an example of cumulative test report for these
platforms: https://qa-reports.linaro.org/lite/zephyr-upstream/tests/
That's really great! (Though the list of tests to run in LAVA seems to
be hardcoded:
https://git.linaro.org/ci/job/configs.git/tree/zephyr-upstream/submit_for_t…)
But we'd like to test things beyond the Zephyr testsuite, for example
application frameworks (JerryScript, Zephyr.js, MicroPython) and
the mcuboot bootloader. For starters, we'd like to perform just a boot
test to make sure that each application can boot and start up, and
later hopefully extend that to functional testing.
The most basic testing would be to just check that after boot there's
the expected prompt from each of the apps, i.e. to test in a "passive"
manner, similar to the Zephyr unittests discussed above. I tried this
with Zephyr.js and was able to make it work (with manual submission so
far): https://validation.linaro.org/scheduler/job/1534097 . A
peculiarity in this case is that the default test app of Zephyr.js
outputs just a single line, "Hello, ZJS world!", whereas LAVA's
test/monitors job config specifies a testsuite begin pattern, an end
pattern, and testcase patterns, and I had a suspicion that each of
them needs to be on a separate line. But I was able to make it pass
with the following config:
- test:
    monitors:
    - name: foo
      start: ""
      end: Hello, ZJS world!
      pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.
So, the "start" substring is empty, and presumably matches a line
output by a USB multiplexer or the board's bootloader. The "end"
substring is actually the expected single-line output. And "pattern"
is unused (I don't know if it can be dropped without causing a test
definition syntax error). Is there a better way to handle single-line
test output?
Well, beyond simple output matching, it would be nice even for the
initial "smoke testing" to actually send some input to the application
and check the expected output (e.g., input: "2+2", expected output:
"4"). Is this already supported for LAVA "v2" pipeline tests? I
imagine the same kind of functionality would be required to test
bootloaders like U-Boot for Linux boards.
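The send-input/check-output idea itself can be sketched without any
LAVA machinery. Here is a minimal stand-in example, using a local
interactive Python interpreter in place of a board's serial console
(everything here is illustrative, not LAVA functionality):

```python
import subprocess

# Stand-in for a serial console to the board: an interactive REPL.
# In a real on-device test the transport would be the serial port.
proc = subprocess.run(
    ["python3", "-i", "-q"],   # -q suppresses the startup banner
    input="2 + 2\n",           # the "input" side of the smoke test
    capture_output=True,
    text=True,
    timeout=30,
)

# The "expected output" side: the REPL should have printed the result.
assert "4" in proc.stdout
print("smoke test passed")
```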
Thanks,
Paul
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro - http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
Hi Team,
The splice test case is newly running from kselftest (after updating
the Makefile). splice stalls on LAVA but passes on a local HiKey
running linux-next and linux-rc-4.9.
It was also tested with set -e from run_kselftest.sh and still passes.
The question is: why does it stall on LAVA? I re-submitted the job a
couple of times and was able to reproduce the issue.
The problem comes from this script:
#!/bin/sh
n=`./default_file_splice_read </dev/null | wc -c`
test "$n" = 0 && exit 0
echo "default_file_splice_read broken: leaked $n"
exit 1
It seems LAVA is not happy with "</dev/null" in any script, or with
exit 0, when run via LXC.
I request that this problem be investigated.
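For what it's worth, the "</dev/null" redirection itself is ordinary
POSIX shell and behaves the same outside LAVA. A minimal local sanity
check, mirroring the structure of the splice script (cat stands in for
the default_file_splice_read binary here):

```shell
#!/bin/sh
# Reading from /dev/null should produce zero bytes.
n=`cat </dev/null | wc -c`
test "$n" = 0 && echo "redirect ok" && exit 0
echo "redirect broken: got $n bytes"
exit 1
```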
Running tests in splice
========================================
selftests: default_file_splice_read [PASS]
selftests: default_file_splice_read.sh [PASS]
Test case source:
https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git/t…
LAVA job id:
https://lkft.validation.linaro.org/scheduler/job/8875#L2574
https://lkft.validation.linaro.org/scheduler/job/8806#L3077
Best regards
Naresh Kamboju