Dear LAVA Community,
I want to use Image Charts 2.0 for viewing my LAVA job results.
I am using "lava-test-case" to check pass/fail, and I am getting results.
steps:
- lava-test-case linux-linaro-ubuntu-pwd --shell pwd
- lava-test-case linux-linaro-ubuntu-uname --shell uname -a
- lava-test-case linux-linaro-ubuntu-vmstat --shell vmstat
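(For context, these steps sit inside a standard test definition; a minimal
sketch, where the metadata values are placeholders:)

    metadata:
        format: Lava-Test Test Definition 1.0
        name: linux-linaro-ubuntu-smoke
        description: "basic smoke tests"
    run:
        steps:
            - lava-test-case linux-linaro-ubuntu-pwd --shell pwd
            - lava-test-case linux-linaro-ubuntu-uname --shell uname -a
            - lava-test-case linux-linaro-ubuntu-vmstat --shell vmstat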
I want to know how to get these results into Image Charts. I can see it asks
me to add a chart and a filter, but no data is available when I try to add a
filter.
Similarly, if I have to use the query API, what kind of query should be used
to fetch test data from a LAVA test suite?
Detailed information on using Image Charts, or a link to it, would be
appreciated, as I am new to Charts/LAVA.
Thanks,
Varun
Hi everyone,
I'm trying to add my own device-type to my lab, but I'm facing some
difficulties when running jobs. I have the following error log:
https://pastebin.com/Eznq6RLA
I understand the log clearly, but I'm not able to figure out what to do: I
thought it would be enough to describe the boot/power actions in the
device-type. But it seems not... Do I need to create a .conf file in the
/usr/lib/python2.7/dist-packages/lava_dispatcher/default-config/lava-dispatcher/device-types
folder?
By the way, I'm not sure I understand the purpose of the .conf files. Are
they only there as defaults?
I attached my device-type and my job; maybe they will help!
Thanks a lot ;)
P.S.: I have already run some jobs on QEMU and the BBB, and read the whole
documentation.
Hi,
I'm not entirely sure this job definition is correct, but the only error I'm
getting is "Problem with submitted job data: Invalid YAML - check for
consistent use of whitespace indents.". The YAML validates just fine locally,
so I'm unsure what is wrong. Any hints? The YAML in question is enclosed.
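(By "validates", I mean it parses cleanly with PyYAML; a minimal sketch of
that check, with the filename as a placeholder:)

    python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("parsed OK")' job.yaml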
milosz
Hello,
We use a Python script, LAVA_via_API, to trigger our test jobs.
I must say that I don't know whether this script is a purely internal
creation or whether it was inspired by a Linaro script.
Its role is simple: create a LAVA job from quite a few parameters (job name,
server, worker, kernel, rootfs, dtb, device, device_type, and so on), submit
the job, then fetch the results and logs.
Anyway, before completely reworking this script, I assume that a reference
one exists in one of the Linaro git repositories. Can you tell me where to
find it?
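(For reference, the core of our script is just XML-RPC calls along these
lines; a minimal sketch, where the user, token and server URL are
placeholders:)

    import xmlrpc.client

    # placeholders: substitute a real username, token and server
    user = "denis"
    token = "my-auth-token"
    server = xmlrpc.client.ServerProxy(
        "https://%s:%s@lava.example.com/RPC2" % (user, token))

    # submit the generated job definition and check its status
    job_id = server.scheduler.submit_job(open("job.yaml").read())
    print("submitted as", job_id)
    print(server.scheduler.job_status(job_id))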
Thanks,
Denis
ALWAYS keep the list in CC please.
On 7 July 2017 at 10:28, Chetan Sharma <chetan.sharma(a)timesys.com> wrote:
> Hi Neil,
> Thanks for sharing the detailed information on working with LXC.
> 1. I am following the sample pipeline job for the BBB:
> https://git.linaro.org/lava/lava-dispatcher.git/tree/lava_dispatcher/pipeli…
>
> I have modified this job with the following values, but I am getting an
> error: ['Invalid job - missing protocol'].
Check against this valid job:
https://staging.validation.linaro.org/scheduler/job/178130
> I have defined the protocol as "lava-lxc", which is a valid protocol, but
> the job object does not have any protocol details; I have verified this by
> printing self.job.protocols, which is [].
Then that is an invalid job; your modifications have introduced an error
somewhere. There are a lot of changes in your file compared to the example.
Change only one thing at a time.
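For reference, that job declares the protocol at the top level of the
definition, roughly like this (a sketch with placeholder values; check the
linked definition for the exact ones):

    protocols:
      lava-lxc:
        name: lxc-probe
        template: debian
        distribution: debian
        release: jessie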
> 2. How do test actions execute on the LXC and on the device?
Run a test action in the same namespace as the LXC. In the case of the
example, namespace: probe.
https://staging.validation.linaro.org/scheduler/job/178130/definition#defli…
> Can we execute test actions in this sequence:
> first the LXC test action executes ---> Device1 test action execution
> starts -> Device2 test action execution starts?
>
>
> ==================================================
> device_type: beaglebone-black
>
> job_name: LXC and bbb
Please attach files to emails to the list. There's no need to quote
the entire file to the list.
Take time to understand namespaces. The LXC is transparent and the
namespaces are used to tie the different actions together into one
sequence in the LXC and one sequence for the test device.
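As a rough sketch of that shape (the names, URLs and artifact paths here are
placeholders, not the values from the example):

    actions:
    - deploy:
        namespace: probe          # this sequence runs in the LXC
        to: lxc
        os: debian
    - boot:
        namespace: probe
        method: lxc
        prompts: ['root@(.*):/#']
    - test:
        namespace: probe          # test shell inside the LXC
        definitions:
        - repository: https://git.example.com/tests.git
          from: git
          path: lxc-smoke.yaml
          name: lxc-smoke
    - deploy:
        namespace: device         # this sequence runs against the board
        to: tftp
        kernel:
          url: http://example.com/zImage
        ramdisk:
          url: http://example.com/ramdisk.cpio.gz
        dtb:
          url: http://example.com/am335x-boneblack.dtb
    - boot:
        namespace: device
        method: u-boot
        commands: ramdisk
        prompts: ['root@(.*):/#']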
LXC protocol support is not the same as MultiNode - operations happen
in series. The LXC protocol is not a substitute for MultiNode either.
If you need parallel execution, you have to use MultiNode.
Split up your test shell definitions if necessary.
Also, attach (not include) the full test job log because that contains
details of the package versions being used and other information.
> On Fri, Jul 7, 2017 at 1:32 PM, Neil Williams <neil.williams(a)linaro.org>
> wrote:
>>
>> On 7 July 2017 at 07:23, Chetan Sharma <chetan.sharma(a)timesys.com> wrote:
>> > Hi Everyone,
>> > I hope everyone is doing well in this group.
>> > The main intent of this email is to seek assistance regarding one
>> > feature of LAVA, lava-lxc, which helps to create an LXC instance on a
>> > worker
>>
>>
>> https://staging.validation.linaro.org/scheduler/job/174215/definition#defli…
>>
>> > and then we can execute any script on the worker and propagate its
>> > characteristics and results into other actions of the job on the board.
>>
>> That isn't entirely clear. What are you trying to do in the LXC?
>>
>> You need to deploy an LXC, start it with a boot action for the LXC and
>> then start a test shell in the LXC where you can install the tools you
>> need to execute.
>>
>> Talking to the device from the LXC can be difficult depending on how
>> the device is configured. To use networking, you would have to have a
>> fixed IP for the device. To use USB, you need to use the device_info
>> support in the device dictionary to add the USB device to the LXC.
>>
>> > I have gone through the documentation shared on the LAVA V2 instance
>> > for LXC job creation, but I am not able to successfully execute a job
>> > on the Debian Jessie release.
>> >
>> > https://validation.linaro.org/static/docs/v2/deploy-lxc.html#index-0
>> >
>> > Can you assist me and share a reference process/document so I can
>> > proceed further with creating a job using this feature?
>>
>> More information is needed on exactly what you are trying to do, how
>> the LXC is to connect to the device and what support the device offers
>> to allow for those operations.
>>
>> --
>>
>> Neil Williams
>> =============
>> neil.williams(a)linaro.org
>> http://www.linux.codehelp.co.uk/
>
>
>
>
> --
> Thanks & Regards
> Chetan Sharma
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hi Everyone,
I hope everyone is doing well in this group.
The main intent of this email is to seek assistance regarding one feature of
LAVA, lava-lxc, which helps to create an LXC instance on a worker; we can
then execute any script on the worker and propagate its characteristics and
results into other actions of the job on the board.
I have gone through the documentation shared on the LAVA V2 instance for LXC
job creation, but I am not able to successfully execute a job on the Debian
Jessie release.
https://validation.linaro.org/static/docs/v2/deploy-lxc.html#index-0
Can you assist me and share a reference process/document so I can proceed
further with creating a job using this feature?
Looking forward to a positive response.
--
Thanks & Regards
Chetan Sharma
Hi,
I have been investigating LAVA for use in our organisation and I'm stuck
trying to get a hello-world test case running on our hardware, so I'm
looking for some help. We looked at the Yocto test tools, but they can only
use devices with a fixed IP, which we can't guarantee (or want) during our
testing, as we also test network settings; they are also limited in
configuration. The LAVA package seems to meet all our requirements, but I'm
still unsure how to do a few things.
We use Yocto images and boot with the GRUB bootloader.
All our devices are connected via Ethernet only, and power and peripheral
switching is controlled via USB relays.
After reading through all the documentation I'm still unsure how to set up
and actually run a test on our hardware. What tools do I need to install in
the test image, and how do I get LAVA to communicate with GRUB? I assume a
base image is one that includes nothing but the tools and GRUB. We have a
recovery partition with Tiny Core which could facilitate that, but it's not
required for the automated testing.
I've used the akbennet/lava-server Docker image and it is up and running,
although test jobs are scheduled but never run on the QEMU devices, so I'm a
little stuck there.
Basically, I need help getting LAVA to connect to one of our devices to load
the image and run tests. Choosing the image, writing tests, and (for the
most part) configuring the pipeline I understand.
After two weeks I'm posting here hoping someone can assist me.
Regards,
Elric
Elric Hindy
Test Engineer
T +61 2 6103 4700
M +61 413 224 841
E elric.hindy(a)seeingmachines.com
W www.seeingmachines.com
Hi All,
I am trying to set up a remote lab using a Raspberry Pi on my local network.
I installed lava-server and a worker on my laptop, and they are working fine.
I installed Raspbian on the R-Pi and followed the instructions given on the
LAVA site, but when the slave tries to connect to the master it gets no
response. I am able to ping the master from my R-Pi board, and the default
port 3079 is open on my machine.
I used no encryption, with the following URLs to connect to the master:
MASTER_URL="tcp://10.42.0.24:3079"
LOGGER_URL="tcp://10.42.0.24:3079"
I continuously get log messages like:
DEBUG Sending new HELLO_RETRY message to the master (are they both running
the same version?)
INFO Waiting for the master to reply
DEBUG Sending new HELLO_RETRY message to the master (are they both running
the same version?)
INFO Waiting for the master to reply
DEBUG Sending new HELLO_RETRY message to the master (are they both running
the same version?)
If anyone has any idea why I am not able to connect, please help.
Thanks,
Ankit
On Mon, 3 Jul 2017 23:50:25 +0300
Paul Sokolovsky <paul.sokolovsky(a)linaro.org> wrote:
> Hello Milosz,
>
> I appreciate getting at least some response ;-). Some questions
> however could use a reply from LAVA team, I guess.
>
> On Mon, 3 Jul 2017 13:34:49 +0100
> Milosz Wasilewski <milosz.wasilewski(a)linaro.org> wrote:
>
> []
>
> > > jobs submit a number of tests to LAVA (via
> > > https://qa-reports.linaro.org/) for the following boards:
> > > arduino_101, frdm_k64f, frdm_kw41z, qemu_cortex_m3. Here's an
> > > example of cumulative test report for these platforms:
> > > https://qa-reports.linaro.org/lite/zephyr-upstream/tests/
> > >
> > > That's really great! (Though the list of tests to run in LAVA
> > > seems to be hardcoded:
> > > https://git.linaro.org/ci/job/configs.git/tree/zephyr-upstream/submit_for_t…)
> > >
> >
> > It is, as I wasn't really sure what to test. The build job needs to
> > prepare the test templates to be submitted to LAVA. In the case of
> > zephyr each test is a separate binary, so we end up with a number of
> > file paths to substitute in the template. Hardcoding was the easiest
> > way to get things running, but I see no reason why it couldn't be
> > replaced with some smarter code to discover the binaries. The problem
> > with this approach is that some of these tests are build-time only.
> > They have no meaning when running on the board and need to be filtered
> > out somehow.
Running the build tests within the Jenkins build makes a lot of sense.
Typically, the build tests will have a different command syntax from the
runtime tests (otherwise Jenkins would attempt to run both), so
filtering should be possible. If the build tests are just a different
set of binary blobs from the runtime tests, that may need a fix
upstream in Zephyr to distinguish between the two modes.
> I see, that makes some sense. But thinking further, I'm not entirely
> sure about "build only" tests. Zephyr's sanitycheck test has such
> concept, but I'd imagine it comes from the following reasons: a)
> sanitycheck runs tests on QEMU, which has very bare hardware support,
> so many tests are not runnable; b) sanitycheck can operate on
> "samples", not just "tests", as sample can be interactive, etc. it
> makes sense to only build them, not run.
>
> So, I'm not exactly sure about build-only tests on real HW boards. The
> "default" idea would be that they should run, but I imagine in
> reality, some may need to be filtered out. But then blacklisting
> would be a better approach than whitelisting. And I'm not sure if
> Zephyr has a concept of "skipped" tests, which may be useful to handle
> hardware variations. (Well, I actually don't know if LAVA supports
> skipped tests!)
Yes, LAVA has support for pass, fail, skip, unknown.
For POSIX shell tests, the test writer just calls:
    lava-test-case name --result skip
For monitor tests, like Zephyr, it's down to the pattern but skip is as
valid as pass and fail (as is unknown) for the result of the matches
within the pattern.
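For example, in a test shell definition (a sketch; the interface check is
hypothetical):

    # skip the test when the board has no wlan0, otherwise run it
    if ip link show wlan0 >/dev/null 2>&1; then
        lava-test-case wifi-link --shell ip link set wlan0 up
    else
        lava-test-case wifi-link --result skip
    fi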
> Anyway, these are rough ideas for the future. I've spent a couple of
> weeks munging the LITE CI setup; there are definitely some
> improvements, but also a Pandora's box of other ideas and improvements
> to make. I'm wrapping up for now, but hope to look again in some time
> (definitely hope to look before the Connect, so we can discuss further
> steps there). In the meantime, I hope that more boards will be
> installed in the Lab and stability of them improves (so far they seem
> to be pretty flaky).
There are known limitations with the USB subsystem and associated
hardware across all architectures, affecting test devices and the
workers which run the automation. LAVA has to drive that subsystem very
hard for both fastboot devices and IoT devices. There are also problems
due to the design of methods like fastboot and some of the IoT support
which result from a single-developer model, leading to buggy
performance when used at scale and added complexity in deploying
workarounds to isolate such protocols in order to prevent interference
between tests. The protocols themselves often lack robust error
handling or retry support.
Other deployment methods which rely on TFTP/network deployments are
massively more reliable at scale, so comparing reliability across
different device types is problematic.
> []
>
> > > - test:
> > > monitors:
> > > - name: foo
> > > start: ""
> > > end: Hello, ZJS world!
> > > pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.
> > >
> > > So, the "start" substring is empty, and perhaps matches a line
> > > output by a USB multiplexer or board bootloader. "End" substring
> > > is actually the expected single-line output. And "pattern" is
> > > unused (I don't know if it can be dropped without a definition
> > > file syntax error). Is there a better way to handle single-line test
> > > output?
> >
> > You're making a silent assumption that if there is a matching line,
> > the test has passed. In the case of other tests (Zephyr unit tests),
> > that's not the case. The 'start' matches a line which is displayed
> > when Zephyr is booting; 'end' matches the line which is displayed
> > after all testing is done. The pattern follows the unit test pattern.
>
> Thanks, but I'm not sure I understand this response. I don't dispute
> that Zephyr unit tests need this support, or the way they're handled.
> LITE, however, needs to test more things than "batch" Zephyr unit
> tests. I am presenting another use case which, albeit simple, is barely
> supported by LAVA. (That's a question for the LAVA team, definitely.)
LAVA result handling is ultimately a pattern matching system. Patterns
must have a unique and reliable start string and a unique and reliable
end string. An empty start string is just going to cause misleading
results and bad pattern matches as the reality is that most boards emit
some level of random junk immediately upon connection which needs to be
ignored. So there needs to be a reliable, unique, start string emitted
by the test device. It is not enough to *assume* a start at line zero,
doing so increases the problems with reliability.
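Applied to the earlier example, that means something like this (a sketch;
the start string is a placeholder for whatever banner the board reliably
prints at boot):

    - test:
        monitors:
        - name: foo
          start: "Booting Zephyr OS"    # placeholder: a unique boot banner
          end: Hello, ZJS world!
          pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.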
>
> > > Well, beyond simple output matching, it would be nice even for
> > > the initial "smoke testing" to actually feed some input into the
> > > application and check the expected output (e.g., input: "2+2",
> > > expected output: "4"). Is this already supported for LAVA "v2"
> > > pipeline tests? I may imagine that would be the same kind of
> > > functionality required to test bootloaders like U-boot for Linux
> > > boards.
> >
> > I haven't used anything like this in v2 so far, but you're probably
> > best off doing something like
> >
> > test 2+2=4 PASS.
> >
> > Then you can easily create a pattern that will filter the output. In
> > the case of zephyr, the pattern is the only way to filter things out,
> > as there is no shell (?) on the board.
>
> So, the problem, for starters, is how to make LAVA *feed* the
> input, as specified in the test definition (like "2+2"), into a board.
That will need code changes, so please make a formal request for this
support at CTT
https://projects.linaro.org/servicedesk/customer/portal/1 so that we
can track exactly what is required.
>
> As there was no reply from the LAVA team (I imagine they're busy with
> other things), I decided to create a user story in Jira for them; as I
> couldn't create a LAVA-* ticket, I created it as
> https://projects.linaro.org/browse/LITE-175 . Hopefully that won't go
> unnoticed and the LAVA team will get to it eventually.
That JIRA story is in the LITE project. Nobody in the LAVA team can
manage those stories. It needs a CTT issue which can then be linked to
the LITE story and from which a LAVA story can also be linked.
Sadly, any story in the LITE project would go completely unnoticed by
the LAVA software team until it is linked to CTT so that the work can
be prioritised and the relevant LAVA story created. That's just how
JIRA works.
>
> >
> > milosz
>
> Thanks!
>
> --
> Best Regards,
> Paul
>
> Linaro.org | Open source software for ARM SoCs
> Follow Linaro: http://www.facebook.com/pages/Linaro
> http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
> _______________________________________________
> linaro-validation mailing list
> linaro-validation(a)lists.linaro.org
> https://lists.linaro.org/mailman/listinfo/linaro-validation
--
Neil Williams
=============
http://www.linux.codehelp.co.uk/