Hello Lava-Users,
I have a UEFI-based x86 target board which I want to connect to LAVA and
run tests on.
When I go through
https://validation.linaro.org/static/docs/v2/integrate-uefi.html it
describes several different mechanisms. I am confused here, as I am not
completely clear on the differences between the systems mentioned.
Is knowing the UEFI implementation of the target enough to select the
method that can be used for booting?
What else do I need to know before deciding on the method to be used for
booting an x86-based target board?
Thanks,
Hemanth.
Hi Dan,
>Here's an example of generating lava templates using jinja2. It's probably more complicated than you need.
>
>See
>https://git.linaro.org/ci/job/configs.git/tree/openembedded-lkft/lava-job-d…
>for a directory of templates, and see
>https://git.linaro.org/ci/job/configs.git/tree/openembedded-lkft/submit_for…
>for the python code that does the generation. Lastly, you can run https://git.linaro.org/ci/job/configs.git/tree/openembedded-lkft/test_submi…
>and it will generate all of the YAML files into "./tmp".
>
>Like I said, this is likely more complicated than you are looking for. I suggest starting simpler using https://pypi.python.org/pypi/jinja2-cli/0.6.0.
Thank you, that really helps a lot. We want to trigger the LAVA jobs from Jenkins as well, and we need lots of metadata because the tests will be part of our release notes in the end. So no, this is definitely not more complicated than we need. In fact, this is a very good starting point for us.
You should think about adding a chapter to the LAVA documentation about how to generate YAML job submissions via templates. I assume we are not the only ones facing this problem. And I had basically no experience with jinja2 at all before evaluating LAVA, so I would not have come up with the idea of using this mechanism for the job submissions myself.
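For reference, a minimal sketch of what such a template-based generation step can look like, assuming a hypothetical template file and variable names (not the actual lkft templates):

# Minimal sketch: render a LAVA job definition from a jinja2 template.
# The template path, variables and output file below are illustrative only.
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"),
                  trim_blocks=True, lstrip_blocks=True)

# templates/generic-job.yaml.jinja2 might contain lines such as:
#   job_name: {{ build_name }}
#   device_type: {{ device_type }}
#   metadata:
#     git.commit: {{ git_commit }}
template = env.get_template("generic-job.yaml.jinja2")

job_yaml = template.render(
    build_name="nightly-build-42",
    device_type="qemu",
    git_commit="deadbeef",
)

with open("nightly-build-42.yaml", "w") as handle:
    handle.write(job_yaml)

The rendered YAML can then be submitted from Jenkins with different parameter sets per build, or generated on the command line with jinja2-cli as Dan suggests.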
Mit freundlichen Grüßen / Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks(a)garz-fricke.com
www.garz-fricke.com
SOLUTIONS THAT COMPLETE!
Registered office: D-21079 Hamburg
Commercial register: Amtsgericht Hamburg, HRB 60514
Managing directors: Matthias Fricke, Manfred Garz
The LAVA software team created a document to summarise the experience, so
far, of automating devices for validation and that document formed the
basis of a presentation at Linaro Connect in Hong Kong earlier this year by
Steve McIntyre.
The content has been living inside a wiki within Linaro but there have been
delays in making the URL visible to anonymous viewers. I have therefore
migrated the content to my blog:
https://linux.codehelp.co.uk/automation-and-risk.html
A link to the presentation is included.
Available under CC BY-SA 3.0 and shortly to appear on planet.debian.org
Please share with anyone involved in designing hardware which is likely to
end up on your desk for automation support and anyone else who may be
interested in the hardware challenges of device automation.
A second shorter post on Firmware Best Practices will follow - some of the
elements of that document were also covered in the presentation at Connect.
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
From: Neil Williams [mailto:neil.williams@linaro.org]
Sent: 26 June 2018 16:25
To: Chenchun (coston) <chenchun7(a)huawei.com>
Cc: Lava Users Mailman list <lava-users(a)lists.linaro.org>; Lixuan (F) <joe.lixuan(a)huawei.com>; Fengguilin (Alan) <fengguilin(a)huawei.com>; yangjianxiang <yangjianxiang(a)huawei.com>; Yewenzhong (jackyewenzhong) <jack.yewenzhong(a)huawei.com>; Liyuan (Larry, Turing Solution) <Larry.T(a)huawei.com>; zengmeifang <zengmeifang(a)huawei.com>; pangbo (A) <pangbo6(a)huawei.com>; liuchunfeng (A) <liuchunfeng2(a)huawei.com>; Zhangyi ac <zhangyi.ac(a)huawei.com>
Subject: Re: [Lava-users] Some problem I met in Setting up CI environment using Lava2
On Tue, 26 Jun 2018 at 09:08, Chenchun (coston) <chenchun7(a)huawei.com> wrote:
Hi all
Problem 1:
I find that LAVA V2 re-downloads the test case repository when two or more test suites need to run in one LAVA job. Personally, I think this is a waste of time. How can I make LAVA download the test
case repository only once per job (PS: all of our test suites are in the same repository)? I want to know whether LAVA can do this or not; if it can, please tell me how. Thanks.
Not currently supported. We already support shallow clones by default and you can use the history option to remove the .git directory if you are short of space.
https://staging.validation.linaro.org/static/docs/v2/test-repositories.html…
We have looked at various options for this support - none were reliable across all devices & test jobs at scale.
This is not due to a shortage of space, but because so much time is spent re-downloading.
Problem 2:
As we all know, in the standard procedure LAVA must install an OS before executing the test task. But in our practical application we find there is no need to follow this
process mechanically. In some cases we only need to install the OS once and then test repeatedly; we may even install the OS manually and just use LAVA to run the tests. We would like LAVA V2 to decouple OS installation from
test execution, as M-Lab does. The problem we met is that we cannot tell whether the OS installed on the DUT (device under test) is our SUT (system under test), because we can read very little
information from the DUT. I want to know how I can decouple installing the system from executing the tests, so that a user can choose to only install the OS, only execute the tests, install the OS and execute the
tests, or install the OS once and run the tests repeatedly.
That is entirely down to your own test shell definitions - follow best practice for portability and then run separate test actions.
Do not use simplistic testing with the problems of persistence - each test job needs to start with a clean setup, not whatever was left behind by a previous test job.
Please explain in more detail what you are actually trying to achieve.
The test writer can already choose to install (i.e. deploy) and then test - the test job specifies deploy, boot and test actions.
*If* the test writer knows that it is then safe to run other tests, those can be added into another job by extending the list of test definitions used by the action.
Not all device-types support rebooting the device between test actions in the same test job. This is a limitation of the hardware.
We want LAVA to be smarter: do not re-deploy the system when two or more LAVA jobs use the same type of system (CentOS, Ubuntu, Debian); just deploy the system once. We
have eliminated destructive test cases.
Problem 3:
In your experience, how many DUTs can optimally be attached to one LAVA worker, and what is the constraining bottleneck?
That depends on a wide range of factors.
* What kind of test jobs are being run on the worker?
** TFTP is lighter load, fastboot is high load (1 CPU core per fastboot device + at least 1 core for the underlying OS)
* What is the I/O performance of the worker?
** Many images need manipulation and that can produce high I/O load spikes
* Infrastructure limits play a role as well - network switch load and LAN topology.
There is no definitive answer to your question.
Start small, scale slowly and test with known good functional tests at every change.
We will try to follow that rule: start small, scale slowly and test with known good functional tests at every change.
Best
coston
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
The mailing list does not accept a long list of people on CC - everyone who
is expecting a reply from the list needs to be subscribed to the list.
(This is part of the spam management for the list.)
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hi All,
I am having a bit of a problem running the lava_scheduler_app unit tests as per the instructions at https://validation.linaro.org/static/docs/v2/dispatcher-testing.html. I keep getting errors such as the following:
$ sudo ./ci-run lava_scheduler_app.tests.test_device.TestTemplates.test_x86_template
+ set -e
+ getopts :pdty opt
+ shift 0
+ pep8 --ignore E501,E722 .
+ '[' -n '' ']'
+ echo 'Removing old .pyc files and cache'
Removing old .pyc files and cache
+ echo
+ find . -name '*.pyc' -delete
+ rm -rf ./.cache/
+ rm -rf ./__init__.py
+ echo 'Starting unit tests'
Starting unit tests
+ echo
+ '[' -z '' -a -z '' -a -z '' ']'
+ echo 'If it exists, a broken test database will be deleted without prompting.'
If it exists, a broken test database will be deleted without prompting.
+ python3 ./lava_server/manage.py test --noinput -v 2 lava_scheduler_app linaro_django_xmlrpc.tests lava_results_app
Traceback (most recent call last):
File "./lava_server/manage.py", line 78, in <module>
main()
File "./lava_server/manage.py", line 74, in main
execute_from_command_line(django_options)
File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
utility.execute()
File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 356, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python3/dist-packages/django/core/management/commands/test.py", line 29, in run_from_argv
super(Command, self).run_from_argv(argv)
File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/usr/lib/python3/dist-packages/django/core/management/commands/test.py", line 62, in handle
failures = test_runner.run_tests(test_labels)
File "/usr/lib/python3/dist-packages/django/test/runner.py", line 600, in run_tests
suite = self.build_suite(test_labels, extra_tests)
File "/usr/lib/python3/dist-packages/django/test/runner.py", line 526, in build_suite
suite = reorder_suite(suite, self.reorder_by, self.reverse)
File "/usr/lib/python3/dist-packages/django/test/runner.py", line 640, in reorder_suite
partition_suite_by_type(suite, classes, bins, reverse=reverse)
File "/usr/lib/python3/dist-packages/django/test/runner.py", line 663, in partition_suite_by_type
partition_suite_by_type(test, classes, bins, reverse=reverse)
File "/usr/lib/python3/dist-packages/django/test/runner.py", line 663, in partition_suite_by_type
partition_suite_by_type(test, classes, bins, reverse=reverse)
File "/usr/lib/python3/dist-packages/django/test/runner.py", line 667, in partition_suite_by_type
bins[i].add(test)
File "/usr/lib/python3/dist-packages/django/utils/datastructures.py", line 17, in add
self.dict[item] = None
TypeError: unhashable type: 'TestSchedulerAPI'
I have backed out all my changes and I still get the TypeErrors. I tried the latest in the master branch, and also the 2018.5 release tag. Could someone please let me know what I am doing incorrectly? Thanks!
Cheers,
Edmund
This message and any attachments may contain confidential information from Cypress or its subsidiaries. If it has been received in error, please advise the sender and immediately delete this message.
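As an aside on the TypeError above (not a confirmed diagnosis): the traceback shows Django's test runner storing test instances in a set-like structure (self.dict[item] = None), and in Python 3 a class that defines __eq__ without also defining __hash__ becomes unhashable, which produces exactly this kind of "unhashable type" error. A minimal, LAVA-independent sketch of that pitfall, with hypothetical class names:

# In Python 3, defining __eq__ without __hash__ sets __hash__ to None,
# so instances can no longer be used as dict keys or set members.
class Broken:
    def __eq__(self, other):
        return isinstance(other, Broken)
    # no __hash__ defined -> instances are unhashable

class Fixed:
    def __eq__(self, other):
        return isinstance(other, Fixed)
    def __hash__(self):
        # restore hashability explicitly
        return hash(type(self))

try:
    {Broken(): None}          # the same operation Django's OrderedSet performs
except TypeError as exc:
    print("Broken:", exc)     # TypeError: unhashable type: 'Broken'

{Fixed(): None}               # works
print("Fixed: hashable")

Whether this is what is happening with TestSchedulerAPI would need checking against the test class (and any mixins) in the tree being tested.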
Hello Folks,
Long time no see. It seems that I am back (for a limited time) to LAVA testing,
and after all the setup and catch-22s, I managed to get to the bottom of
it within a few days.
I have an interesting problem to report.
vagrant@stretch:/etc/lava-server/dispatcher-config/device-types$ dpkg -l lava-server lava-dispatcher
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name             Version          Architecture  Description
+++-================-================-=============-====================================================
ii  lava-dispatcher  2018.5-3~bpo9+1  amd64         Linaro Automated Validation Architecture dispatcher
ii  lava-server      2018.5-3~bpo9+1  all           Linaro Automated Validation Architecture server
## Issue Background
Issue CIP testing #16 seems to be very similar: Beaglebone Black
health-check job is failing at restart
## Issue description
Wrong Ramdisk Image Format
Ramdisk image is corrupt or invalid
## Acceptance criteria
The U-Boot command
tftp 0x88080000 22/tftp-deploy-on2jld77/ramdisk/ramdisk.cpio.gz.uboot
(using the cpsw network device)
The initramdisk is built by the following instructions:
https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipsystembuild…
I used both BusyBox 28.0 and the latest stable BusyBox 28.4 (the failure seems to
be the same)!
It should download seamlessly, but it does not. It reports that the image is
corrupt. The full log is at:
local test of ramdisk test on bbb - Lava job 22
https://pastebin.com/y9n4GM5G
The .yaml file is at:
[lava 2018.5-3] job_name: local test of ramdisk test on bbb
https://pastebin.com/kqS2dqWM
_______
Namely, the download order is somehow scrambled!
Thank you,
Zoran Stojsavljevic
Hello
Since our upgrade to 2018.4 we have experienced lots of lava-logs crashes, with the
following trace in lava-logs.log:
2018-06-20 13:43:08,964 INFO Saving 1 test cases
2018-06-20 13:43:16,614 DEBUG PING => master
2018-06-20 13:43:16,618 DEBUG master => PONG(20)
2018-06-20 13:43:19,524 INFO Saving 21 test cases
2018-06-20 13:43:29,535 INFO Saving 62 test cases
2018-06-20 13:43:37,983 DEBUG PING => master
2018-06-20 13:43:37,985 DEBUG master => PONG(20)
2018-06-20 13:43:39,541 INFO Saving 3 test cases
2018-06-20 13:43:58,009 DEBUG PING => master
2018-06-20 13:43:58,010 DEBUG master => PONG(20)
2018-06-20 13:44:01,770 INFO Saving 9 test cases
2018-06-20 13:44:01,771 ERROR [EXIT] Unknown exception raised, leaving!
2018-06-20 13:44:01,771 ERROR 'bool' object has no attribute 'pk'
Traceback (most recent call last):
File
"/usr/lib/python3/dist-packages/lava_server/management/commands/lava-logs.py",
line 181, in handle
self.main_loop()
File
"/usr/lib/python3/dist-packages/lava_server/management/commands/lava-logs.py",
line 232, in main_loop
self.flush_test_cases()
File
"/usr/lib/python3/dist-packages/lava_server/management/commands/lava-logs.py",
line 217, in flush_test_cases
TestCase.objects.bulk_create(self.test_cases)
File "/usr/lib/python3/dist-packages/django/db/models/manager.py", line
85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/lib/python3/dist-packages/django/db/models/query.py", line
441, in bulk_create
self._populate_pk_values(objs)
File "/usr/lib/python3/dist-packages/django/db/models/query.py", line
404, in _populate_pk_values
if obj.pk is None:
AttributeError: 'bool' object has no attribute 'pk'
2018-06-20 13:44:02,109 INFO [EXIT] Disconnect logging socket and
process messages
2018-06-20 13:44:02,109 DEBUG [EXIT] unbinding from 'tcp://0.0.0.0:5555'
2018-06-20 13:44:02,185 INFO Saving 9 test cases
2018-06-20 13:44:02,186 ERROR [EXIT] Unknown exception raised, leaving!
2018-06-20 13:44:02,186 ERROR 'bool' object has no attribute 'pk'
Traceback (most recent call last):
File
"/usr/lib/python3/dist-packages/lava_server/management/commands/lava-logs.py",
line 201, in handle
self.flush_test_cases()
File
"/usr/lib/python3/dist-packages/lava_server/management/commands/lava-logs.py",
line 217, in flush_test_cases
TestCase.objects.bulk_create(self.test_cases)
File "/usr/lib/python3/dist-packages/django/db/models/manager.py", line
85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/lib/python3/dist-packages/django/db/models/query.py", line
441, in bulk_create
self._populate_pk_values(objs)
File "/usr/lib/python3/dist-packages/django/db/models/query.py", line
404, in _populate_pk_values
if obj.pk is None:
AttributeError: 'bool' object has no attribute 'pk'
2018-06-20 13:44:02,186 INFO Saving 9 test cases
Any idea on how to fix this?
Thanks
Regards
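For context on the traceback above (again, not a confirmed diagnosis of the LAVA bug): Django's bulk_create() reads .pk on every object it is handed, so the error indicates that a plain bool ended up in the list of pending test cases instead of a TestCase instance. A minimal, Django-free sketch of that failure mode, with hypothetical names:

# bulk_create() expects model instances; anything else (here a bool)
# triggers AttributeError: 'bool' object has no attribute 'pk'.
class FakeTestCase:
    """Stand-in for a Django model instance with a primary key."""
    def __init__(self, name):
        self.pk = None      # unsaved instances have pk = None
        self.name = name

def fake_bulk_create(objs):
    """Mimics the pk check Django performs in _populate_pk_values()."""
    for obj in objs:
        if obj.pk is None:  # raises AttributeError if obj is a bool
            pass            # Django would assign a pk on insert here
    print("inserted", len(objs), "rows")

pending = [FakeTestCase("case-1"), FakeTestCase("case-2")]
fake_bulk_create(pending)   # works

pending.append(True)        # a bool appended by mistake instead of an instance
try:
    fake_bulk_create(pending)
except AttributeError as exc:
    print("crash:", exc)    # 'bool' object has no attribute 'pk'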
Hello everyone,
I have two cases in which I need to reboot my device during tests:
1. Reboot is active part of the test (e.g. store some persistent settings, reboot, check if persistent settings are correctly loaded after reboot)
2. Reboot is triggered and has to be evaluated (e.g. activate watchdog, stop resetting it, wait, check if system reboots automatically)
How can I handle these two cases in LAVA?
Mit freundlichen Grüßen / Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks(a)garz-fricke.com
www.garz-fricke.com
SOLUTIONS THAT COMPLETE!
Registered office: D-21079 Hamburg
Commercial register: Amtsgericht Hamburg, HRB 60514
Managing directors: Matthias Fricke, Manfred Garz