The 2016.2 release marks the start of the migration to the pipeline
dispatcher. The previous dispatcher code, the documentation for that
code, and the support for JSON job formats are now *deprecated*. As a
result, the corresponding lava-server code has also been *deprecated*;
this includes Bundles, Filters and Image Reports (original and 2.0).
The deprecated Dashboard is replaced by Results, with Queries and
Charts which have been ported to the data produced by pipeline
test jobs.
Future development within LAVA will be based solely on the pipeline
dispatcher and the associated server-side objects. Support for the
deprecated dispatcher will be maintained in future releases only for
the duration of the migration and *will be removed* once the migration
is complete. The migration is expected to take most of 2016.
Individual devices and particular items of job support will be
gradually disabled until the migration completes for all devices and
all jobs.
The migration process involves:
0: Adding pipeline support for devices and deployment methods.
1: Migrating user submissions and automated submissions to pipeline
support (a minimal submission sketch follows after this list). For the
Linaro LAVA instances, this will happen within the particular teams
inside Linaro.
2: Making selected devices exclusive to the pipeline so that JSON jobs
can no longer be submitted for those devices. At this point, support
for JSON jobs which rely on those devices will cease.
3: Once all devices are exclusive, rejecting all JSON submissions.
4: Disabling the old scheduler daemon on the LAVA instance.
At some point, probably in 2017 (a month or two after JSON submissions
cease), the migration will be completed by:
5: Removal of code support for the old dispatcher in the
lava-dispatcher and lava-tool codebases, along with the associated
documentation.
6: Removal of code support for database objects specific to the old
dispatcher, like Bundle, in lava-server, with the associated
documentation. This will involve the deletion of the Bundle data as
well as image reports, filters and BundleStreams. TestJob objects
which are not pipeline jobs will also be deleted.
7: Removal of the rest of the code which is still dependent upon, or
only used to support, deprecated objects and functions.
8: Modification of code which only exists to isolate the deprecated
objects from the pipeline objects, e.g. newly created jobs or devices
will default to pipeline support.
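As an illustration of step 1, the only change a submission script
typically needs is to hand the scheduler a YAML (pipeline) definition
instead of a JSON one. The following is a minimal sketch, assuming the
documented scheduler.submit_job XML-RPC call and a token-enabled
account; the hostname, credentials and job file are placeholders:

    # Minimal sketch, not a supported tool: the account, token, server
    # and job file below are placeholders.
    import xmlrpc.client   # xmlrpclib on Python 2

    USERNAME = "myuser"
    TOKEN = "my-api-token"
    SERVER = "lava.example.com"

    server = xmlrpc.client.ServerProxy(
        "https://%s:%s@%s/RPC2" % (USERNAME, TOKEN, SERVER))

    # A pipeline job definition is plain YAML text; the old JSON jobs
    # used the same call with a JSON string instead.
    with open("qemu-pipeline-job.yaml") as handle:
        job_id = server.scheduler.submit_job(handle.read())

    print("submitted job %s" % job_id)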
LAVA instances outside Linaro will need to manage their own migrations
during 2016 if updates are to be applied during 2017.
It is not possible to retain the database objects without the
deprecated code, so owners of individual instances may choose to
create an archive instance which has no online devices, accepts no
submissions and simply provides access to a snapshot of the data at
the time that JSON submissions ceased. To maintain access to the
archived data, this instance must not be upgraded to LAVA releases
made after the archive is created; the original devices should persist
in the database but be kept in the Offline state. Devices should also
have submissions restricted to a single administrator account for
which no API token exists.
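For an archive instance, the device state can be frozen from a Django
shell on the server. The following is only a sketch, assuming the
lava_scheduler_app model fields of contemporary releases
(Device.status, put_into_maintenance_mode, is_public and the
per-device user field); check the names against the release you
archive with:

    # Sketch only: run inside "sudo lava-server manage shell".
    # Assumes 2016-era lava_scheduler_app fields; adjust to your release.
    from django.contrib.auth.models import User
    from lava_scheduler_app.models import Device

    admin = User.objects.get(username="archive-admin")  # placeholder

    for device in Device.objects.all():
        # Take every device offline so nothing can be scheduled on it.
        if device.status != Device.OFFLINE:
            device.put_into_maintenance_mode(admin, "archived instance")
        # Restrict submissions to the single administrator account.
        device.is_public = False
        device.user = admin
        device.save()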
More information on the details of the migration and the state of
support for jobs using the old dispatcher will be announced on this
mailing list.
As a reminder, lava-announce is a read-only list. Posts to this list
are only made by the LAVA software team. Replies need to be directed
to
linaro-validation(a)lists.linaro.org; your email client should do this
for you, using the Reply-To header added by Mailman.
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hi,
I would like to use LAVA for coreboot.
coreboot is an open-source bootloader for multiple architectures,
including replacing the BIOS/UEFI on x86.
coreboot needs to be installed on a flash chip which is soldered onto
the mainboard.
To flash/install coreboot you need an external SPI flasher; the
BeagleBone Black, for example, can do this. Using an SPI test clip,
it's possible to flash the SPI chip in-circuit. The SPI chip (and
partially the mainboard) needs to be powered externally.
My concrete test setup is a Thinkpad X60.
To flash coreboot, I have to do the following:
1 disconnect the power to the DUT
2 connect external 3.3V to the SPI chip (and partially the DUT)
3 connect the BeagleBone Black's SPI pins to the DUT
4 flash coreboot via the BeagleBone Black
5 disconnect the BeagleBone Black
6 disconnect the external power supply
7 power the X60
8 press the power button of the DUT
9 run the tests on Linux
I've managed to do all the hardware work:
* control the power to the DUT via pdudaemon using a Ubiquiti power
switch [1]
* control the external 3.3V via Raspberry Pi GPIOs on a relay card
* control the power button via Raspberry Pi GPIOs on a relay card
* put the BeagleBone Black's SPI pins into high-Z via DTS overlays [2]
A rough sketch of how these pieces could be driven in sequence follows
below.
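The specifics in the sketch are all assumptions on my side: pdudaemon
driven through its pduclient command-line tool, the relays on
Raspberry Pi GPIOs exposed via sysfs, and flashrom run on the
BeagleBone Black over ssh. The hostnames, pin numbers, PDU ports and
paths are placeholders.

    # Hypothetical flashing sequence for steps 1-9: hostnames, GPIO
    # pins, PDU ports and paths are placeholders, and the
    # pduclient/flashrom invocations describe my own setup, not LAVA.
    import subprocess
    import time

    PDU_HOST = "pdu01"        # PDU known to pdudaemon
    DUT_PORT = "1"            # PDU port feeding the X60
    SPI_3V3_GPIO = "17"       # relay switching the external 3.3V
    POWER_BTN_GPIO = "27"     # relay wired across the power button
    BBB = "root@beaglebone"   # BeagleBone Black doing the SPI flashing

    def gpio(pin, value):
        # Drive a Raspberry Pi GPIO via sysfs (assumes the pin is
        # already exported and set to output).
        with open("/sys/class/gpio/gpio%s/value" % pin, "w") as f:
            f.write(value)

    def pdu(command):
        # Switch the DUT mains power through pdudaemon's client.
        subprocess.check_call(["pduclient", "--daemon", "localhost",
                               "--hostname", PDU_HOST,
                               "--port", DUT_PORT,
                               "--command", command])

    pdu("off")                       # 1: cut power to the DUT
    gpio(SPI_3V3_GPIO, "1")          # 2: external 3.3V to the SPI chip
    time.sleep(1)
    subprocess.check_call(["ssh", BBB,   # 3+4: flash via the BBB
                           "flashrom -p linux_spi:dev=/dev/spidev1.0 "
                           "-w /tmp/coreboot.rom"])
    # 5: the BBB SPI pins go back to high-Z via the DTS overlay
    gpio(SPI_3V3_GPIO, "0")          # 6: remove the external 3.3V
    pdu("on")                        # 7: power the X60 again
    gpio(POWER_BTN_GPIO, "1")        # 8: press the power button
    time.sleep(0.5)
    gpio(POWER_BTN_GPIO, "0")
    # 9: from here the normal boot/test actions take over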
TL;DR:
I'm looking for feedback on implementing LAVA support for coreboot.
What's a good way to integrate the coreboot flashing flow into LAVA?
I've started writing a pipeline driver for flashrom [3].
How should I integrate the Raspberry Pi's GPIOs controlling important
things? Using pdudaemon again?
Do you use safeguards to protect devices from damage?
E.g. it's possible to enable both power supplies for the DUT at the
same time, which might kill one of them.
[1] https://github.com/mattface/pdudaemon/pull/1
[2] https://github.com/lynxis/bbb_highz_spi
[3] https://github.com/lynxis/lava-dispatcher/blob/master/lava_dispatcher/pipel…
--
Alexander Couzens
mail: lynxis(a)fe80.eu
jabber: lynxis(a)fe80.eu
mobile: +4915123277221
gpg: 390D CF78 8BF9 AA50 4F8F F1E2 C29E 9DA6 A0DF 8604
Hi,
I'm trying to set up a pipeline LAVA instance and the documentation
is a bit unclear to me.
What exactly does it mean by
"Pipeline devices need the worker hostname to be set manually in the
database"?
In the database I can only see a worker_id. Or does it just mean I
have to set the worker within Django?
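My current understanding, purely as a sketch, is that the Device row
has to be pointed at a Worker row, either through the Django admin or
from a Django shell. I'm assuming here that the 2015.12-era Device
model has a worker_host foreign key to Worker, which may not match
other releases:

    # Sketch only: assumes Device.worker_host is a foreign key to
    # Worker, as in 2015.12-era lava-server.
    # Run inside: sudo lava-server manage shell
    from lava_scheduler_app.models import Device, Worker

    worker = Worker.objects.get(hostname="my-slave.example.com")
    device = Device.objects.get(hostname="qemu01")  # placeholder names

    device.worker_host = worker  # the "worker hostname" in the database
    device.save()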
My goal is to set up a LAVA master and a slave using the new pipeline
framework.
The master is set up on Debian Jessie 8.2 using the LAVA production
repository as well as backports.
I connected the slave to the master using the .service file. tcpdump
shows some 'PING' and 'PONG' traffic between the two hosts.
Should I see the slave somewhere in the Django application?
In the Admin/Worker view I only see the master.
master: lava-server 2015.12-3~bpo8+1
slave: lava-dispatcher@c391b53231fba978532327d2bdff5173fcb55db4
Best,
lynxis
--
Alexander Couzens
mail: lynxis(a)fe80.eu
jabber: lynxis(a)fe80.eu
mobile: +4915123277221
gpg: 390D CF78 8BF9 AA50 4F8F F1E2 C29E 9DA6 A0DF 8604
Hi there,
what is the development state of lava-android-test?
* the last development happened a year ago; the last tag is 2014.01
* it is not in Linaro's patchwork (https://patches.linaro.org/)
* there is no Debian package (there is an empty git repo
pkg/lava-android-test.git)
* the last build isn't on PyPI (well, other packages seem to have even
been pulled from PyPI)
Is it considered done, or just abandoned? (If so, why?)
I'm considering using it for testing Android firmware builds; it looks
nice...
Thanks for the info.
Kind regards,
Petr
PS: I'm sorry for posting to both lists; I'm not sure which one is the
right one.
Hi,
With the help of Amit Khare we managed to update the metadata on all
YAML files in the test-definitions.git repository. I added an
automated check for the metadata format and contents, so if any patch
has invalid metadata, the sanity check will fail. This change was
introduced because the docs are generated automatically based on the
metadata. It doesn't affect any actual testing results.
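For anyone curious what such a check can look like, this is only a
minimal sketch; the required keys below are an assumption on my part,
and the authoritative list lives in the sanity check in
test-definitions.git:

    # Minimal sketch of a metadata sanity check for test definition
    # YAML files. The REQUIRED keys are an assumption; see the real
    # check in test-definitions.git for the authoritative list.
    import sys
    import yaml

    REQUIRED = ("name", "description", "maintainer")

    def check(path):
        with open(path) as handle:
            data = yaml.safe_load(handle)
        metadata = (data or {}).get("metadata", {})
        missing = [key for key in REQUIRED if not metadata.get(key)]
        if missing:
            print("%s: missing or empty metadata: %s"
                  % (path, ", ".join(missing)))
            return False
        return True

    if __name__ == "__main__":
        results = [check(path) for path in sys.argv[1:]]
        sys.exit(0 if all(results) else 1)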
Best Regards,
Milosz Wasilewski
Hi all,
In formulating our global backup solution, we've encountered a directory that contains nearly 600,000 empty files. It is in "/var/lib/lava-server/default/media/lava-logs".
Looking in the lava-server code, I *think* the offending line is in models.py:
log_file = models.FileField(
upload_to='lava-logs', default=None, null=True, blank=True)
The worrying thing is, there are a few files that *do* have something
in them: a total of 22M worth to date.
The issue is really about eating up inodes.
Can anyone enlighten us as to whether these files serve any purpose,
and if we can perhaps at least ignore the empty ones?
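In case it is useful to anyone else hitting this, here is a quick way
to measure the problem (a sketch only; the path is the one above):

    # Count the empty files under the lava-logs media directory. This
    # only reads the filesystem; it does not touch the database or
    # delete anything.
    import os

    ROOT = "/var/lib/lava-server/default/media/lava-logs"

    empty = total = 0
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            total += 1
            if os.path.getsize(os.path.join(dirpath, name)) == 0:
                empty += 1

    print("%d of %d files are empty" % (empty, total))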
Thanks
Dave
Hey all,
Our LAVA instance has now run well over 120,000 jobs, which is great.
Unfortunately this means we've got quite a lot of historical data: the
PostgreSQL database is around 45 GB and there is well over 600 GB of
job output data in the filesystem. We'd love to trim that down to
more sensible sizes by pruning the older job information. Are there
any guidelines on how to do that (and/or tools available for it)?
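Purely to frame the question, this is roughly the kind of thing I had
in mind, as a sketch only: it assumes the TestJob model has an
end_time field and that job output lives under
/var/lib/lava-server/default/media/job-output/job-<id>/, and it would
obviously need testing against a copy of the database first. I'd much
rather use something supported:

    # Sketch only, not a supported procedure: assumes TestJob.end_time
    # and the job-output directory layout, and that deleting a TestJob
    # cascades into its related database objects. Test on a copy first.
    # Run inside: sudo lava-server manage shell
    import datetime
    import os
    import shutil

    from django.utils import timezone
    from lava_scheduler_app.models import TestJob

    cutoff = timezone.now() - datetime.timedelta(days=365)

    for job in TestJob.objects.filter(end_time__lt=cutoff):
        output_dir = ("/var/lib/lava-server/default/media/"
                      "job-output/job-%d" % job.id)
        if os.path.isdir(output_dir):
            shutil.rmtree(output_dir)   # drop the on-disk log output
        job.delete()                    # removes the database rows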
--
Sjoerd Simons
Collabora Ltd.
Hi,
I hooked up my Android device to my Linux box using a USB cable and I
can see it using adb devices.
I am not sure how to add this device to my LAVA instance so that I can
schedule test jobs. I did not find much documentation or many examples
on how to configure Android devices in LAVA.
Any pointers in this regard are greatly appreciated.
Thanks
Please let us know if you are using OpenID authentication with LAVA.
Newer versions of django will make it impossible to support
django-openid-auth in Debian unstable and testing. The version of
django-openid-auth in Jessie can continue to be used, so we would like
to know how many users want to continue with this support.
OpenID as a protocol has been dying for some time and Linaro has moved
over to LDAP, which is fine if LDAP is already available.
The time pressure for this change is coming from the schedule to get
the latest django and the latest lava packages into Ubuntu Xenial
16.04 LTS, which means that the support needs to be implemented in the
2015.12 or 2016.1 LAVA releases. This is why this is quickly following
the Trusty change. We have been aware of the issues with
django-openid-auth for some time; it was only when we had completed
the move of the Cambridge lab to LDAP that changes involving
django-openid-auth could be considered.
If you are using OpenID authentication (e.g. using Launchpad or Google
OpenID), please let us know.
If you would like to see some other forms of authentication supported,
also let us know. We can investigate Python Social Auth
(http://psa.matiasaguirre.net/), if there is interest.
If we don't hear from users who want django-openid-auth support for
use on Debian Jessie, we will drop django-openid-auth support from all
lava builds. This will leave LDAP and local Django accounts in
2015.12.
If anyone has experience of other django authentication modules, also
let us know.
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
From: abhishek sharma <abhishek.sharma81(a)gmail.com>
Replies should go to the linaro-validation mailing list - announce has
a limited list of people who can post.
> We are currently using LAVA on ubuntu trusty. Unfortunately we cannot move to debian because of legacy issues.
What legacy issues are these?
> We are planning to continue with trusty in the near future. Please suggest how can we incorporate the latest LAVA release on trusty.
The only LAVA release for trusty is 2015.9 and there will be minor
updates to the documentation in the 2015.9.post1 release which is due
shortly. It will arrive in the trusty-repo on
images.validation.linaro.org and that will be the last change.
At this stage, there is *no* supportable way to incorporate any
releases after 2015.9.post1 on Trusty; that is why we are freezing
Trusty support.
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/