Hi,
I would like to use LAVA for coreboot.
coreboot is an open-source bootloader for multiple architectures,
including replacing the BIOS/UEFI on x86.
coreboot needs to be installed on a flash chip which is soldered onto
the mainboard. To flash/install coreboot you need an external SPI
flasher; the BeagleBone Black, for example, can do this. Using an SPI
test clip, it's possible to flash the SPI chip in-circuit. The SPI chip
(and parts of the mainboard) need to be powered externally.
My concrete test setup is a Thinkpad X60.
To flash coreboot, I have to do the following (a rough sketch of this
sequence in code follows the list):
1. disconnect the power to the DUT
2. connect external 3.3V to the SPI chip (and parts of the DUT)
3. connect the BeagleBone Black's SPI pins to the DUT
4. flash coreboot via the BeagleBone Black
5. disconnect the BeagleBone Black
6. disconnect the external power supply
7. power the X60
8. press the power button of the DUT
9. run the tests on Linux
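Roughly, I imagine driving that sequence from a script like this (a
sketch only: the pin numbers, hostnames and PDU outlet are
placeholders, pduclient talks to pdudaemon, and flashrom runs on the
BBB over ssh using its linux_spi programmer):

    import subprocess
    import time
    import RPi.GPIO as GPIO

    RELAY_3V3 = 17       # placeholder: BCM pin driving the 3.3V relay
    RELAY_POWERBTN = 27  # placeholder: BCM pin driving the power button relay

    def pdu(command):
        # hostname/port are placeholders for the outlet feeding the DUT
        subprocess.check_call(['pduclient', '--daemon', 'localhost',
                               '--hostname', 'ubiquiti', '--port', '1',
                               '--command', command])

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([RELAY_3V3, RELAY_POWERBTN], GPIO.OUT, initial=GPIO.LOW)

    pdu('off')                             # 1. disconnect DUT power
    GPIO.output(RELAY_3V3, GPIO.HIGH)      # 2. external 3.3V on
    # 3./4. flash via the BBB; its SPI pins leave high-z once the
    # spidev overlay is loaded (here via ssh, could be anything)
    subprocess.check_call(['ssh', 'bbb', 'flashrom',
                           '-p', 'linux_spi:dev=/dev/spidev1.0,spispeed=512',
                           '-w', 'coreboot.rom'])
    # 5. re-apply the high-z overlay on the BBB (see [2])
    GPIO.output(RELAY_3V3, GPIO.LOW)       # 6. external 3.3V off
    pdu('on')                              # 7. power the X60
    GPIO.output(RELAY_POWERBTN, GPIO.HIGH) # 8. press the power button
    time.sleep(0.5)
    GPIO.output(RELAY_POWERBTN, GPIO.LOW)
    # 9. from here LAVA would boot Linux and run the tests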
I've managed to do all the hardware work:
* control the power to the DUT via pdudaemon using a Ubiquiti power switch [1]
* control the external 3.3V via Raspberry Pi GPIOs driving a relay card
* control the power button via Raspberry Pi GPIOs driving a relay card
* put the BeagleBone Black's SPI pins into high-Z via DTS overlays [2]
TL;DR:
I'm looking for feedback on implementing LAVA support for coreboot.
What's a good way to integrate the coreboot flashing flow into LAVA?
I started writing a pipeline driver for flashrom [3].
How should I integrate the Raspberry Pi's GPIOs that control all these
things? Via pdudaemon again?
Do you use safeguards to protect devices from damage?
E.g. it's possible to enable both power supplies for the DUT at once,
which might kill one of them.
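One safeguard I have in mind (a purely hypothetical sketch, not an
existing LAVA or pdudaemon feature) is a small interlock object that
all power switching goes through, so the mains supply can never be
enabled while the external 3.3V rail is on, and vice versa:

    class PowerInterlock:
        """Refuse to enable both supplies at once (hypothetical helper)."""

        def __init__(self):
            self.mains_on = False
            self.ext_3v3_on = False

        def set_mains(self, on):
            if on and self.ext_3v3_on:
                raise RuntimeError("refusing: external 3.3V is still on")
            # ... drive the PDU outlet here ...
            self.mains_on = on

        def set_ext_3v3(self, on):
            if on and self.mains_on:
                raise RuntimeError("refusing: DUT mains is still on")
            # ... drive the relay GPIO here ...
            self.ext_3v3_on = on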
[1] https://github.com/mattface/pdudaemon/pull/1
[2] https://github.com/lynxis/bbb_highz_spi
[3] https://github.com/lynxis/lava-dispatcher/blob/master/lava_dispatcher/pipel…
--
Alexander Couzens
mail: lynxis(a)fe80.eu
jabber: lynxis(a)fe80.eu
mobile: +4915123277221
gpg: 390D CF78 8BF9 AA50 4F8F F1E2 C29E 9DA6 A0DF 8604
Hi,
I'm trying to set up a pipeline LAVA instance and the documentation is
a bit unclear to me.
What exactly is meant by
"Pipeline devices need the worker hostname to be set manually in the
database"?
In the database I can only see a worker_id. Or does it just mean I
have to set the worker within Django?
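Is something like the following in 'lava-server manage shell' the
intended way? (Just a guess; I'm assuming the Device model has a
worker_host foreign key to Worker, as the admin view suggests.)

    from lava_scheduler_app.models import Device, Worker

    worker = Worker.objects.get(hostname='my-slave')   # the slave's hostname
    device = Device.objects.get(hostname='my-device')  # the pipeline device
    device.worker_host = worker
    device.save()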
My goal is to set up a LAVA master and a slave using the new pipeline
framework.
The master is set up on Debian Jessie 8.2 using the LAVA production
repository as well as backports.
I connected the slave to the master using the .service file. tcpdump
shows some 'PING' and 'PONG' traffic between the two hosts.
Should I see the slave somewhere in the Django application?
In the Admin/Worker view I only see the master.
master: lava-server 2015.12-3~bpo8+1
slave: lava-dispatcher@c391b53231fba978532327d2bdff5173fcb55db4
Best,
lynxis
--
Alexander Couzens
mail: lynxis(a)fe80.eu
jabber: lynxis(a)fe80.eu
mobile: +4915123277221
gpg: 390D CF78 8BF9 AA50 4F8F F1E2 C29E 9DA6 A0DF 8604
Hi there,
what is the development state of lava-android-test?
* the last development happened a year ago; the last tag is 2014.01
* it's not in Linaro's patchwork (https://patches.linaro.org/)
* there is no Debian package (only an empty git repo, pkg/lava-android-test.git)
* the last build isn't on PyPI (other packages even seem to have been
pulled from PyPI)
Is it considered done, or just abandoned? (And if abandoned, why?)
I'm considering using it for testing Android firmware builds; it looks nice...
Thanks for the info.
Kind regards,
Petr
PS: I'm sorry for posting to both lists; I'm not sure which one is the right one.
Hi,
With the help of Amit Khare we managed to update the metadata on all
YAML files in the test-definitions.git repository. I added an
automated check for the metadata format and contents, so if any patch
has invalid metadata, the sanity check will fail. This change was
introduced because the docs are generated automatically based on the
metadata. It doesn't affect any actual test results.
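For illustration, the check is conceptually along these lines (a
simplified sketch; the real script lives in the repository, and the
exact set of required keys shown here is only an assumption):

    import sys
    import yaml

    # assumed required keys, for illustration only
    REQUIRED_KEYS = {'name', 'format', 'description', 'maintainer',
                     'os', 'scope', 'devices'}

    def check_metadata(path):
        with open(path) as f:
            doc = yaml.safe_load(f)
        metadata = doc.get('metadata', {})
        missing = REQUIRED_KEYS - set(metadata)
        if missing:
            print('%s: missing metadata keys: %s'
                  % (path, ', '.join(sorted(missing))))
            return False
        return True

    if __name__ == '__main__':
        # check every file, then fail if any one of them was invalid
        results = [check_metadata(p) for p in sys.argv[1:]]
        sys.exit(0 if all(results) else 1)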
Best Regards,
Milosz Wasilewski
Hi all,
In formulating our global backup solution, we've encountered a directory that contains nearly 600,000 empty files. It is in "/var/lib/lava-server/default/media/lava-logs".
Looking in the lava-server code, I *think* the offending line is in models.py:
log_file = models.FileField(
upload_to='lava-logs', default=None, null=True, blank=True)
The worrying thing is that there are a few files that *do* have
something in them: a total of 22M worth to date.
The issue is really about eating up inodes.
Can anyone enlighten us as to whether these files serve any purpose,
and whether we can perhaps at least ignore the empty ones?
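For anyone else hitting this, here is roughly how we count the empty
files and would exclude them from the backup set (a sketch using
pathlib; the path is the one above, backup-list.txt is just an
illustrative output file):

    from pathlib import Path

    logs = Path('/var/lib/lava-server/default/media/lava-logs')
    files = [p for p in logs.rglob('*') if p.is_file()]
    empty = [p for p in files if p.stat().st_size == 0]
    print('%d files total, %d empty' % (len(files), len(empty)))

    # feed only the non-empty ones to the backup tool
    with open('backup-list.txt', 'w') as out:
        for p in files:
            if p.stat().st_size > 0:
                out.write(str(p) + '\n')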
Thanks
Dave
Hey all,
Our LAVA instance has now run well over 120,000 jobs, which is
great.
Unfortunately this means we've got quite a lot of historical data: the
PostgreSQL database is around 45 GB, and there are well over 600 GB of
job output data on the filesystem. We'd love to trim that down to more
sensible sizes by pruning the older job information. Are there any
guidelines on how to do that (and/or tools available for it)?
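In the absence of a dedicated tool, would something like this in
'lava-server manage shell' be a sane starting point? (A sketch; I'm
assuming TestJob has an end_time field, and I'm aware Django won't
remove the on-disk job output for us.)

    from datetime import timedelta
    from django.utils import timezone
    from lava_scheduler_app.models import TestJob

    cutoff = timezone.now() - timedelta(days=365)
    old_jobs = TestJob.objects.filter(end_time__lt=cutoff)
    print('would delete %d jobs' % old_jobs.count())
    # old_jobs.delete()  # the job output directories on disk would
    #                    # still need separate cleanup afterwards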
--
Sjoerd Simons
Collabora Ltd.