I noticed today while looking at a merge proposal that we store some
test data files for LAVA on the wiki[1]. I'm sure that was chosen
because it works and it's easy to manage. However, I'm not a huge fan of
this for a few reasons:
* it's easy to delete an attachment by accident
* Moin provides no revision control over attachments
* I took this job to get out of the wiki business :)
I was thinking of two alternate ways to manage our data files:
1. Create something on people.linaro.org like the toolchain team did[2].
2. Create something like validation.linaro.org/testdata
Seems like option 1 is the easiest and has been done before. Any opinions?
1: https://wiki.linaro.org/TestDataLinkPage
2: http://people.linaro.org/~toolchain/
Hi all,
the Graphics WG needs to run daily graphics-oriented test jobs
(actually, we only need to run them whenever something changes, but I
think this is not implemented yet). We had previously scheduled a daily
job as a cron job in the lab, but it seems that it got lost at some
point.
Submitting test jobs from a remote machine (e.g. my desktop) is not
really an option, since I think it's important for job runs to be
independent of the availability of any one person's computer and
network.
Our needs have begun to grow (we need more test jobs) and I would also
like to experiment a bit, so pestering someone from the validation group
to add or modify scheduled runs doesn't seem very efficient. I would
much prefer to have an official mechanism that didn't require the direct
intervention of the validation group (e.g. "My scheduled jobs" in the
v.l.o web site).
We would like to have a set of test jobs running again for 12.05. How do
you propose we should handle this in the short-term?
Thanks,
Alexandros
Hi all,
Suppose there is a LAVA user, and to avoid taxing my imagination let's
call him Alexandros. He wants to have some jobs submitted automatically
from ci.linaro.org to lava that deposit results in a bundle stream that
only members of linaro can see, which all seems reasonable enough.
Currently though, the story for tokens around this is a bit horrible.
To be able to submit to a /private/team/linaro/... bundle stream, you
have to submit the job as a member of the linaro group in v.l.o.
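(For context, a token-based submission currently looks roughly like the
sketch below -- the user name, token and job file are placeholders and
I'm going from memory on the exact URL form, but the point is that the
token always belongs to one particular user:)

    # sketch only: user, token and job file are made up
    import xmlrpclib

    user = "gfx-daily-job-submitter"   # whoever owns the token on v.l.o
    token = "<token created on v.l.o>"
    server = xmlrpclib.ServerProxy(
        "https://%s:%s@validation.linaro.org/RPC2/" % (user, token))
    job_id = server.scheduler.submit_job(open("daily-gfx-job.json").read())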
I can think of a few ways of doing this, but I don't really like any of
them:
1) Jenkins on ci.linaro.org could use one of alf's tokens, but that
seems a little tied to him (what if he leaves Linaro, etc.)
2) Another way is to create a user that does not correspond to a user on
LP (gfx-daily-job-submitter or something) and add it to the linaro
group on v.l.o. This feels a bit better, but it's not very 'self
service' -- the only way to create such a user is via the admin panel
afaik.
3) A third way is to create a fake user on LP and add it to the ~linaro
team there. This also seems a bit horrible.
There is a fourth way that is actually happening but doesn't help --
create a user on LP and do _not_ add it to ~linaro:
https://launchpad.net/~ciadmin [1].
I don't really have a suggestion for what would be better here. It
feels a bit like the model we have for access and handling tokens is
perhaps a bit too simple currently. What do you guys think?
Cheers,
mwh
[1] this is why ci.linaro.org lost the job-submitting permission -- I
didn't realize ciadmin on v.l.o corresponded to a user on LP!
Hi folks.
I'd like to propose that we keep all the LAVA discussion in
#linaro-lava if possible. This will allow participants who are not
working for Linaro to join and quickly identify people who share an
interest in our common framework. The channel name is a compromise
between the unavailable #lava (already owned by an unrelated project)
and staying in #linaro (which sees a fair amount of unrelated traffic).
I also considered #linaro-validation, but I think we have agreed in the
past that we want to transition away from the "validation" keyword to
"lava".
This is not formally done yet; I'd like to have a ChanServ registration
(I think all #linaro-* channels are automatically managed, though) and
public logs, much like we have for #linaro today.
So, if you support this idea and find yourself talking in, or observing,
a discussion about LAVA in #linaro, please gently suggest that the
participants move to #linaro-lava.
Thanks
Zygmunt Krynicki
Hi gang,
As you may have seen if you actually read all the merge proposal emails
you get <wink>, as part of my quest to get us closer to a continuous
delivery style of development, I've written a script that updates a LAVA
instance to the tip of trunk of all the various LAVA components. I will
soon set things up so that this runs every night on the staging instance
(I need to fiddle things somehow so that a sudo password isn't necessary
to restart the instance after the code is upgraded).
However, Spring is right now using the staging instance to test some
dispatcher changes. If I had set up the cron job I refer to above and
Spring was particularly unlucky, the cron job would wipe out the changes
he's made to the instance while he's testing them. This seems
less than ideal.
Because what we do is always going to be hard to test outside a
production-like environment, I think what we need is yet another
instance: one that we can hack about relentlessly as needed to test
changes that we haven't yet landed. I propose the name "dogfood" for
this (maybe we'd even go so far as having scripts that completely
removed the dogfood instance and could recreate it as needed to ensure a
clean baseline).
So we'd have:
* production: duh
* staging: this is _solely_ used to determine if changes that have
landed to the trunk of some component are safe to deploy to
production
* dogfood: random hacking
I think that potentially we could have some way of bringing up dogfood
instances as needed in our private cloud, but tbh for now one on control
that we share amongst ourselves in a loose way (i.e. check on IRC before
doing stuff with it) would be a useful thing.
What do you guys think?
Cheers,
mwh
Hi all,
I've made a few small changes to lava-server and linaro-django-xmlrpc to
work with Django 1.4. I have clicked around a bit locally, but
certainly not hit each page, so if you can upgrade your local Djangos
and let me know how it goes, that'd be great.
Cheers,
mwh
Part of the whole SD card mux game involves finding the card reader the SD
card for a particular board is plugged into.
Following a lead provided by Zygmunt, it seems that you can address
devices by USB topology by looking in /sys/bus/usb/devices/ -- for
example, the front right USB port on my laptop seems to correspond to
directories called "1-1.2" and "1-1.2:1.0" in here, the back left port
corresponds to "3-1" and "3-1:1.0", and a particular port on a USB hub
plugged into the front left USB port seems to correspond to "2-1.2.3"
and "2-1.2.3:1.0". So this seems to be reasonably straightforward
(although I don't know if the mapping is necessarily stable across
reboots or kernel upgrades -- seems like it should be though).
The next bit of fun is mapping this directory to a block device. Poking
around finds that (with the card reader plugged into the last location
mentioned above) the directory at:
/sys/bus/usb/devices/2-1.2.3:1.0/host31/target31:0:0/31:0:0:0/block/
contains a directory called 'sdb', and indeed the SD card shows up as
/dev/sdb. Playing around shows that the 31 here is a number that gets
incremented each time you plug/unplug the reader (or maybe any USB
device). So this suggests that we could address the card reader in this
port as "2-1.2.3:1.0" and to e.g. run l-m-c targetting the card in the
reader, we should look at
/dev/$(ls /sys/bus/usb/devices/${address}/host*/target*/*:0:0:0/block)
which seems like it would work but frankly also seems like a total hack.
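In Python that hack would look something like this (just a sketch, with
the address from my setup hard-coded):

    # sketch: resolve a USB topology address to its block device node
    import glob, os

    def block_device_for(address):
        pattern = ("/sys/bus/usb/devices/%s/host*/target*/*:0:0:0/block/*"
                   % address)
        matches = glob.glob(pattern)
        if not matches:
            raise RuntimeError("no block device found for %s" % address)
        # the last path component is the kernel name, e.g. 'sdb'
        return "/dev/" + os.path.basename(matches[0])

    print(block_device_for("2-1.2.3:1.0"))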
Do any of you know if this can be done in a cleaner way?
Cheers,
mwh
I was trying to get my local LAVA instance to work with our lab's
database dump. So I had the DB, but not all the media files. LAVA almost
works when you do this, but I hit two small issues that prevented me
from looking at bundle streams and their test runs.
I realize this is an edge case, but it's handy when you want to test
with a real database but don't want all the media files. I'm not sure
this is the exact way we want to fix this, so rather than do a merge
proposal I thought I'd get your thoughts on the patch first.
There were two main issues in the code:
bundle_detail wants to get the "document format" for the media file
associated with the bundle. Since the file didn't exist we hit an
exception. I added a small check to get_document_format to handle
this by returning "n/a".
bundle_detail and test_run_detail wanted to display the size of media
files. These now call a new function, content_size, which catches the
error if the file doesn't exist and simply returns 0.
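To give a feel for the shape of it, content_size is roughly this (from
memory, not the exact diff; 'content' is the Django FileField on the
model):

    # rough shape of the fix, not the exact patch
    def content_size(self):
        """Size of the media file, or 0 if it's missing from MEDIA_ROOT."""
        try:
            return self.content.size
        except (IOError, OSError):
            return 0

get_document_format gets a similar try/except that returns "n/a" when
opening the file fails.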
I'm not sure if this patch is doing too good a job of hiding the fact
that an error exists. Maybe we should also put a conditional in the
template file to display an error when the file doesn't exist?
-andy
I've been setting up a local LAVA lab on my home network. I've used the
lava-deployment-tool and lava-master-image-scripts to get my Beagle
hooked up.
I'm really close to having everything work. However, when I submit a job
I hit a problem with the soft-reboot logic not being able to instruct
u-boot to use the test image.
I'm flashing a 12.01 pre-built nano image, but I don't think that
matters. I think the real issue might be something with a newer u-boot
being used by lava-master-image-scripts and the device.conf for
beagle not having the right stuff.
Has anyone else run into this or have suggestions? I tried briefly
updating the .conf file for beagle, but it didn't seem to stick. It
almost looked like we were sending the commands before u-boot was ready,
but that may just be how the output is displayed.
-andy