Hey folks.
The initial batch of LAVA tests on fast models is now running in the
Linaro validation lab. This initial run is designed to see how the
system behaves in practice and to catch omissions that only show up
away from my computer.
The branch that I've deployed is lp:~zkrynicki/lava-core/demo-3 (it
depends on an unreleased json-document tree from GitHub; if you want
to try it out, there are instructions in the tree).
We've got the licensing server set up for production use and started
an (arguably dummy) 'stream' lava-test run based on
hwpack_linaro-vexpressdt-rtsm_20120511-22_armhf_supported.tar.gz and
linaro-precise-developer-20120426-86.tar.gz, which is the combination
I was using locally.
Over the next few days we'll be working on improving the process so
that we can start accepting more realistic tests. Initially, do expect
a high failure rate due to imperfections in the technology,
configuration issues, etc.
The plan is to move quickly to practical use cases. So far I'm aware
of the switcher tests that the QA team is using, and the KVM tests,
but I have not tried either on a fast model yet.
My question to you is: please give me pointers (ideally simple,
practical instructions that show it working) for the things that you
want to run. I'm pretty ignorant about the Android side of the story,
so any input from our Android team would be appreciated.
Please note that the iteration cycle is very slow. It takes 10+ hours
to do trivial things (running apt-get update, installing a few
packages, compiling a trivial program and running it for a short
moment). Please don't ask us to run monkey for you, as that would be
a waste of time at this point.
My goal is to understand what's missing and to estimate how long given
tests typically take, so that we can see how our infrastructure
measures up to your needs.
Many thanks
Zygmunt Krynicki
--
Zygmunt Krynicki
Linaro Validation Team
Hi there. We'd like to run a Fast Model in the validation lab for KVM
testing. Is there a blueprint for this? What's the status?
Paul and I discussed a rough plan a few months ago. It was along the lines of:
* An x86 machine as the Fast Model host
* An emulated vexpress-a15 as the KVM host
* A vexpress-a15 as the KVM guest
* LAVA treats the Fast Model as a board
* Jobs are spawned into the LAVA scheduler
* Once the KVM host is running, everything else is toolchain specific
and done via shell scripts
The dispatcher would:
* Grab the hwpack
* Grab the nano rootfs
* Build the rootfs with separate kernel, initrd, and dtb using
linaro-media-create
* Start the Fast Model with the boot wrapper, kernel, and rootfs
* Use the console to run the test script
There's more information on the steps required at:
https://wiki.linaro.org/MichaelHope/Sandbox/KVMUseCase
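To make the last two dispatcher steps a bit more concrete, here is a
rough sketch of what the glue could look like. This is not real
dispatcher code; the linaro-media-create flags and the
"run-fast-model" launcher are assumptions/placeholders to check
against the actual tools:

    # Rough sketch (not real dispatcher code) of the last two steps above.
    # The linaro-media-create flags and the "run-fast-model" launcher are
    # assumptions/placeholders; check the actual tools before relying on it.
    import subprocess
    import pexpect

    HWPACK = "hwpack_linaro-vexpress-rtsm.tar.gz"   # hypothetical filenames
    ROOTFS = "linaro-nano.tar.gz"
    IMAGE = "kvm-host.img"

    # Build an image from the hwpack and the nano rootfs (flag names are
    # approximate; extracting the separate kernel/initrd/dtb is elided).
    subprocess.check_call([
        "linaro-media-create",
        "--dev", "vexpress",
        "--hwpack", HWPACK,
        "--binary", ROOTFS,
        "--image-file", IMAGE,
    ])

    # Start the Fast Model; "run-fast-model" stands in for whatever wrapper
    # actually launches the model with the boot wrapper, kernel and image.
    console = pexpect.spawn("run-fast-model --image %s" % IMAGE,
                            timeout=12 * 3600)

    # Drive the emulated serial console: wait for a login prompt, log in,
    # run the toolchain-specific test script, wait for a completion marker.
    console.expect("login:")
    console.sendline("root")
    console.expect("# ")
    console.sendline("sh /root/kvm-tests.sh && echo TESTS-DONE")
    console.expect("TESTS-DONE")
    console.close()

The main point is that once the emulated console is up, everything
reduces to expect/sendline over the serial console, which fits the
"toolchain specific and done via shell scripts" part of the plan.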
-- Michael
Hi, I've been playing around with this "lava" thing a bit :)
From the perspective of the QA team, I'm currently interested in a
very specific use case, but it's one that I think has some things in
common with what others want to do:
We have test results for various {ubuntu, android} images. I'd like
to see a current snapshot of the results for those images. I'd also
like to see the history of previous results on the same type of image.
This way, as we near release time, when we try to figure out which
build is likely to be the best one to use for release candidate
testing, we can glance at the results and see where most of our
automated tests passed.
Take Android as an example, because we already store data there about
which image we are testing, which build number, and which URL. It's
pretty simple to pull all the bundles in a stream. It's also not too
bad to look at a test_run in a bundle and see that it has a certain
attribute. It's not so easy to say "show me all the bundles that have
this attribute". Am I missing something?
I think I brought this up before, but I ran across a project (heck if
I can find it now) that had a Python library providing an API to make
it convenient to find your test data. This is what the ORM *should* be
providing, I know, but it's not always the most convenient for us, as
we've all struggled with this in the past.
This is nothing new; I just wanted to present another use case that
highlights something we're trying to do that doesn't seem to be easy
at the moment, and to ask for suggestions about how to proceed.
Thanks,
Paul Larson
Hello,
I am sending this email to bring up a topic that has been discussed
before [0] and that I believe is still relevant and worth considering:
moving the test definitions out of the lava-test package into their
own project/package.
We also had a discussion on #linaro-lava yesterday; several points
were raised in favour of separating the test definitions, among them:
- A separate test definitions package would allow us to upgrade or
modify existing tests and/or release new ones without having to
release a whole new lava-test package, and the other way around (it
would make maintenance easier and more flexible, something the current
test definitions seem to be lacking).
- It would promote more specialized test definitions; with an
independent test definitions package it should be easier to tailor and
branch it for specific platforms or projects (and even encourage
that).
- It is cleaner, from a development point of view, to keep test
definitions separate from the lava-test tool. Test definitions are not
components but test files, similar in functionality to json job files.
This would also improve maintenance and collaboration efforts.
- A test definitions package should use some kind of versioning to
stay compatible with the lava-test API, and hence avoid any breakage,
while at the same time keeping the packages independent enough that we
can upgrade/modify one without affecting the other (there is a small
sketch of this idea right after this list).
- We could still keep some minimal test definitions in the lava-test
package, though these would be more like 'simple' test cases, serving
mostly as examples for the given lava-test API/Core version.
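To illustrate the versioning point in the list above, here is a
minimal sketch of how the split packages could declare and check
compatibility. Every name in it (the module layout,
REQUIRED_LAVA_TEST_API, definitions_compatible) is invented for the
example and does not exist in lava-test today:

    # lava_test_definitions/__init__.py -- the lava-test API this set of
    # definitions was written against (module and attribute are invented
    # for this example).
    REQUIRED_LAVA_TEST_API = (0, 1)

    # Somewhere in lava-test itself, before loading out-of-tree
    # definitions (again, a made-up helper, not an existing function):
    def definitions_compatible(provided_api, required_api):
        """Accept the definitions only when the major versions match and
        the installed lava-test is at least as new as the definitions
        expect."""
        return (provided_api[0] == required_api[0] and
                provided_api[1] >= required_api[1])

    if __name__ == "__main__":
        CURRENT_LAVA_TEST_API = (0, 1)   # would come from lava-test itself
        print definitions_compatible(CURRENT_LAVA_TEST_API,
                                     REQUIRED_LAVA_TEST_API)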
Apart from the issues discussed above, there are also some valid
points at [1]. Of special interest are the maintenance problem and
keeping proper cross-platform support in the tests.
My initial idea is to have a 'lava-test-definitions' package that
would initially contain all the currently available tests from the
latest lava-test package, update/fix any existing broken tests, and
then get a Launchpad project following the TODO items from the
blueprint at [1]; that should be enough to get us on our way.
This email is intended to start a discussion that hopefully will lead
to a technical decision on the subject, so it is an open door for
ideas and comments; please share yours :)
Cheers and Regards,
--- Luis
[0]
http://lists.linaro.org/pipermail/linaro-validation/2012-April/000350.html
[1]
https://blueprints.launchpad.net/lava-test/+spec/lava-test-dedicated-test-r…
I noticed today while looking at a merge proposal that we store some
test data files for LAVA on the wiki [1]. I'm sure that was chosen
because it works and it's easy to manage. However, I'm not a huge fan
of this for a few reasons:
* easy to delete an attachment
* Moin provides no revision control over attachments
* I took this job to get out of the wiki business :)
I was thinking of two alternative ways to manage our data files:
1. Create something on people.linaro.org like the toolchain team did[2].
2. Create something like validation.linaro.org/testdata
Seems like option 1 is the easiest and has been done before. Any opinions?
1: https://wiki.linaro.org/TestDataLinkPage
2: http://people.linaro.org/~toolchain/
Hi all,
the Graphics WG needs to run daily graphics-oriented test jobs
(actually, we only need to run them whenever something changes, but I
think that is not implemented yet). We had previously scheduled a
daily job as a cron job in the lab, but it seems that got lost at some
point.
Submitting test jobs from a remote machine (e.g. my desktop) is not
really an option, since I think it's important for job runs to be
independent of the availability of any one person's computer and
network.
Our needs have begun to grow (we need more test jobs) and I would also
like to experiment a bit, so pestering someone from the validation group
to add or modify scheduled runs doesn't seem very efficient. I would
much prefer to have an official mechanism that didn't require the direct
intervention of the validation group (e.g. "My scheduled jobs" in the
v.l.o web site).
We would like to have a set of test jobs running again for 12.05. How do
you propose we should handle this in the short-term?
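For what it's worth, the kind of short-term mechanism I have in mind
could be as small as a script run from cron on a lab host, along these
lines. I'm assuming the scheduler exposes submit_job over XML-RPC and
accepts an auth token as the password in the URL; both assumptions
(and the account, token and job file names) would need to be confirmed
by the validation team:

    # A script run from cron on a lab host that submits a stored job file.
    # It assumes the scheduler exposes submit_job over XML-RPC and accepts
    # an auth token as the password in the URL; the account, token and job
    # file names are placeholders.
    import xmlrpclib

    USER = "gfx-daily"                        # hypothetical lab account
    TOKEN = "..."                             # auth token created on v.l.o
    URL = "https://%s:%s@validation.linaro.org/RPC2/" % (USER, TOKEN)

    server = xmlrpclib.ServerProxy(URL)
    job_json = open("gfx-daily-job.json").read()
    job_id = server.scheduler.submit_job(job_json)
    print "submitted job", job_id

Running something like this from cron on a shared lab machine would
keep the jobs independent of anyone's desktop, which is my main
concern above.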
Thanks,
Alexandros
Hi all,
Suppose there is a LAVA user, and to avoid taxing my imagination
let's call him Alexandros. He wants to have some jobs submitted
automatically from ci.linaro.org to LAVA that deposit results in a
bundle stream that only members of linaro can see, which all seems
reasonable enough.
Currently though, the story for tokens around this is a bit horrible.
To be able to submit to a /private/team/linaro/... bundle stream, you
have to submit the job as a member of the linaro group on v.l.o.
I can think of a few ways of doing this, but I don't really like any of
them:
1) jenkins on ci.linaro.org could use one of alf's tokens, but that
seems a little too tied to him (what if he leaves Linaro, etc.)
2) Another way is to create a user that does not correspond to a user
on LP (gfx-daily-job-submitter or something) and add it to the linaro
group on v.l.o. This feels a bit better, but it's not very 'self
service' -- the only way to create such a user is via the admin panel
afaik.
3) A third way is to create a fake user on LP and add it to the ~linaro
team there. This also seems a bit horrible.
There is a fourth way that is actually happening but doesn't help --
create a user on LP and do _not_ add it to ~linaro:
https://launchpad.net/~ciadmin [1].
I don't really have a suggestion for what would be better here. It
feels like the model we have for access and handling tokens is a bit
too simple currently. What do you guys think?
Cheers,
mwh
[1] this is why ci.linaro.org lost the job-submitting permission -- I
didn't realize ciadmin on v.l.o corresponded to a user on LP!
Hi folks.
I'd like to propose that we keep all LAVA discussion in #linaro-lava
if possible; this will allow participants who are not working for
Linaro to join and quickly identify people who share an interest in
our common framework. The channel name is a compromise between the
unavailable #lava (already owned by an unrelated project) and staying
in #linaro (which sees a fair amount of unrelated traffic). I also
considered #linaro-validation, but I think we have agreed in the past
that we want to transition away from the "validation" keyword to
"lava".
This is not formally done yet; I'd like to have a ChanServ
registration (I think all #linaro-* channels are automatically
managed, though) and public logs, much like we have for #linaro today.
So, if you support this idea and find yourself talking about or
observing a discussion about LAVA in #linaro, please gently suggest
that the participants move to #linaro-lava.
Thanks
Zygmunt Krynicki
Hi gang,
As you may have seen if you actually read all the merge proposal
email you get <wink>, as part of my quest to get us closer to a
continuous delivery style of development I've written a script that
updates a LAVA instance to the tip of trunk of all the various LAVA
components. I will soon set things up so that this runs every night on
the staging instance (I need to fiddle things somehow so that a sudo
password isn't necessary to restart the instance after the code is
upgraded).
However, Spring is right now using the staging instance to test some
dispatcher changes. If I had set up the cronjob I refer to above and
Spring was particularly unlucky, the cronjob would wipe out the
changes he's made to the instance while he's testing them. This seems
less than ideal.
Because what we do is always going to be hard to test outside a
production-like environment, I think what we need is yet another
instance: one that we can hack about relentlessly as needed to test
changes that we haven't yet landed. I propose the name "dogfood" for
this (maybe we'd even go as far as having scripts that completely
remove the dogfood instance and recreate it as needed to ensure a
clean baseline).
So we'd have:
* production: duh
* staging: this is _solely_ used to determine if changes that have
landed to the trunk of some component are safe to deploy to
production
* dogfood: random hacking
I think that potentially we could have some way of bringing up dogfood
instances as needed in our private cloud, but tbh for now one on control
that we share amongst ourselves in a loose way (i.e. check on IRC before
doing stuff with it) would be a useful thing.
What do you guys think?
Cheers,
mwh
Hi all,
I've made a few small changes to lava-server and linaro-django-xmlrpc to
work with Django 1.4. I have clicked around a bit locally, but
certainly not hit each page, so if you can upgrade your local Djangos
and let me know how it goes, that'd be great.
Cheers,
mwh