Hi,
I think we could deploy a Squid analyzer on v.l.o to visualize the Squid
access log. It doesn't address the large-tarball caching issue, but it
would be useful for monitoring Squid. I've deployed one on my own PC and
it seems pretty nice.
http://squidanalyzer.darold.net/ (install guide: http://squidanalyzer.darold.net/install.html)
--
Best wishes,
Spring Zhang
Closing out one of my 2012.05 items requires deploying lava-server and
lava-scheduler, so I think I may as well release all components with
interesting changes, make a 2012.04.1 bundle and update production to
that.
Here's the list of unreleased revisions:
lava-android-test 0.4 1 unreleased revision
157 merge with branch that add cache-coherency iozone memtester test
lava-scheduler-tool 0.4 1 unreleased revision
18 Add the lava-scheduler-tool command to go with the resubmit api (Paul Larson)
lava-tool 0.4 1 unreleased revision
175 Open 0.5 for development
lava-test 0.7 4 unreleased revisions
148 fix bug 1002285: set DEBIAN_FRONTEND=noninteractive when installing test dependencies
147 Fix pwrmgmt test dependencies.
146 Merge support for bundle format version 1.3
145 Bump version to 0.8 dev
lava-kernel-ci-views 0.4.0 no unreleased revisions
lava-scheduler 0.13 5 unreleased revisions
170 Add support for looping of health care jobs
169 ensure job_summary_mail.txt is included in the sdist
168 add resubmit_job to the api (Paul Larson)
167 move static files around to be in line with what django.contrib.staticfiles expects
166 post release version bump
lava-dashboard-tool 0.7 1 unreleased revision
159 Open 0.8 for development
lava-dashboard 0.15 1 unreleased revision
314 merge with the Bug #877984 fix branch
lava-dispatcher 0.7.1 5 unreleased revisions
298 skip raising exception when home screen if it is health check jobs
297 Fixed reboot issues
296 Add support to install extra debian packages from lava_test_install (Luis Araujo)
295 Added the config option LAVA_TEST_DEB, to allow the installation of lava-test with apt-get. (Rafael
294 post release bump
lava-raven 0.1.1 no unreleased revisions
lava-server 0.12 8 unreleased revisions
370 fix bug #1002526 by correcting the arguments to the {% url %} tag (Rafael Martins)
369 read SERVER_EMAIL from the settings.conf file
368 Display OpenID login forms only if OpenID auth is enabled (Jean-François Fortin Tam)
367 one more 1.4 thing, just a comment this time
366 more django 1.4ness: admin needs to be in STATICFILES_PREPEND_LABEL_APPS iff we are running before 1
365 another Django 1.4 compatibility thing: django.contrib.messages.middleware.MessageMiddleware is now
364 small fixes to work with django 1.4
363 bump version post release
Does anything look particularly scary to anyone here? Note that it
feels like it's time to upgrade to Django 1.4 now, which is a little
scary -- but staging has been on 1.4 for weeks now.
Cheers,
mwh
Hi all,
Earlier in the week, on a call, Loic mentioned that he had looked at a
Mozilla/Boot2Gecko project that did HDMI capture, and that we could look
at it for two reasons:
1) (the obvious) because we want to do HDMI capture ourselves, and
2) (slightly less obvious) to see how they specify tests that involve
running code on a device and capturing data from an external source.
I don't know if this is what he had in mind, but Google turned up
"Project Eideticker" for me:
https://wiki.mozilla.org/Project_Eideticker
(the reason I'm not sure if this is what Loic had in mind is that it's
more a Fennec/Firefox thing than a B2G thing).
I haven't dug into the test specification side yet, but I did think this
page was interesting:
https://wiki.mozilla.org/Project_Eideticker/DeckLink_Primer
IIRC, we had considered Blackmagic cards before (supported on Linux,
relatively cheap). The limitations on this page look a bit less than
ideal though...
Anyway, interesting reading for you all.
Cheers,
mwh
Multiple conversations over the last week have convinced me that
lava-test, as it currently is, is not well suited to the way LAVA is
changing.
I should say that I'm writing this email more to start us thinking
about where we're going than because of any immediate plans to start
coding.
The fundamental issue is that it runs solely on the device under
test (DUT). This has two problems:
 1) It seems ill-suited to tests where not all of the data
    produced by the test originates from the device being tested
    (think power measurement or HDMI capture here).
2) We do too much work on the DUT. As Zygmunt can tell you, just
installing lava-test on a fast model is quite a trial; doing the
test result parsing and bundle formatting there is just silly.
I think that both of these things suggest that the 'brains' of the
test running process should run on the host side, somewhat as
lava-android-test does already.
Surprisingly enough, I don't think this necessarily requires changing
much at all about how we specify the tests. At the end of the day, a
test definition defines a bunch of shell commands to run, and we could
move to a model where lava-test sends these to the board[1] to be
executed rather than running them through os.system or whatever it
runs now (parsing is different I guess, but if we can get the output
onto the host, we can just run the parsing there).
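
To make that concrete, here is a minimal sketch of what "send the
commands to the board" could look like, assuming a pexpect-driven
console connection; the connection command, prompt string and helper
name are all made up for illustration, not existing lava-test code.

    # Hypothetical sketch: run a test definition's shell steps over the
    # board's console instead of via os.system() on the DUT itself.
    # The connection command and shell prompt are assumptions.
    import pexpect

    PROMPT = "root@linaro:~#"  # assumed prompt in the test image

    def run_on_board(connection_command, steps, timeout=300):
        """Send each shell step to the board and collect its output."""
        conn = pexpect.spawn(connection_command)  # e.g. the serial console command
        conn.expect(PROMPT, timeout=timeout)
        output = []
        for step in steps:
            conn.sendline(step)
            conn.expect(PROMPT, timeout=timeout)
            output.append(conn.before)
        return "\n".join(output)

    # The host-side lava-test would then feed 'output' to the existing parser.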
To actually solve problems 1 and 2 above, though, I think we will want
some extensions.
For point 1, we clearly need some way to specify how to get the data
from the other data source. I don't have any bright ideas here :-)
On the theme of point 2, if we can specify installation in a more
declarative way than "run these shell commands", there is a chance we
can perform some of these steps on the host -- for example, installing
the stream test could really just drop a pre-compiled binary at a
particular location on the testrootfs before flashing it to the SD
card. Tests can already depend on Debian packages being installed,
which I guess is a particular case of this (and "apt-get install"
usually works fine when chrooted into an armel or armhf rootfs with
qemu-arm-static in the right place).
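
As a rough illustration of doing that install work on the host, here is
a sketch that assumes the test rootfs is already unpacked in a directory
with qemu-arm-static in place; the function name, paths and package
names are illustrative, not an existing lava-test API.

    # Hypothetical sketch: perform declarative install steps on the host,
    # against an unpacked armel/armhf test rootfs, before it is flashed.
    import shutil
    import subprocess

    def install_into_rootfs(rootfs_dir, deb_packages=(), files_to_copy=()):
        # Debian dependencies: chroot into the rootfs (qemu-arm-static is
        # assumed to already be at <rootfs>/usr/bin/qemu-arm-static).
        if deb_packages:
            subprocess.check_call(
                ["chroot", rootfs_dir, "apt-get", "install", "-y"]
                + list(deb_packages))
        # Pre-built artifacts: drop them in place, e.g. a pre-compiled
        # stream binary instead of building it on the DUT.
        for src, dest in files_to_copy:
            shutil.copy(src, rootfs_dir + dest)

    # install_into_rootfs("/tmp/testrootfs",
    #                     deb_packages=["stress"],
    #                     files_to_copy=[("./stream", "/usr/local/bin/stream")])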
We might want to take different approaches for different backends --
for example, running the install steps on real hardware might not be
any slower and certainly parallelizes better than running them on the
host via qemu, but the same is emphatically not the case for fast
models.
Comments? Thoughts?
Cheers,
mwh
[1] One way of doing this would be to create (on the testrootfs) a
shell script that runs all the tests and an upstart job that runs
the tests on boot -- this would avoid depending on a reliable
network or serial console in the test image (although producing
output on the serial console would still be useful for people
watching the job).
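
For what it's worth, a sketch of what [1] could look like in practice is
below -- lava-test would write a runner script and a small upstart job
into the testrootfs before flashing. The file names and job contents are
assumptions, not an existing implementation.

    # Hypothetical sketch for [1]: drop a runner script plus an upstart job
    # into the testrootfs so the tests run at boot, with output mirrored to
    # the serial console. File names and contents are illustrative only.
    import os
    import stat

    UPSTART_JOB = """\
    # /etc/init/lava-run-tests.conf (assumed name)
    start on runlevel [2345]
    task
    exec /usr/local/bin/lava-run-tests > /dev/console 2>&1
    """

    def prepare_boot_runner(rootfs_dir, shell_steps):
        runner = os.path.join(rootfs_dir, "usr/local/bin/lava-run-tests")
        with open(runner, "w") as f:
            f.write("#!/bin/sh\n" + "\n".join(shell_steps) + "\n")
        os.chmod(runner, os.stat(runner).st_mode | stat.S_IEXEC)
        job = os.path.join(rootfs_dir, "etc/init/lava-run-tests.conf")
        with open(job, "w") as f:
            f.write(UPSTART_JOB)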
Hi,
After the last few days of poking at things, I think it's time to
finally move fully away from conmux to a connection_command /
hard_reset_command based approach.
I think the actual config file mangling can be done with a short shell
script. Although lava-core will fix this properly, I can spend a quick
10 minutes hacking up lava console and lava powerstab commands to get
around the loss of easy conmux-console based command lines.
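
For the interim hack, something like the sketch below is roughly what I
mean for a "lava console" command -- look up a connection_command in the
device config and exec it. The config path, section and key names here
are assumptions.

    # Hypothetical sketch of a 'lava console' helper: read the device's
    # connection_command from its config file and exec it, replacing the
    # old 'conmux-console <device>' habit. Config path/format are assumed.
    import os
    import sys
    import ConfigParser  # Python 2, as used by the current codebase

    def console(device):
        cfg = ConfigParser.ConfigParser()
        cfg.read("/etc/lava-dispatcher/devices/%s.conf" % device)  # assumed path
        command = cfg.get("device", "connection_command")          # assumed section/key
        os.execvp("sh", ["sh", "-c", command])

    if __name__ == "__main__":
        console(sys.argv[1])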
Thoughts?
Cheers,
mwh
Hey folks.
The initial batch of LAVA tests in fast models is now running in the
Linaro validation lab. This initial run is designed to see how it behaves
in practice and to check for omissions that only show up away from my
computer.
The branch that I've deployed is lp:~zkrynicki/lava-core/demo-3 (it
depends on an unreleased json-document tree from GitHub; if you want to
try it out, there are instructions in the tree).
We've got the licensing server set up for production usage and started an
(arguably dummy) stream lava-test test based on
hwpack_linaro-vexpressdt-rtsm_20120511-22_armhf_supported.tar.gz and
linaro-precise-developer-20120426-86.tar.gz, which is the combination I
was using locally.
Over the next few days we'll be working on improving the process so that
we can start accepting more realistic tests. Initially, do expect a high
failure rate due to imperfections in the technology, configuration
issues, etc.
The plan is to quickly move to practical use cases. So far I'm aware of
the switcher tests that the QA team is using and the KVM tests, but I
have not checked either of them on a fast model in practice yet.
My request to you is to give me pointers (ideally as simple, practical
instructions that show it working) for things that you want to run. I'm
pretty ignorant about the Android side of the story, so any message from
our Android team would be appreciated.
Please note that the iteration cycle is very slow. It takes 10+ hours to
do trivial things (doing apt-get update, installing a few packages,
compiling a trivial program and getting it to run for a short moment).
Please don't ask us to run monkey for you, as it would be a waste of time
at this point.
My goal is to understand what's missing and to estimate how long given
tests typically take, so that we can see how our infrastructure compares
to your needs.
Many thanks
Zygmunt Krynicki
--
Zygmunt Krynicki
Linaro Validation Team
Hi there. We'd like to run a Fast Model in the validation lab for KVM
testing. Is there a blueprint for this? What's the status?
Paul and I discussed a rough plan a few months ago. It was along the lines of:
* An x86 machine as the Fast Model host
* An emulated vexpress-a15 as the KVM host
* A vexpress-a15 as the KVM guest
* LAVA treats the Fast Model as a board
* Jobs are spawned into the LAVA scheduler
* Once the KVM host is running, everything else is toolchain specific
and done via shell scripts
The dispatcher would:
* Grab the hwpack
* Grab the nano rootfs
* Build the rootfs with separate kernel, initrd, and dtb using
linaro-media-create
* Start the Fast Model with the boot wrapper, kernel, and rootfs
* Use the console to run the test script
There's more information on the steps required at:
https://wiki.linaro.org/MichaelHope/Sandbox/KVMUseCase
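
For concreteness, a rough host-side sketch of the dispatcher steps above
follows; the linaro-media-create flags are from memory and the model
launcher command is a placeholder, so treat both as assumptions.

    # Rough sketch of the dispatcher side of the KVM-on-Fast-Model plan.
    # Tool invocations (linaro-media-create flags, the model launcher) are
    # assumptions and would need checking against the real tools.
    import subprocess
    import urllib

    def download(url):
        filename = url.split("/")[-1]
        urllib.urlretrieve(url, filename)
        return filename

    def build_image(hwpack, rootfs, image="kvm-host.img"):
        # Flag names are from memory and may differ in the installed version.
        subprocess.check_call([
            "linaro-media-create", "--dev", "vexpress",
            "--hwpack", hwpack, "--binary", rootfs,
            "--image-file", image, "--hwpack-force-yes"])
        return image

    def start_fast_model(image):
        # Placeholder command: the real invocation depends on the model
        # install and the boot wrapper; it should hand back a console.
        return subprocess.Popen(["run-vexpress-a15-model", "--sd", image])

    def run(hwpack_url, rootfs_url):
        image = build_image(download(hwpack_url), download(rootfs_url))
        model = start_fast_model(image)
        # ... drive the console to boot the KVM host, run the toolchain
        # team's shell scripts as the KVM guest workload, collect results ...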
-- Michael
Hi, I've been playing around with this "lava" thing a bit :)
From the perspective of the QA team, I'm currently interested in a very
specific use case, but it's one that I think has some things in common
with what others want to do:
We have test results for various {ubuntu, android} images. I'd like to see
a current snapshot of the results on those images. I'd also like to see
the history of previous results on the same type of image. This way, as
we near release time, when we're trying to figure out which build is
likely to be the best one to use for release candidate testing, we can
glance at the results and see where most of our automated tests passed.
So, just looking at Android as an example (because we already store data
there about which image we are testing, which build number, and which
URL): it's pretty simple to pull all the bundles in a stream. It's also not too bad
to look at a test_run in a bundle and get the fact that it has a certain
attribute. It's not so easy to say "show me all the bundles that have this
attribute". Am I missing something?
I think I brought this up before, but I ran across a project (heck if I can
find it now) that had a Python library providing an API to make it
convenient to find your test data. This is what the ORM *should* be
providing, I know, but it's not always the most convenient for us as we've
all struggled with this in the past.
This is nothing new; I just wanted to present another use case that
highlights something we're trying to do that doesn't seem to be too easy
at the moment, and to ask for suggestions about how to proceed.
Thanks,
Paul Larson
Hello,
I am sending this email to bring up a topic that has been discussed
before [0], and that I believe is still relevant and worth considering:
moving the test definitions out of the lava-test package into their own
project/package.
We also had a discussion in #linaro-lava yesterday, where some points
were raised in favour of (or at least regarding) separating the test
definitions, among them:
- A separate test definitions package would allow us to upgrade or modify
existing tests and/or release new ones without having to release a whole
new lava-test package, and the other way around (it would make
maintenance easier and more flexible, something that the current test
definitions seem to be lacking).
- It would promote more specialized test definitions; with an
independent test definitions package, it should be easier to tailor and
branch them for specific platforms or projects (and even encourage that).
- It is cleaner, from a development point of view, to keep test
definitions separated from the lava-test tool. Test definitions are not
components, but test files, similar in functionality to json job files.
This also would improve maintenance and collaboration efforts.
- A test definitions package should use some kind of versioning to stay
compatible with the lava-test API, thereby avoiding any breakage, while
at the same time keeping the packages independent enough that we can
upgrade/modify one without affecting the other (see the sketch after
this list).
- We could still keep some minimal test definitions in the lava-test
package, though these would be more like 'simple' test cases, serving
mainly as examples for the given lava-test API/Core version.
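
Following up on the versioning point, below is a toy example of what a
definition living in a separate package might look like. The API_VERSION
handshake is entirely hypothetical, and the lava_test import paths follow
what I believe is the current layout, so please double-check them.

    # Toy example of a test definition living in a separate
    # 'lava-test-definitions' package. The API_VERSION field is the
    # hypothetical part; the imports follow what I believe is the current
    # lava-test layout and should be verified.
    from lava_test.core.installers import TestInstaller
    from lava_test.core.parsers import TestParser
    from lava_test.core.runners import TestRunner
    from lava_test.core.tests import Test

    # Declared by the definitions package and checked by lava-test at load
    # time, so an incompatible pairing fails loudly instead of mysteriously.
    API_VERSION = "1.0"

    INSTALLSTEPS = ["apt-get install -y stress"]
    RUNSTEPS = ["stress --cpu 2 --timeout 60 && echo stress: pass"
                " || echo stress: fail"]
    PATTERN = "^(?P<test_case_id>\\w+):\\s+(?P<result>\\w+)$"

    testobj = Test(
        test_id="stress-smoke",  # illustrative test name
        installer=TestInstaller(INSTALLSTEPS),
        runner=TestRunner(RUNSTEPS),
        parser=TestParser(PATTERN))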
Apart from the issues discussed above, there are also some valid points
at [1]. Of special interest are the maintenance problem and keeping
proper cross-platform support in the tests.
My initial idea was to have a 'lava-test-definitions' package that could
initially contain all the currently available tests from the latest
lava-test package, update/fix any existing broken tests, and then get a
Launchpad project following the TODO items from the blueprint at [1];
that should be enough to get us on the way.
This email is intended to start a discussion that hopefully leads to a
technical decision on the subject; it is an open door for ideas and
comments, so please share yours :)
Cheers and Regards,
--- Luis
[0]
http://lists.linaro.org/pipermail/linaro-validation/2012-April/000350.html
[1]
https://blueprints.launchpad.net/lava-test/+spec/lava-test-dedicated-test-r…