Hi,
I'm trying to deploy my own board in LAVA (a board that isn't yet supported by LAVA).
But something is puzzling me...
According to the LAVA docs (http://lava.readthedocs.org/en/latest/index.html),
the LAVA server seems to work in debug mode, but I still don't know how to
deploy my own board (from Fujitsu) in LAVA. The docs I found there are a bit
too simple.
So, are there any docs that explain deploying a board in LAVA in more detail?
Something that explains the image format, how to write a .json job file, what
fields a .json file can have, what the format is, what each field means and
does, how to add my own tests, and so on.
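For reference, a dispatcher job file of that era was a JSON document roughly
of the shape below. This is only a sketch: the device type, image URL,
server URL, and result stream are all made-up placeholders you would adapt
to your own instance.

```json
{
    "job_name": "fujitsu-board-boot-test",
    "device_type": "my-fujitsu-board",
    "timeout": 18000,
    "actions": [
        {
            "command": "deploy_linaro_image",
            "parameters": {
                "image": "http://example.com/images/board-image.img.gz"
            }
        },
        {
            "command": "boot_linaro_image"
        },
        {
            "command": "submit_results",
            "parameters": {
                "server": "http://my-lava-server/RPC2/",
                "stream": "/anonymous/my-tests/"
            }
        }
    ]
}
```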
Best regards,
Jinghui.Shi
Zygmunt tried to send this to linaro-validation, but it bounced.
Begin forwarded message:
> From: Zygmunt Bazyli Krynicki <zkrynicki(a)gmail.com>
> Date: 9 July 2013 15:48:58 GMT+01:00
> To: Dave Pigott <dave.pigott(a)linaro.org>
> Subject: Fwd: Automatically testing and landing approved LAVA merge requests
>
>
>
> ---------- Forwarded message ----------
> From: Zygmunt Krynicki <zkrynicki(a)gmail.com>
> Date: 2013/7/5
> Subject: Automatically testing and landing approved LAVA merge requests
> To: linaro-validation(a)lists.linaro.org
> Cc: tyler.baker(a)linaro.org
>
>
> Hi.
>
> I'm working on CI that automatically tests and lands approved branches to lp:lava subprojects. Currently the process of adding support for another project is manual. It requires adding a few lines to the tarmac configuration file and creating the required ci-info directory.
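> The per-project stanza added to the tarmac configuration is roughly of this shape; the option names here are from memory and the test path is a placeholder, so treat this as a sketch rather than a reference:
>
> ```ini
> [lp:lava-tool]
> # Command tarmac runs before landing; a non-zero exit blocks the merge.
> verify_command = ./ci-info/run-tests
> commit_message_template = <commit_message> [r=<approved_by_nicks>]
> ```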
>
> I've done that for lp:lava-tool and lp:lava-server (which fails CI today) and I'm doing that for lp:lava-dispatcher (also fails today).
>
> Currently, automatic merges are *already enabled* for lp:lava-tool. Any merge request that has at least one "approved" review comment and has been switched to the "approved" state is landed automatically. A failure to run the tests obviously prevents landing, with an appropriate comment added to the merge request. Merges happen every hour; I can easily make that more frequent if you want.
>
> Currently all testing happens on Ubuntu 12.04.2 LTS. If required, we can enable testing on any OS of choice without much trouble, but someone should add 1GB of RAM to the machine that runs all the tests for proper virtualization support. I was thinking that we could extend that to Fedora and Debian if there is interest.
>
> I'll be rolling this out with Tyler's approval to all the other projects in the LAVA group. Currently the CI information (how to run tests, etc) is inside the lava-landing-tests repository but it can be moved to particular project repositories (the code already supports that).
>
> I've pushed the code to https://github.com/zyga/lava-landing-tests - comments, questions, bugs, and pull requests are all welcome.
>
> Thanks
> ZK
>
---------- Forwarded message ----------
From: Paul Sokolovsky <paul.sokolovsky(a)linaro.org>
Date: 9 July 2013 00:09
Subject: LAVA Server on ARM?..
To: linaro-validation(a)linaro.org, Tyler Baker <tyler.baker(a)linaro.org>,
Antonio Terceiro <antonio.terceiro(a)linaro.org>, Riku Voipio <
riku.voipio(a)linaro.org>
Hello,
So, did anyone actually run a complete LAVA server install (as produced by
lava-deployment-tool) on an ARM board? Myself, after working around the
upstart issues specific to a chroot on the Chromebook, I hit this uwsgi
issue: http://lists.unbit.it/pipermail/uwsgi/2012-January/003349.html .
Riku, have you heard about such problems with embedding Python on ARM? (That
thread mentions problems with at least two packages.)
Thanks,
Paul
On 8 July 2013 14:48, Siarhei Siamashka <siarhei.siamashka(a)gmail.com> wrote:
> And naturally, any PSU rated for just something like 1A will not work
> right; that's common sense. Modern multi-core ARM boards can
> easily consume a lot more than this under load. But at least 2.5A or
> 3A should be sufficient if you don't attach many power hungry USB
> peripherals.
>
AFAICR, that PSU is 2.5A, but I could be wrong. I'll check when I get back
home. But it is cheap, and it could itself overheat, too.
So, for the time being, I'll leave it on 920MHz and keep the bots running
until we sort out the power/temp problem.
Anyway, I'd like to move away from Pandas to something with a bit more
horsepower.
cheers,
--renato
On 4 July 2013 17:13, Siarhei Siamashka <siarhei.siamashka(a)gmail.com> wrote:
> By the way, power consumption is not constant and heavily depends on
> what the CPU is actually doing. And 100% CPU load in one application
> does not mean that it would consume the same amount of power as 100%
> CPU load in another application.
This is really interesting; I hadn't considered it until now. If I
understood correctly, it has to do with which (and how many) paths are taken
inside the cores (CPU, GPU), or how much data is moving between memory,
cache, registers, etc.
For the toolchain there isn't much floating point going on, but if your
compiler auto-vectorizes, you'll probably be using NEON, and there will be a
lot of data movement too, so I'm guessing compilation can stress the CPU
quite a lot overall. And since building a large project (like GCC or LLVM)
takes several hours with very little happening outside the CPU, there isn't
much time for the CPU to cool down between compilation jobs.
> Some time ago, I tossed my Cortex-A9 cpuburn to the ODROID-X people.
> And coincidentally they quickly got the thermal framework properly
> integrated into their kernels and also started to offer optional
> active coolers to their customers :-)
>
Hahahaha! Yes, that's what I'm talking about. I don't think anyone did that
with Pandas or Arndales, and somebody really should.
> In my opinion, the right
> solution for modern ARM SoCs is just to always ensure proper throttling
> support (both in the hardware and in the software). ARM can even call it
> "turbo-boost", "turbo-core" or use some other marketing buzzword ;-)
>
Absolutely! Though, while throttling is the way to go, it may be simpler to
keep it from kicking in with a decent cooling solution than with a lower
frequency. The ODroid folks seem to have understood that pretty well.
It would be a lot easier to convince hardware vendors and cluster builders
to buy huge active coolers than to convince them to lower the CPU frequency.
The former shows a failure in software support, but the latter shows a
failure in system design...
cheers,
--renato
On 4 July 2013 17:34, Siarhei Siamashka <siarhei.siamashka(a)gmail.com> wrote:
> For getting reproducible benchmark results, you just need to ensure
> that thermal throttling never kicks in. If the kernel is compiled
> with cpufreq stats enabled, you can compare these stats before/after
> your benchmark to ensure that it spent all the time running at the
> same designated clock frequency.
>
I did that on my Chromebook: I set it to performance mode and I get pretty
consistent build and benchmark times. It's an art to make sure the benchmark
run time is long enough to give you statistically relevant results while not
being so long that you have to deal with overheating or scheduling issues,
but that, as you say, can be "fixed" by running at lower frequencies; I
don't mind that.
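The before/after comparison can be scripted in a few lines. The sketch below
uses made-up numbers written to temporary files in the time_in_state format
("frequency_khz time_spent") rather than reading sysfs directly, just to
show the arithmetic:

```shell
# Sketch: diff two cpufreq time_in_state snapshots (format: "freq_khz time").
# Real snapshots would come from e.g.
#   /sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state
# taken before and after the benchmark; the numbers here are made up.
printf '1200000 500\n920000 100\n' > /tmp/freq_before
printf '1200000 9500\n920000 100\n' > /tmp/freq_after

# Print, per frequency, the time accumulated during the run. A nonzero
# delta at any frequency other than the designated one means throttling
# kicked in at some point.
awk 'NR==FNR { t[$1] = $2; next } { print $1, $2 - t[$1] }' \
    /tmp/freq_before /tmp/freq_after
```

Here the second line of output would be "920000 0", i.e. no time was spent
at the throttled frequency during the run.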
What I do mind is lowering the frequency of our buildbots, the ones that
should be building and testing in under 20 minutes (like octo-core i7s) but
take 3 hours to do so (on a dual Panda). While the comparison is in no way
fair, reducing the frequency will only make it worse. Coming from a
server-farm culture, where noise, power, and air conditioning are always
topped up and never too expensive, it's hard not to giggle when hearing that
you should lower the frequency to get "expected results".
Yes, ARM devices were designed with the phone market in mind, but today
they're a lot more than that, and if they're to get into the server space,
they have to be consistent, even when cranked up all the way to 11.
> Anyway, I recommend you start the tests for the hardware
> robustness with:
>
> wget https://raw.github.com/ssvb/cpuburn/master/cpuburn-a9.S
> arm-linux-gnueabihf-gcc -o cpuburn-a9 cpuburn-a9.S
>
I'll do that and report on my findings.
Thanks for all the info, it was very educational. ;)
cheers,
--renato
Hi all,
The dashboard display for the juice-base-aosp build on LAVA is broken:
http://validation.linaro.org/lava-server/dashboard/image-reports/juice-base…
When I access it, I get the following error:
Internal Server Error
The server encountered an internal error or misconfiguration and was unable
to complete your request.
Please contact the server administrator, webmaster@localhost and inform
them of the time the error occurred, and anything you might have done that
may have caused the error.
More information about this error may be available in the server error log.
------------------------------
Apache/2.2.22 (Ubuntu) Server at validation.linaro.org Port 80
--
Thanks,
Yongqin Liu
---------------------------------------------------------------
#mailing list
linaro-android(a)lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-android
linaro-validation(a)lists.linaro.org
http://lists.linaro.org/pipermail/linaro-validation