Hi guys.
This is just a note; perhaps we'll be able to go down this route
eventually, perhaps it will never be viable.
SD cards are utter crap. USB disks are fantastic. I'm currently
recovering my home server that died last night due to a bad SD
card. Luckily I had kept the image, as l-m-c would crash when
attempting to write it to my SD card adapter.
Now I'm just finishing the final configuration bits (sadly those parts
were lost with that card) and have come to realize how painless and
efficient a simple USB disk is. Would it be possible to change our
deployment to use SD _just_ to store a recovery u-boot and recovery
kernel? Could we try to put the master rootfs and test rootfs (and
all those Android partitions) directly on the HDD?
PS: Apart from just working, it seems that current kernels have a big
issue with very slow storage (like thumb drives and SD cards) and
transparent huge pages. I cannot say for certain that this is the
cause on my iMX53, but the "idle" system load on a regular SD card was
around 4-5 (writing to syslog, keeping up with the small squid traffic
of my home users), while the same system, with the same set of
services, running off a run-of-the-mill 2.5" HDD barely climbs above 0.5.
Best regards
ZK
Hi.
I'm building a LAVA service for running fast models. Quite soon (*)
we'll be ready to open alpha access. Right now you will need to
bring your own root filesystem and kernel image to use it. With that
in mind, I wanted to start a discussion about the state of A15 support
in the Linaro kernel(s). I need to understand two things:
1) Are we ready to do automatic builds for A15 kernels?
2) If so, which configs and trees should we consider?
Thanks
Zygmunt Krynicki
Hi.
This is a brainstorm message. I'd like to hear your pros and cons on
merging the scheduler and dashboard. For me the advantages outweigh
the disadvantages:
Pros:
* A simple model for tracking a result back to a job (and back!),
without having to jump through JavaScript hoops and hard
synchronization/availability issues
* A stronger case that lava-{dashboard,scheduler} has all the key
models - devices, tests, results - in one place. Easier for
extension developers to target as their backend.
* Less fuss for extension developers that don't really consider the
implications of this distributed system
* Easier workflow, one less component to release ;-)
* We get to clean up dashboard_app and rename it to lava-*
Cons:
* We may need a flag day with one big migration that renames tables,
etc. (see the sketch after this list)
* This will break SQL-based reports (but I don't think we really
care about those anymore)
* I'm not sure what we'd do with the lava-dashboard and lava-scheduler
projects on Launchpad
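To illustrate the flag-day con, a table rename (assuming we stay on
South for migrations; the table names here are made up) would look
roughly like:

# hypothetical South schema migration for the flag-day rename
from south.db import db
from south.v2 import SchemaMigration

class Migration(SchemaMigration):

    def forwards(self, orm):
        db.rename_table('dashboard_app_bundle', 'lava_dashboard_bundle')

    def backwards(self, orm):
        db.rename_table('lava_dashboard_bundle', 'dashboard_app_bundle')

One such migration per renamed table is what makes this a flag day:
everything has to land and be applied at once.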
PS: We could take this a step further and consider merging with
lava-server (at the source code repo level), where we'd have a clear
separation between the core (server, scheduler, dashboard) and the
extensions that build on the core.
Thanks
ZK
For the past few weeks I've been consistently using namespace packages
(like Zope does, if you are familiar with that) to streamline LAVA
APIs. From the end user's point of view there is no difference; for
developers it is a bit easier, as everything comes from a nested lava
package, like this:
>>> from lava.serial.console import xxx
>>> from lava.utils.data_tables.backends import ...
Technically this means that any Python package (anything with a
setup.py file) can add modules to a shared "lava" namespace. Currently
I've set up two namespaces: lava and lava.utils. Eventually I'd like
to move every lava module there (after a grace period, with possibly
eternal backwards-compatibility modules). Unless there is a strong
voice against adopting this pattern across all new code, I'd like to
request that this be the standard way of writing new modules.
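As a minimal sketch (the package and module names below are made up),
a package contributing to the lava.utils namespace declares it in
setup.py and puts the setuptools boilerplate in each namespace-level
__init__.py:

# setup.py of a hypothetical package contributing lava.utils.example
from setuptools import setup

setup(
    name='lava-utils-example',
    version='0.1',
    packages=['lava', 'lava.utils', 'lava.utils.example'],
    namespace_packages=['lava', 'lava.utils'],
)

# lava/__init__.py and lava/utils/__init__.py must contain only:
__import__('pkg_resources').declare_namespace(__name__)

With that in place, any number of separately installed packages can
add modules under lava.* without owning the whole tree.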
For some tips/guides on how to do this please refer to the following
resources: [1], [2], [3]; also, you can look at lava-serial and
lava-server (the new parts).
[1]: http://stackoverflow.com/questions/1675734/how-do-i-create-a-namespace-pack…
[2]: http://www.python.org/dev/peps/pep-0382/
[3]: http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
Thanks
ZK
Hi folks
This is my first public mail to our own validation mailing list ;-).
I'm CCing a few people that could be interested in this topic.
Now to the topic at hand. I'm trying to make it possible to use fast
models (in any way) by the end of this week.
I'm tracking this blueprint [1] but right now it is just an empty
shell as I'm still wrapping my head around this.
A few observations so far:
1) Running a fast model in a 32-bit VM is not practical; it seems to
crash after a few to a dozen minutes due to insufficient address space
2) I still have not managed to get autologin to work. This is mildly
annoying but should be possible to get past.
3) A poweroff in the machine causes a kernel panic
4) A reboot in the machine seems to just hang and do nothing
5) After much tinkering (installing additional packages, etc, etc) I
managed to run stream; I've attached the bundle if anyone is
interested.
6) If you need to modify the rootfs in any way, it is about 10x faster
to do it via chroot + qemu than to run the fast model (on a Core i7);
see the sketch after this list
7) The system boots with nfsroot; the IP of the server and the
directory to mount are burned into the .axf file
8) An axf file is created from a kernel image and some other bits that
I don't yet understand. It seems that boot-wrapper is a kind of
bootloader for Linux in the fast model. I've found at least two
sources for it: [2] (from the wiki page by Peter Maydell) and [3]
(from Michael Hope)
9) I'm not at all sure why we need to cross-compile qemu for ARM (as
instructed in [4]) - is this so that we can test virtualization
support in A15?
10) The instructions missed a few things and I had to run apt-get -f
install and install extra cross packages. In the end I'm not sure how
much of this is required only to build qemu, how much to build the
rootfs, and how much to build the kernel. It seems too messy ATM; I'd
like to split this into separate problems that could be automated with
standalone scripts we could fire off in a VM or chroot (so that we are
sure we got the dependencies right and everything really works fine)
11) I'm not at all sure what the difference is between model_shell's
terminal_[0123] and uart[0123]. That is, I'd like to know what those
options do with a little more precision. Looking at the PDFs that
came with the program did not help.
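To expand on point 6, the chroot + qemu trick is roughly the following
(a sketch; the rootfs path is an example from my setup, and it assumes
qemu-user-static is installed so binfmt_misc can run ARM binaries):

# sketch of the chroot + qemu trick from point 6
import shutil
import subprocess

rootfs = '/srv/fastmodel/rootfs'
# binfmt_misc (registered by qemu-user-static) resolves this path
# inside the chroot, so the binary must be copied in first
shutil.copy('/usr/bin/qemu-arm-static', rootfs + '/usr/bin/')
# any command run in the chroot now executes ARM binaries via qemu
subprocess.check_call(['chroot', rootfs, 'apt-get', 'update'])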
The major roadblock:
*) I'm still trying to parameterize the kernel command line. There
seem to be a few possible ways to do it: one based on just rebuilding
the axf file (longest/ugliest); one based on changing the boot-wrapper
bootloader to accept some extra data at the end and pass it down to
the kernel (not my specialty, could take a moment to get working);
and, looking at the model_shell -l output, it seems there is a way to
pass something known as 'cluster.cpu0.semihosting-cmd_line='. Perhaps
that could be used to put some data that the kernel can read at runtime.
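If the semihosting parameter pans out, launching the model could be
wrapped like this (a sketch only; I have not verified that the command
line actually reaches the kernel, and the model library and axf names
are made up):

# sketch: pass the kernel command line through the parameter that
# shows up in "model_shell -l" output; whether the kernel can read
# it at runtime is exactly what remains to be verified
import subprocess

cmdline = 'console=ttyAMA0 root=/dev/nfs rw'
subprocess.check_call([
    'model_shell',
    '-C', 'cluster.cpu0.semihosting-cmd_line=%s' % cmdline,
    'RTSM_VE_Cortex-A15x1.so',  # model library (example name)
    'linux-system.axf',         # boot-wrapper image (example name)
])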
That's it.
Keep fighting
ZK
[1]: https://blueprints.launchpad.net/lava-dispatcher/+spec/lava-fast-model-clie…
[2]: git://git.ncl.cs.columbia.edu/pub/git/boot-wrapper
[3]: https://github.com/virtualopensystems/boot-wrapper
[4]: https://wiki.linaro.org/PeterMaydell/A15OnFastModels#Cross_compile_QEMU