Hi.
I've registered a blueprint for a proposed new component. Please
read the blueprint and comment here on the mailing list. My goal is to
get the planning for it into this cycle, and perhaps one small blueprint
that would implement some of the code behind it.
You can find the blueprint on the new project page:
https://blueprints.launchpad.net/lava-device-manager/+spec/lava-device-mana…
Thanks
ZK
--
Zygmunt Krynicki
Linaro Validation Team
Hi there. The GCC build time is approaching 24 hours and our five
Panda boards can't keep up. Sounds like a good time to LAVAise the
toolchain build process a bit more...
Mike Hudson and I talked about doing a hybrid LAVA/cbuild as a first
step where LAVA manages the boards and cbuild does the build. The
idea is:
* There's an image with the standard toolchain kernel, rootfs, build
environment, and build scripts
* The LAVA scheduler manages access to the boards
* The LAVA dispatcher spins up the board and then invokes the cbuild
scripts to build and test GCC
This gives me a fixed environment and re-uses the existing build
scripts. In the future these can be replaced with different LAVA
components. There are a few details:
* cbuild self updates on start so the image can stay fixed
* Full results are pushed by cbuild to 'control'
* The dispatcher records the top level log and an overall pass/fail
* No other bundles are generated
Most of this fits in with the existing LAVA features. There are a few
additions, such as a 'run command' JSON action, passing the board name
to the instance, and passing the job name as an argument. These seem OK.
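As a sketch of what such a job might look like, here is a hypothetical
JSON job built in Python. The action names ("deploy_linaro_image",
"run_command") and parameter keys are my assumptions for illustration,
not the actual dispatcher schema:

```python
import json

# Hypothetical hybrid LAVA/cbuild job: deploy a fixed toolchain image,
# then hand control to the cbuild scripts. The board name and job name
# are passed through as arguments, as described above.
job = {
    "job_name": "gcc-4.7-trunk-armv7",
    "target": "panda01",
    "actions": [
        {"command": "deploy_linaro_image",
         "parameters": {"image": "toolchain-build-rootfs.img"}},
        {"command": "run_command",
         "parameters": {
             # cbuild self-updates on start, so the image can stay fixed.
             "cmd": "cbuild.sh --board {board} --job {job_name}"
         }},
    ],
}

print(json.dumps(job, indent=2))
```

The dispatcher would record the top-level log and the overall pass/fail;
full results would be pushed by cbuild itself, so no other bundles are
generated.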
I'd like some of the boards to be fitted with fast USB flash drives
for temporary storage. SD cards are slow and unreliable especially
when GCC is involved.
Thoughts?
-- Michael
The PandaBoard auto builders are having a hard time keeping up with the
longer build and test times of GCC 4.7 and the re-enabled libstdc++ tests.
For reference, here's how much each step costs:
Bootstrap GCC with C, C++, Fortran, and Obj-C: 9 hours
Test GCC: 9.5 hours
Test libstdc++: 4.4 hours
Test libgomp: 0.9 hours
Other tests: 0.2 hours
for a grand total of 23.8 hours. Every new commit triggers a
merge-request build and a trunk build for both A9 and ARMv5, giving
95 hours of compute time. GCC 4.6 takes five hours to build and 5.5 to test.
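The arithmetic behind those totals can be sketched as follows (the
per-step hours above are rounded, which is why the sum lands at 24.0
rather than the quoted 23.8):

```python
# Per-step cost of one full GCC 4.7 run on a Panda board, in hours,
# using the rounded figures reported above.
steps = {
    "bootstrap (C, C++, Fortran, Obj-C)": 9.0,
    "test GCC": 9.5,
    "test libstdc++": 4.4,
    "test libgomp": 0.9,
    "other tests": 0.2,
}
total = sum(steps.values())

# Each new commit triggers a merge-request build and a trunk build,
# each for both A9 and ARMv5: four full runs per commit.
runs_per_commit = 2 * 2
compute_hours = total * runs_per_commit

print(f"one run: {total:.1f} h, per commit: {compute_hours:.1f} h")
```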
This is just an FYI. I'll think about ways of speeding things up or
adding capacity. An i.MX6 with 2 GB of RAM and SATA would be nice...
-- Michael
Hi.
This time it's a small blueprint that can easily be done in one or two
days. I think that, if successful, we could apply this pattern to other
tools to cut down on the number of parts we need to release.
(offtopic)
If successful I'd like to schedule the following merges:
lava-dashboard-tool + lava-dashboard -> lava-dashboard
lava-scheduler-tool + lava-scheduler -> lava-scheduler
lava-server + lava-tool -> lava-core
The last part would define the core dependency of any LAVA component.
This may be a good idea or a bad one, depending on how you look at it.
Feel free to think about it and comment, but please keep the separation
of concerns so that this (dashboard-focused) blueprint can be considered
without runaway discussion.
(ontopic)
The blueprint is on the lava-dashboard-tool project. Please comment here
on the mailing list.
https://blueprints.launchpad.net/lava-dashboard-tool/+spec/lava-dashboard-t…
Thanks
ZK
--
Zygmunt Krynicki
Linaro Validation Team
Hi
Another 2012.04 planning blueprint: a trivial two-hour blueprint with
great potential for future use.
A copy of the description:
Currently all the boards we are interested in have a working MMC
controller. The SD specification defines the CID register, which
contains per-card unique identification numbers. The actual layout of
the register is irrelevant, but if made available from the master image
it could be used to track and identify SD cards in a particular LAVA
installation.
A master image could detect when a post-first-boot image was copied to
another card (thus breaking the uniqueness of first-boot-uuid). The LAVA
device manager could track the wear of each card even if it was moved
from board to board, without the system administrator having to record
the move manually. The device manager could also track SD card
performance as experienced by different devices in different master
images. This could be used as the basis for an additional argument to
timeout expressions in dispatcher jobs.
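As a minimal sketch, assuming the usual Linux sysfs layout (the exact
path can vary per kernel and board), a master-image script could read
the CID like this:

```python
from pathlib import Path

def read_cid(block_device: str = "mmcblk0", sys_root: str = "/sys") -> str:
    """Return the raw CID register of an SD/MMC card as a hex string.

    On most Linux kernels the CID is exported at
    /sys/block/<dev>/device/cid; sys_root is a parameter so the
    function can be exercised against a fake tree in tests.
    """
    cid_path = Path(sys_root) / "block" / block_device / "device" / "cid"
    return cid_path.read_text().strip()
```

A (hypothetical) device manager could then key its per-card wear and
performance records on that string, regardless of which board the card
is currently plugged into.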
See more (work items) at:
https://blueprints.launchpad.net/lava-master-image-scripts/+spec/master-ima…
Thanks
ZK
--
Zygmunt Krynicki
Linaro Validation Team
On 27.03.2012 15:04, James Tunnicliffe wrote:
> On 27 March 2012 12:48, Zygmunt Krynicki <zygmunt.krynicki(a)linaro.org> wrote:
>> On 27.03.2012 13:32, Paul Sokolovsky wrote:
>>> Hello,
>>>
>>> Yong Qin is working on the blueprint
>>> https://blueprints.launchpad.net/lava-android-test/+spec/modify-android-bui…
>>> to add arbitrary custom client-side scripts to Android Build. He
>>> submitted a first implementation of it as
>>> https://code.launchpad.net/~liuyq0307/linaro-android-build-tools/run-custom…
>>> and documented it at
>>> https://wiki.linaro.org/Platform/Android/AndroidBuild-LavaIntegration .
>>>
>>> Unfortunately, I'm not thrilled with that implementation, more
>>> specifically with its "user interface" (i.e. any parts the user
>>> directly faces), for the following reasons:
>>>
>>> 1. The idea behind Android Build's build configs was that they are
>>> short and easy for a human to parse; essentially one glance-over
>>> should be enough to get a good idea of what is being built, even for
>>> an outsider. Consequently, the configs should not be overloaded with
>>> details not related to building. If there's a need for integration
>>> with other systems, we have a good pattern of externalizing such
>>> details and then just referring to them with a single variable in a
>>> build config.
>>>
>>> 2. The whole approach in
>>> https://wiki.linaro.org/Platform/Android/AndroidBuild-LavaIntegration
>>> seems like an attempt to encode a hierarchical structure in shell
>>> syntax, which does not support that well. The end result looks much
>>> like a representation of a graph structure in raw assembler: a
>>> spaghetti mix of data pieces and labels that takes a long time to
>>> wrap one's head around, and is cumbersome and error-prone to write.
>>>
>>>
>>> So, I would like to propose an alternative syntax that solves the
>>> issues above. I should probably start by saying that, since the talk
>>> is about LAVA, using the native LAVA JSON request format immediately
>>> comes to mind. Well, I guess human-writability wasn't a design goal
>>> there, so I'll skip it. It still makes sense to stick to a
>>> general-purpose hierarchical syntax, though. Except that JSON has
>>> two problems: a) it doesn't support comments natively, so we'd need
>>> to pre-process it; b) error reporting/localization may still not be
>>> ideal.
>>
>> Hi, just jumping into the conversation briefly to look at one small
>> technical aspect. I have not really been tracking this effort and I
>> don't understand what it's about.
>>
>> On JSON: I think that point b) is inaccurate. We have very precise
>> syntax error reporting (down to line/column and text range for some
>> errors) and even better format reporting (the JavaScript expression
>> that pinpoints the part of the JSON document that does not match the
>> schema, and the same for the schema itself).
>>
>> Anyway, you have my full support for native json formats. I think that
>> if comments are an issue I can provide a parser that simply ignores
>> comments. We could then have both human-readable documents and
>> strictly machine-readable, schema-backed data.
>
> There is always YAML - very much human readable, supports hierarchical
> structures, references and comments.
CC-ing the list again.
My long-established stance is that YAML is too baroque to be used as a
data interchange format intended for humans to _write_.
Compare the YAML spec: http://www.yaml.org/spec/1.2/spec.html
with the JSON spec: http://www.json.org/
Make sure to scroll down on the YAML spec; that _is_ what people need
to remember in order to understand what they write. The JSON spec is
entirely defined in the railroad diagrams that fit on my screen.
IMHO YAML is filled with so many features that it is virtually
impossible to remember the full syntax. Last time I checked, there was
still no single Python parser that supported everything the spec has to
offer.
While JSON is strict to the point that it's sometimes annoying, it does
not let you make a mistake by accident; there are only about five syntax
elements to remember. YAML is full of comparatively exotic features
that, while otherwise harmless, can easily confuse users (seemingly
random syntax has meaning) and tools (oh, my parser does not support
that feature).
The second argument against YAML is that JSON remains usable on the web,
so you don't need two interfaces. Keeping everything in one format is a
good idea IMHO. This may be of no relevance here, but if you look at the
entire LAVA ecosystem so far, there's no YAML in it, just JSON.
Lastly, we have also built nice validation for JSON, so that not only
the syntax but also the full structure can be checked automatically and
reliably, and I'm not aware of anything comparable for YAML at this
time. The fact that YAML has support for loops and pointers makes the
tools we have useless.
For the sake of human readability _and_ writability, a small subset of
YAML might be better than JSON, but the full spec IMHO suffers just as
much as JSON, only for opposite reasons.
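For what it's worth, the comment-ignoring pre-processing mentioned
earlier could be trivial. Here is a naive sketch; it strips only
full-line //-style comments and does not guard against "//" appearing
inside strings:

```python
import json
import re

# Remove whole lines that contain only a //-style comment before
# handing the text to the stock JSON parser.
_COMMENT = re.compile(r"^\s*//.*$", re.MULTILINE)

def loads_with_comments(text: str):
    return json.loads(_COMMENT.sub("", text))

doc = """
{
    // human-facing note, removed before parsing
    "timeout": 1800,
    "actions": []
}
"""
print(loads_with_comments(doc))
```

The human-readable documents would keep their comments, while everything
that reaches the schema validator is plain, strict JSON.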
My vote is -1.
Thanks
ZK
--
Zygmunt Krynicki
Linaro Validation Team
Hey all,
Lately I've been studying the LAVA QA Tracker by looking at its code,
setting it up on a LAVA dashboard instance in a VM and trying to use it,
and asking some questions on IRC.
I recall asking at the beginning of the month if there were plans on the
Linaro side to allocate developers to finish the QA Tracker in a
specific timeframe, and was told that some development for this
[monthly] cycle should start "next week".
However, apart from my own Bazaar branch (crappy; it does not really
solve/fix anything so far), I haven't heard of any recent development
activity. Or did something happen that I missed?
Also, some people I spoke with were not aware at all of those plans for
this cycle. So I'd just like to check out a few things:
- Do you have a target deadline to meet for this?
- Who are the people who will specifically be working on this?
Thanks :)
Hi, all,
LAVA can now support specifying Android commands in the job file
directly, and this makes it necessary for us to discuss the interface on
the android-build page that is used to generate the expected LAVA job
files.
I have written some description about this interface on the wiki:
https://wiki.linaro.org/Platform/Android/AndroidBuild-LavaIntegration
The BP is
https://blueprints.launchpad.net/lava-android-test/+spec/modify-android-bui…,
and pfalcon has already given some comments there.
You can look there for reference.
I'd like to get more feedback from you about it, to make it more
convenient for us to use.
Please give me your comments.
BRs,
Yongqin Liu