Hi,
I did a run through the anonymous bundle streams and identified the
following that we want to remove:
All anon ci- bundlestreams:
=======================
/anonymous/ci-asac-linux-linaro-3_0/ not set 36 4
/anonymous/ci-asac-linux-linaro-3_0-build/ not set 4 4
/anonymous/ci-asac-linux-linaro-3_1/ not set 28 4
/anonymous/ci-asac-linux-linaro-3_1-build/ not set 4 4
/anonymous/ci--build/ not set 18 2
/anonymous/ci-build/ not set 15 7
/anonymous/ci-build-build/ not set 4 4
/anonymous/ci-build-with-ext4-3_0/ not set 41 9
/anonymous/ci-build-with-ext4-3_0-build/ not set 12 12
/anonymous/ci-lci-build-changes/ not set 3 3
/anonymous/ci-lci-build-changes-build/ not set 6 6
/anonymous/ci-linux-igloo-kernel-snowball/ not set 1 1
/anonymous/ci-linux-Igloo-Kernel-Snowball/ not set 5 4
/anonymous/ci-linux-igloo-kernel-snowball-build/ not set 1 1
/anonymous/ci-linux-linaro-snowball-cma-test/ not set 3 3
/anonymous/ci-linux-linaro-snowball-cma-test-build/ not set 5 5
/anonymous/ci-linux-maintainers-build/ not set 10 2
/anonymous/ci-linux-maintainers-build-build/ not set 2 2
/anonymous/ci-linux-maintainers-kernel/ not set 5 5
/anonymous/ci-linux-maintainers-kernel-build/ not set 18 18
/anonymous/ci-packaged-linux-linaro-3_0/ not set 62 10
/anonymous/ci-packaged-linux-linaro-3_0-build/ not set 24 24
/anonymous/ci-packaged-linux-linaro-3_1/ not set 190 51
/anonymous/ci-packaged-linux-linaro-3_1-build/ not set 144 144
/anonymous/ci-packaged-linux-linaro-build/ not set 1 1
/anonymous/ci-packaged-linux-linaro-build-build/ not set 1 1
/anonymous/ci-packaged-linux-linaro-lt-3_1/ not set 10 5
/anonymous/ci-packaged-linux-linaro-lt-3_1-build/ not set 30 30
/anonymous/ci-Test-TI-working-tree/ not set 21 19
/anonymous/ci-Test-TI-working-tree-build/ not set 28 28
/anonymous/ci-TI-working-tree/ not set 28 15
/anonymous/ci-TI-working-tree-build/ not set 18 18
/anonymous/ci-triage-build/ not set 3 3
/anonymous/ci-upstream/ not set 119 48
All anon lava-android-:
===================
/anonymous/lava-android-leb-panda/ not set 910 770
/anonymous/lava-android-panda/ not set 13 13
/anonymous/lava-android-test/ not set 7 7
More random picks:
=================
/anonymous/anonymous/ not set 8 5
/anonymous/bootchart/ not set 8 4
/anonymous/chander-kashyap/
/anonymous/lava-daily/
/anonymous/linux-linaro-tracking/
/anonymous/miscellaneous/
/anonymous/panda01-ltp/ Restored from backup 45 44
/anonymous/panda01-posixtestsuite/ Restored from backup 1 1
/anonymous/panda01-stream/ Restored from backup 1 1
/anonymous/qemu-testing-dumping-ground/ not set 12 7
/anonymous/smem/ smem 2 1
/anonymous/USERNAME/ not set 87 69
If we can just hide them, that's also fine, I guess.
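In case it helps whoever actions this, here is a minimal Django-shell
sketch of what hiding or deleting these could look like. The model and
field names used here (BundleStream, pathname, is_public) are assumptions
about the dashboard app, so please check them against the real schema
before running anything:

    # Hypothetical sketch only -- verify model/field names against the
    # actual dashboard_app schema before use.
    from dashboard_app.models import BundleStream

    doomed = BundleStream.objects.filter(pathname__startswith='/anonymous/ci-')
    print(doomed.count())        # sanity check against the list above

    # hide them, if the model exposes a visibility flag:
    # doomed.update(is_public=False)

    # or remove them outright (this cascades to their bundles):
    # doomed.delete()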
--
Alexander Sack
Technical Director, Linaro Platform Teams
http://www.linaro.org | Open source software for ARM SoCs
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
cc LAVA mailing list.
On 20 June 2012 06:19, Zach Pfeffer wrote:
> Hey Andy,
>
> During our call with ARM (invite on the way), the fact that LAVA has a
> network dependency came up again.
Could you expand on the network dependency issue?
Filing a bug is even better.
> We also have an issue with Connectivity Manager testing. Is there anything that can be done in
> 12.07 (SD mux) to get over this?
Two jobs:
http://validation.linaro.org/lava-server/scheduler/job/23593/log_file
and
http://validation.linaro.org/lava-server/scheduler/job/23602/log_file
If you grep for root.tgz and look at the time stamps around it, the
first one took a little over 10 minutes. The second one I started maybe
an hour ago or so and it's still deploying root.tgz, so I have no idea
how long it will wind up taking. Both of these are the same image, same
job (except that I added reboots between tests for the second one to see
if that would cause any issues, but we are not yet to that point so it
shouldn't be affecting anything), and both of them ran on snowball02.
So unless I'm missing something, these "timestamp in the future"
messages are not only creating insanely large logfiles for us, but are
also drastically increasing the amount of time it takes to deploy the
root filesystem.
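To make that comparison less manual, a rough helper along these lines
could diff the first and last dispatcher timestamps on lines mentioning
root.tgz. This is only a sketch, not part of LAVA, and it assumes the
"<LAVA_DISPATCHER>YYYY-MM-DD HH:MM:SS AM/PM" prefix visible in the
excerpt further down:

    # Sketch only: estimate the root.tgz deploy window from a dispatcher log.
    import re
    import sys
    from datetime import datetime

    STAMP = re.compile(r'<LAVA_DISPATCHER>(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} [AP]M)')

    def deploy_window(path):
        stamps = []
        with open(path) as log:
            for line in log:
                if 'root.tgz' in line:
                    match = STAMP.search(line)
                    if match:
                        stamps.append(datetime.strptime(match.group(1),
                                                        '%Y-%m-%d %I:%M:%S %p'))
        return (stamps[-1] - stamps[0]) if stamps else None

    if __name__ == '__main__':
        print(deploy_window(sys.argv[1]))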
Additionally, I saw some things like this in the log:
tar: .: time stamp 2012-06-27 16:06:34 is 394095692.518984424 s in the
future
tar: Error is not recoverable: exiting now
0K 0%
3.46M=0.03s
Cannot write to `-' (Broken pipe).
root@master:~# [rc=2]: <LAVA_DISPATCHER>2012-06-27 09:27:39 PM WARNING:
Deploy http://192.168.1.10/images/tmp/tmpVCOkDB/root.tgz failed. 4 retry
left.
<LAVA_DISPATCHER>2012-06-27 09:27:42 PM INFO: Wait 60 second before
retry
So... I guess it's also possible that the delays are introducing
additional errors and instability.
I submitted a simple fix [1] for this which I think might help. If Michael
and Andy are in agreement, I'd like to see it in production quickly to see
if it will make a big difference.
Thanks,
Paul Larson
[1]
https://code.launchpad.net/~pwlars/lava-dispatcher/supress-timestamp-warnin…
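For reference, GNU tar (1.23 and newer) can silence exactly this class of
message with --warning=no-timestamp. Whether or not the branch does it
this way, the sketch below only illustrates the idea; the command
construction is an assumption, not the dispatcher's actual deploy code:

    # Illustrative only -- not lava-dispatcher's real deploy code.
    def build_extract_command(tarball_url, target_dir):
        # Fetch the tarball and unpack it in place; --warning=no-timestamp
        # suppresses the "time stamp ... in the future" messages.
        return ('wget -qO- %s | tar --warning=no-timestamp -C %s -xzf -'
                % (tarball_url, target_dir))

    print(build_extract_command(
        'http://192.168.1.10/images/tmp/tmpVCOkDB/root.tgz', '/mnt/root'))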
Hi all,
About the sdcard partition support in LAVA:
As you all already know, the mmcblk0p8 partition cannot be recognized
in the master image on the iMX and Origen boards.
I have also tried to mount mmcblk0p8 as the sdcard on my Panda, but
the 8th partition's major number is different from the others, as
shown below:
root@linaro: cat /proc/partitions
major minor  #blocks  name

 179     0    7830528 mmcblk0
 179     1      53248 mmcblk0p1
 179     2     991232 mmcblk0p2
 179     3      65536 mmcblk0p3
 179     4          1 mmcblk0p4
 179     5    2097152 mmcblk0p5
 179     6    2097152 mmcblk0p6
 179     7    2097152 mmcblk0p7
 259     0     420864 mmcblk0p8
root@linaro:
The 8th partition cannot be used in Android either.
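As a quick illustration (not LAVA code), the mismatch is easy to spot by
parsing /proc/partitions and flagging any mmcblk0 partition whose major
number differs from the base device's:

    # Illustrative sketch: flag partitions whose major differs from the
    # base mmc device's, as mmcblk0p8 does in the table above.
    def find_odd_majors(path='/proc/partitions', device='mmcblk0'):
        base_major = None
        odd = []
        with open(path) as parts:
            for line in parts:
                fields = line.split()
                if len(fields) != 4 or fields[0] == 'major':
                    continue
                major, name = int(fields[0]), fields[3]
                if name == device:
                    base_major = major
                elif name.startswith(device) and major != base_major:
                    odd.append((name, major))
        return odd

    print(find_odd_majors())   # on the Panda above: [('mmcblk0p8', 259)]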
So at this point the only way for us is to reduce the number of
partitions on our sdcard.
Here are the ideas I have thought of, but I don't know which we should
use:
1. Merge the master boot partition and the target boot partition into
one partition.
Problem: the master boot partition will be modified each time the
target is deployed, which increases the risk of destroying the master
boot partition.
2. Merge the master boot partition and the master rootfs partition into
one partition.
Problems:
1) We would need to switch the rootfs to the vfat file system format,
because U-Boot needs vfat.
2) We would need to change the master initrd.img and merge the two
partitions in our lava-create-master script.
3. Merge the target boot partition and the target rootfs partition into
one partition.
Problems:
1) We would need to switch the rootfs to the vfat file system format,
because U-Boot needs vfat.
2) This is not the default partition layout that target images use,
which increases the risk of failures elsewhere.
4. Leave iMX and Origen unsupported until the SD mux is available.
Any other ideas about how to deal with this problem?
Thanks,
Yongqin Liu
Hey folks,
I was checking today what we could do to optimize the job runs at
LAVA, and noticed that the time it takes to download and cache the
tarball is still the biggest bottleneck we have.
Checking job http://validation.linaro.org/lava-server/scheduler/job/23135,
you can see at the top of the serial log that it took almost one hour
just to download and cache the tarball. As I believe this could also
be related to the connection we have available at the lab, and as
we're already caching the tarballs, wouldn't it be possible to use
zsync or a similar tool to speed up the download?
For the pre-built images we're already generating the zsync metadata
file together with the image itself, so it would be nice to check
whether that would really make a difference from the lab's perspective.
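To make the suggestion concrete, here is a sketch of how the cached copy
could be used as a zsync seed. This is not existing LAVA code; it
assumes a .zsync file is published next to the image (as mentioned
above) and that zsync is installed on the machine doing the caching:

    # Sketch only -- reuse the cached tarball as a zsync seed so only the
    # changed blocks are downloaded.
    import os
    import subprocess

    def fetch_with_zsync(image_url, cache_dir):
        cached = os.path.join(cache_dir, os.path.basename(image_url))
        if os.path.exists(cached):
            # use the previous download as the seed, write the new copy alongside
            subprocess.check_call(['zsync', '-i', cached, '-o', cached + '.new',
                                   image_url + '.zsync'], cwd=cache_dir)
            os.rename(cached + '.new', cached)
        else:
            subprocess.check_call(['wget', '-O', cached, image_url])  # no seed yet
        return cached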
Thanks,
--
Ricardo Salveti de Araujo
Hi all,
I have just fiddled with the appropriate crontabs to get staging updated
at 00:00 UTC (or possibly 00:00 UK time) to trunk branches of various
components. You can see a log of the updates at:
http://validation.linaro.org/staging-updates.txt
and I've updated the way the deployment report handles components in
branches:
http://validation.linaro.org/deployment-report.xhtml
(although as staging is running the tip versions of all branches now,
this isn't really apparent).
The way this works is that there is a checkout of trunk of various
branches in /srv/lava/branches (please DO NOT edit anything in these
branches -- I want them to remain pristine copies of trunk) and symlinks
from /srv/lava/instances/staging/code/current/local to each of these
components. These symlinks get copied forward to a new version of the
code directory if ldt upgrade staging creates one (i.e. if we commit a
new lava-manifest revision).
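For anyone touching this later, the copy-forward step amounts to
something like the sketch below (illustrative only, not the actual
deployment tooling): recreate the per-component symlinks in the freshly
created code directory's local/ so it keeps pointing at the trunk
checkouts:

    # Illustrative sketch, not the real deployment scripts.
    import os

    BRANCHES = '/srv/lava/branches'

    def link_branches(new_local_dir):
        for component in os.listdir(BRANCHES):
            link = os.path.join(new_local_dir, component)
            if not os.path.lexists(link):
                os.symlink(os.path.join(BRANCHES, component), link)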
If you want to use a custom branch on staging, check it out somewhere on
control (not in /srv/lava/instances/staging/code/current/local!) and run
(as instance-manager) /srv/lava/instances/staging/bin/lava-develop-local
$path_to_your_branch. When you're done testing, please run
"/srv/lava/instances/staging/bin/lava-develop-local
/srv/lava/branches/$component" to get back to tracking tip. To see
changes on staging.validation.linaro.org you'll need to restart the
instance.
I'll work on automatically copying the production database to staging
next week I hope.
Cheers,
mwh
Michael,
I was testing out my new dispatcher code on staging.l.o and noticed we
aren't getting any logging when you look at the job details. The
$VIRTUAL_ENV/etc/lava-dispatcher/lava-dispatcher.conf looks correct.
I also added a temporary "print" statement to spit out the logging
level when a job started, and it was "20", which is correct.
I must be missing something subtle about python logging, but I don't
know what.
-andy
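For what it's worth, one classic way to get exactly this symptom with
plain python logging is a logger whose level is set correctly but which
has no handler attached anywhere up the chain, so records are simply
dropped. A generic sketch, not the dispatcher's actual setup:

    # Generic python logging illustration, not LAVA code.
    import logging

    logger = logging.getLogger('lava_dispatcher')
    logger.setLevel(logging.INFO)
    print(logger.getEffectiveLevel())   # 20, matching the temporary print
    logger.info('this is dropped: no handler is attached')

    logging.basicConfig(level=logging.INFO)   # adds a stderr handler to root
    logger.info('this one actually appears')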