On Tue, Feb 14, 2012 at 6:32 PM, Amit Kucheria <amit.kucheria(a)linaro.org> wrote:
> Hi Hongbo,
>
> Paul (cc'ed) and I would like to integrate suspend/resume testing [1] into
> PM-QA test suite for LAVA.
>
> Can you look at the script and modify it to run inside PM-QA on
> your ARM board?
>
> Ask me if you need more clarifications.
>
> /Amit
>
> [1]
> http://bazaar.launchpad.net/~checkbox-dev/checkbox/trunk/view/head:/scripts…
>
I'd be surprised if it doesn't "just work", but there's a danger here: if
we don't manage to resume via the sysfs rtc wakeup, it will just hang until
the job times out. We might want to be careful about where we run this,
maybe as part of a special run for PM? I'm not sure how best to handle that
yet.
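For reference, the guts of the sysfs rtc wakeup dance are roughly this (a
sketch to illustrate the failure mode, not the actual checkbox script):
# program the RTC to fire 30 seconds from now, then suspend to RAM;
# if the alarm never wakes the board, we sit there until the LAVA job timeout
echo 0 > /sys/class/rtc/rtc0/wakealarm
echo $(( $(cat /sys/class/rtc/rtc0/since_epoch) + 30 )) > /sys/class/rtc/rtc0/wakealarm
echo mem > /sys/power/state
Nothing running on the board can rescue us if the resume never happens,
which is why where (and how) we schedule this matters.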
Thanks,
Paul Larson
Copying the validation team to see if they can help.
The pg_config command is a part of the postgresql-common package, so you
might be able to get around this with:
sudo apt-get install postgresql-common
If you are missing that, you might also be missing the server, which you
need as well:
sudo apt-get install postgresql-client-common postgresql-9.1
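If pg_config still isn't on your PATH after that, it usually lives in the
development package instead (an assumption worth checking on your release):
sudo apt-get install libpq-dev
which pg_config   # should now print a path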
-andy
On 02/16/2012 01:41 PM, Mustafa Faraj wrote:
> Hi Andy,
>
> This is Mustafa from Gumstix.
> I am trying to deploy a full Lava setup on my local machine using the
> lava-deployment-tool, but I am facing some issues which have me thinking
> that I might be doing something wrong.
> I created a virtualenv and got the lava-deployment-tool source in it.
> When I ran ./lava-deployment-tool setup, I got some
> errors (I managed to work around them, and I am happy to share the
> problems/workarounds with you guys if needed). In the next step, where I
> try to create an installation bundle using ./lava-deployment-tool bundle
> requirements-latest.txt, I am getting stuck at the postgresql setup. I
> will append the error log at the end of the email.
> I would greatly appreciate it if you could help me get around this issue.
>
> Thanks a lot,
> -Mustafa
>
>
> "
> Downloading/unpacking lava-test (from -r /tmp/tmp.A3Am9ZfvoX (line 12))
> Using download cache from
> /srv/lava/.downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fl%2Flava-test%2Flava-test-0.3.4.tar.gz
> Running setup.py egg_info for package lava-test
> Installed
> /home/gumstix/Desktop/LAVA2/lava-deployment-tool/build-bundle4/lava-test/versiontools-1.8.3-py2.7.egg
> Running setup.py egg_info for package lava-test
> Downloading/unpacking psycopg2 (from -r /tmp/tmp.A3Am9ZfvoX (line 2))
> Using download cache from
> /srv/lava/.downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fp%2Fpsycopg2%2Fpsycopg2-2.4.4.tar.gz
> Running setup.py egg_info for package psycopg2
> Error: pg_config executable not found.
> Please add the directory containing pg_config to the PATH
> or specify the full executable path with the option:
> python setup.py build_ext --pg-config /path/to/pg_config build ...
> or with the pg_config option in 'setup.cfg'.
> Complete output from command python setup.py egg_info:
> running egg_info
> creating pip-egg-info/psycopg2.egg-info
> writing pip-egg-info/psycopg2.egg-info/PKG-INFO
> writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
> writing dependency_links to
> pip-egg-info/psycopg2.egg-info/dependency_links.txt
> writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
> warning: manifest_maker: standard file '-c' not found
> Error: pg_config executable not found.
> Please add the directory containing pg_config to the PATH
> or specify the full executable path with the option:
> python setup.py build_ext --pg-config /path/to/pg_config build ...
> or with the pg_config option in 'setup.cfg'.
>
> ----------------------------------------
> Command python setup.py egg_info failed with error code 1
> Storing complete log in /home/gumstix/.pip/pip.log
> + die 'Failed to create bundle'
> + echo 'Failed to create bundle'
> Failed to create bundle
> + exit 1
> (LAVA2)gumstix@hwlab:~/Desktop/LAVA2/lava-deployment-tool$
> "
>
Hi
I'd like to remind everyone that we should keep python 3+
compatibility on the roadmap. Python 3 has lots of improvements that
are never going to be backported to 2.7. I'd like to ensure that our
core libraries are python3 compatible today, if possible. Obviously
our web stack is stuck in the 2.7 world, as it is blocked by Django's
evolution, but that may change very soon. I will be posting additional
messages on how we can easily test our code in python3 today. For the
moment, remember that python3 is here to stay and python2 really is
going away for good.
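In the meantime, one cheap way to get a first signal is something like this
(a sketch only; the module directories below are just examples, point it at
whichever trees you care about):
# byte-compiling under python3 flags print statements and other
# syntax-level breakage without running any of the code
python3 -m compileall -q lava_tool lava_dispatcher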
Thanks
ZK
Hi Ricardo, I had a couple of questions about the enablement tests.
1. I was looking at
http://validation.linaro.org/lava-server/scheduler/job/12514 and it seems
that a lot of the time they fail the whole test step. We are trying to
change things so that the step only fails if the test completely fails to
run; if individual tests fail, that should not cause the whole test run to
exit with a failed status. Does that seem sensible to you? Do you see any
reason not to do it that way? LTP is giving us problems here too, but in
the case of the linaro tests, we should probably try to pick something and
be consistent.
2. I noticed a lot of the graphics tests are failing, and looking at the
logs, I see that it couldn't find the display:
root@linaro:~# [rc=0]: su - linaro -c 'DISPLAY=:0 xhost local:'
su - linaro -c 'DISPLAY=:0 xhost local:'
xhost: unable to open display ":0"
I see that the graphics basic enablement test runs before this, so I think
it may be leaving the system in an inconsistent state here. We *could*
reorder things, but what might be nicer is if we could do some cleanup
after this test. Would it be sufficient to just do a cleanup at the end
with something like:
service lightdm stop
kill_xorg
set_lightdm_session default (we'd need to save this at some point, I guess; see the note just below)
service lightdm start
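For the "save it" part, something like this might be enough (a sketch; it
assumes the stock lightdm.conf user-session key and reuses the existing
set_lightdm_session helper):
# remember the session that was configured before the test fiddles with it
SAVED_SESSION=$(sed -n 's/^user-session=//p' /etc/lightdm/lightdm.conf)
# ... run the test ...
# then restore it as part of the cleanup above
set_lightdm_session "$SAVED_SESSION"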
Let me know what you think.
Thanks,
Paul Larson
Hey Guys,
I was thinking about adding some documentation for LAVA roughly based on
the presentation I did last week. Paul mentioned during the presentation
that my suggestions of using virtualenv and pip commands were no longer
needed and that you could use lava-deployment-tool.
I just started to play with lava-deployment-tool on my dev box, but
stopped. After patching it to support Ubuntu Precise, the tool wanted to
start installing lots of things (like apache) and making changes to my
system (like upstart jobs).
It seems to me lava-deployment-tool is really cool, but it is aimed more at
production than at a simpler scenario where you just want to prototype some
changes.
Is my conclusion wrong, or do you guys think it's still worth documenting
my scenario, where you use virtualenv and a few pip commands? I think
documentation like this would end with a "next steps: lava-deployment-tool".
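For the record, the prototype scenario I have in mind boils down to roughly
this (a sketch; the package set is just the obvious one, and I'm assuming
these are all installable from PyPI):
# throwaway sandbox for hacking on LAVA without touching the system
virtualenv lava-sandbox
. lava-sandbox/bin/activate
pip install lava-server lava-dispatcher lava-test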
-andy
Update (sorry to steal the thread, google mail is not nice in this regard).
Prompted by Paul, I decided to try a big.LITTLE config. My test did not
go very well:
1) I tried the RealView Express A15x1-A7x1 model
2) It has a different set of options that can be passed to model_shell;
needless to say, they are not compatible. I tweaked my semihosting line to
look like this:
-C coretile.cluster0.cpu0.semihosting-cmd_line="--kernel $KERNEL --
console=ttyAMA0 mem=512M ip=dhcp root=/dev/nfs nfsroot=$NFSROOT,tcp rw
earlyprintk $@"
3) I did not get the kernel to boot at all. I suspect that our boot
wrapper is not compatible, or that I misconfigured the kernel or something
in between. I did not even get the X11 terminal window that I usually
get.
4) I feel lame for not trying earlier.
Ideas welcome
ZK
On Wed, Feb 15, 2012 at 12:31 AM, Michael Hudson-Doyle
<michael.hudson(a)canonical.com> wrote:
> On Tue, 14 Feb 2012 20:24:51 +0100, Zygmunt Krynicki <zygmunt.krynicki(a)linaro.org> wrote:
>> On Tue, Feb 14, 2012 at 3:26 AM, Michael Hudson-Doyle
>> <michael.hudson(a)canonical.com> wrote:
>> > On Mon, 13 Feb 2012 22:27:25 +0100, Zygmunt Krynicki <zygmunt.krynicki(a)linaro.org> wrote:
>> >> Hi.
>> >>
>> >> Fast model support is getting better. It seems that with the excellent
>> >> patches by Peter Maydell we can now boot some kernels (I've only tried
>> >> one tree, additional trees welcome :-). I'm currently building a
>> >> from-scratch environment to ensure everything is accounted for and I
>> >> understand how pieces interact.
>> >>
>> >> Having said that I'd like to summarize how LAVA handles fast models:
>> >>
>> >> Technically the solution is not unlike QEMU which you are all familiar
>> >> with. The key differences are:
>> >> 1) Only NFS boot makes sense. There is no other sensible method that
>> >> I know of. We could also use an SD card (virtual, obviously) but it is
>> >> constrained to two gigabytes of data.
>> >
>> > As mentioned in the other thread, it would be good to at least let ARM
>> > know that removing this limit would help us (if we can figure out how to
>> > do this).
>>
>> We may figure out how to do this by reading the LISA source code that
>> came with the model. That's a big task though (maybe grepping for mmc0
>> is low-hanging fruit; I did not check).
>
> That's not what I was suggesting! We should try to persuade ARM to do
> that. It may be that they can't do it in a reasonable timeframe, or
> maybe it's simply not a need that has been explained to them yet and is
> something they can do in a week.
>
>> >> 2) The way we actually boot is complicated. There is no u-boot; the fast
>> >> model interpreter actually starts an .axf file that can do anything
>> >> (some examples include running tests and benchmarks without actually
>> >> booting a kernel or anything like that). There is no way to easily
>> >> load the kernel and pass a command line. To work around that we're
>> >> using a special axf file that uses fast model semihosting features to
>> >> load the kernel/initrd from a host filesystem as well as to set up the
>> >> command line that will be passed to the booting kernel. This allows us
>> >> to freely configure NFS services and point our virtual kernel at
>> >> appropriate IP addresses and pathnames.
>> >
>> > So I guess I'd like to understand how this works in a bit more detail.
>> > Can you brain dump on the topic for a few minutes? :) What is "fast
>> > model semihosting"?
>>
>> It's a way to have "syscalls" that connect the "bare hardware" (be it
>> physical or emulated) to an external debugger or other monitor. You
>> can find a short introduction in this blog [1]. For us it means we get
>> to write bare-metal assembly that does the equivalent of open(),
>> read(), write() and close(). The files being opened are on the
>> machine that runs the fast model. You can also print debugging
>> statements straight to the console this way (we could probably write a
>> semihosting console driver if there is no such code yet) to get all of
>> the output to the same tty that runs the model (model_shell). A more
>> detailed explanation of this topic can be found in [2].
>>
>> Fast model semihosting simply refers to using semihosting facilities
>> in a fast model interpreter.
>>
>> [1]: http://blogs.arm.com/software-enablement/418-semihosting-a-life-saver-durin…
>> [2]: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0471c/CHDJHH…
>
> Thanks for that. Sounds a tiny little bit like it's using a
> JTAG-for-fast-models type facility?
>
>> >
>> >> 3) After the machine starts up we immediately open a TCP/IP connection
>> >> to a local TCP socket. We know which port is being used so we can
>> >> easily allocate them up front. This port is now the traditional LAVA
>> >> serial console.
>> >
>> > I guess there is a risk here that we will miss early boot messages?
>> > This might not matter much.
>>
>> There are other options but currently this seems to work quite okay.
>
> Fair enough.
>
>> > Once we've found someone at ARM we can officially complain to about fast
>> > models, an option to have serial comms happen on the process
>> > stdin/stdout would be nice.
>>
>> I think the reason they don't happen in the console is that by default
>> we get four telnet ports to connect to (definitely more than one) so
>> the logical question they'll ask is "which port should we redirect".
>> Maybe there is an option buried somewhere to make that happen but so
>> far I have not found it.
>
> Again, I'm not saying that this is something we should do...
>
>> >> The rest of this looks like QEMU:
>> >> - you can access the filesystem easily (to gather results)
>> >> - we can use QEMU to chroot into the NFS root to install additional
>> >> software (emulation via a fast model is extremely slow)
>> >
>> > In my testing, the pip install bzr+lp:lava-test step did not really work
>> > under QEMU. Maybe it does now, or maybe we can install a tarball or
>> > something.
>>
>> I installed lava-test using a release tarball. That has worked pretty well.
>
> OK. That makes sense.
>
>> In general I think that:
>>
>> 1) We need to reconsider how to do testing on very slow machines
>
> You mean a "drive from the outside" approach like lava-android-test uses
> may make sense?
>
>> 2) What can be invoked on the host (parts of the installation, unless
>> those need to build things, plus result parsing and tracking)
>
> Yeah, this sort of thing is a grey area currently. More below.
>
>> 3) What has to be invoked on the target (test code, system probes)
>>
>> It's important to make the intent very clear. If we define that
>> cmd_install installs something while in the "master image" on the "target",
>> then we should not break that.
>
> Well. We can't avoid breaking that if there *is no master image*.
>
> Currently the dispatcher has the concept of a "reliable session", which
> is meant to be a target-like environment where things like compilation
> are possible. For master image based deployments, this is "booted into
> the master image, chrooted into a mounted testrootfs". For qemu, it is
> currently "boot the test image and hope that works", but it could be
> "chrooted into the testrootfs mounted on the host with qemu-arm-static
> in the right place", but that was less reliable than the other approach
> when I was testing this.
>
>> I think that it would be sensible to add a "host_chroot" mode that
>> applies nicely to qemu and fast models. Very slow things that don't
>> care about the architecture could be invoked in that mode without
>> sacrificing performance.
>
> This code exists already. See _chroot_into_rootfs_session in
> lava_dispatcher.client.qemu.LavaQEMUClient and surrounds. The problem
> is that qemu is some distance from perfect...
>
> Maybe we can limit the things that lava-test install does to things that
> work under qemu -- I guess installing via dpkg usually works (unless
> it's something like mono) and gcc probably works ok? Maybe we can do
> something like scratchbox where gcc is magically a cross compiler
> running directly on the host?
>
> Cheers,
> mwh
On Tue, Feb 14, 2012 at 3:26 AM, Michael Hudson-Doyle
<michael.hudson(a)canonical.com> wrote:
> On Mon, 13 Feb 2012 22:27:25 +0100, Zygmunt Krynicki <zygmunt.krynicki(a)linaro.org> wrote:
>> Hi.
>>
>> Fast model support is getting better. It seems that with the excellent
>> patches by Peter Maydell we can now boot some kernels (I've only tried
>> one tree, additional trees welcome :-). I'm currently building a
>> from-scratch environment to ensure everything is accounted for and I
>> understand how pieces interact.
>>
>> Having said that I'd like to summarize how LAVA handles fast models:
>>
>> Technically the solution is not unlike QEMU which you are all familiar
>> with. The key differences are:
>> 1) Only NFS boot makes sense. There is no other sensible method that
>> I know of. We could also use an SD card (virtual, obviously) but it is
>> constrained to two gigabytes of data.
>
> As mentioned in the other thread, it would be good to at least let ARM
> know that removing this limit would help us (if we can figure out how to
> do this).
We may figure out how to do this by reading the LISA source code that
came with the model. That's a big task though (maybe grepping for mmc0
is low-hanging fruit; I did not check).
>> 2) The way we actually boot is complicated. There is no u-boot; the fast
>> model interpreter actually starts an .axf file that can do anything
>> (some examples include running tests and benchmarks without actually
>> booting a kernel or anything like that). There is no way to easily
>> load the kernel and pass a command line. To work around that we're
>> using a special axf file that uses fast model semihosting features to
>> load the kernel/initrd from a host filesystem as well as to set up the
>> command line that will be passed to the booting kernel. This allows us
>> to freely configure NFS services and point our virtual kernel at
>> appropriate IP addresses and pathnames.
>
> So I guess I'd like to understand how this works in a bit more detail.
> Can you brain dump on the topic for a few minutes? :) What is "fast
> model semihosting"?
It's a way to have "syscalls" that connect the "bare hardware" (be it
physical or emulated) to an external debugger or other monitor. You
can find a short introduction in this blog [1]. For us it means we get
to write bare-metal assembly that does the equivalent of open(),
read(), write() and close(). The files being opened are on the
machine that runs the fast model. You can also print debugging
statements straight to the console this way (we could probably write a
semihosting console driver if there is no such code yet) to get all of
the output to the same tty that runs the model (model_shell). A more
detailed explanation of this topic can be found in [2].
Fast model semihosting simply refers to using semihosting facilities
in a fast model interpreter.
[1]: http://blogs.arm.com/software-enablement/418-semihosting-a-life-saver-durin…
[2]: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0471c/CHDJHH…
>
>> 3) After the machine starts up we immediately open a TCP/IP connection
>> to a local TCP socket. We know which port is being used so we can
>> easily allocate them up front. This port is now the traditional LAVA
>> serial console.
>
> I guess there is a risk here that we will miss early boot messages?
> This might not matter much.
There are other options but currently this seems to work quite okay.
> Once we've found someone at ARM we can officially complain to about fast
> models, an option to have serial comms happen on the process
> stdin/stdout would be nice.
I think the reason they don't happen in the console is that by default
we get four telnet ports to connect to (definitely more than one) so
the logical question they'll ask is "which port should we redirect".
Maybe there is an option buried somewhere to make that happen but so
far I have not found it.
>
>> The rest of this looks like QEMU:
>> - you can access the filesystem easily (to gather results)
>> - we can use QEMU to chroot into the NFS root to install additional
>> software (emulation via a fast model is extremely slow)
>
> In my testing, the pip install bzr+lp:lava-test step did not really work
> under QEMU. Maybe it does now, or maybe we can install a tarball or
> something.
I installed lava-test using a release tarball. That has worked pretty well.
In general I think that:
1) We need to reconsider how to do testing on very slow machines
2) What can be invoked on the host (parts of the installation, unless
those need to build things, plus result parsing and tracking)
3) What has to be invoked on the target (test code, system probes)
It's important to make the intent very clear. If we define that
cmd_install installs something while in the "master image" on the "target",
then we should not break that. I think that it would be sensible to
add a "host_chroot" mode that applies nicely to qemu and fast models.
Very slow things that don't care about the architecture could be
invoked in that mode without sacrificing performance.
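To make that concrete, the host_chroot idea boils down to something like
this (a sketch only; $NFSROOT is the exported NFS root, and it assumes
binfmt_misc is already set up for qemu-arm-static on the host):
# copy the static ARM emulator into the target tree, then chroot into it;
# slow, architecture-neutral work (package installs, result parsing) then
# runs at host speed instead of under the model
cp /usr/bin/qemu-arm-static $NFSROOT/usr/bin/
chroot $NFSROOT apt-get install -y <whatever-the-test-needs>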
Thanks
ZK
Hi
I took the liberty of updating the launchpad project page for
lava-project [1]. I plan to use that whenever people come asking about
bugs/features/documentation. You may want to review the description.
/me really wishes for rich text editing on launchpad projects,
especially if it could just display the README file, eh
[1]: https://launchpad.net/lava-project/
Hi, I wanted to sync up on where things are with this, and as I understand
it, there's still some confusion about how we should get the kernel and/or
images needed for testing fast models.
First off, it seems there is no way to just take a kernel .axf as we
thought, because the boot args are wrapped into it as well. Is there really
no way to inject that stuff after build time?
Is it worth revisiting whether we should have proper hwpacks for fast
models? I know there's the 2 GB max SD size issue with that, but if that's
something ARM can fix, or if we have another way around it, would that help?
Finally, I feel like we've chased a pretty messy route to get fast models
supported here, one that will ultimately break completely when we try to
get Android running as well. Please correct me if I'm wrong here, but it
seems the "recommended" approach is currently:
1) LAVA takes as input: the git tree for the kernel we want to build, the
defconfig to use, and a rootfs tarball.
2) LAVA builds the kernel (LAVA doesn't do this currently, which complicates
things for us quite a bit -- it would be better if this step could be done
externally, e.g. in jenkins where we do the kernel CI builds; see the sketch
below).
3) LAVA pushes the built axf to another system where we have the fast models
software installed, and provisions the rootfs for booting over NFS.
4) LAVA boots the system under fast models on this other machine, runs
tests, gathers results, etc.
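For step 2, the externalized build would be nothing fancy (a sketch only;
$KERNEL_GIT and $DEFCONFIG stand for the job inputs above, it assumes a
standard cross toolchain, and it ignores the boot-wrapper/axf packaging
that fast models additionally need):
# this is the part I'd rather see happen in jenkins than inside LAVA
git clone $KERNEL_GIT linux && cd linux
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- $DEFCONFIG
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -j8 zImage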
Is that pretty close, Zygmunt? Is there something more straightforward and
less fragile that we can do here?
Thanks,
Paul Larson