Hello everyone,
TL;DR: LAVA development and deployment infrastructure will be moving to
git over the course of this month. The expected cut off date is August
18th.
We are starting to migrate all of the LAVA source code from bzr to git
today. We have planned this to happen in the least disruptive way possible, so
it will be done in phases.
The plan is the following:
----------------8<----------------8<----------------8<-----------------
1) mirror current bzr branches into git.
The first step is to make the contents of the existing bzr repositories
available in git. This step is already done: all LAVA components are available
at git.linaro.org (an example clone command is given after the plan). Those
repositories contain the contents of their bzr counterparts as of today, and
will be updated again when we discontinue bzr.
2) update deployment infrastructure to use git
lava-deployment-tool and lava-manifest will be branched (already in their git
repositories) and updated to pull code for the other components from git
instead of bzr. This will be tested on staging, including upgrading from a
bzr-based deployment to a git-based one.
lava-lab (the repository where we maintain the salt configuration for the
Linaro Cambridge lab) will also be switched to git at this point.
Up until this phase is completed, bug fixes and updates to the official
production site will still be done from bzr.
3) final bzr cut off and move to git
When we are satisfied with the deployment from git branches, we will finally
discontinue bzr.
lava-deployment-tool and lava-manifest "deploy-from-git" branches will be
merged into the respective master branches.
All other components will be synced from bzr to git one last time.
At this point the bzr branches on Launchpad will be officially deprecated, but
will probably still receive updates (so that getting the latest version from
bzr continues to be possible).
Development moves to git.linaro.org.
We will continue using Launchpad for bug management.
This is expected to be done by August the 18th.
----------------8<----------------8<----------------8<-----------------
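For those who want to start following the new repositories right away, they
can be cloned in the usual way. The path below is only an example of the kind
of URL involved; please check git.linaro.org for the actual list of
repositories and their exact paths:

    # example only -- browse git.linaro.org for the real repository paths
    git clone git://git.linaro.org/lava/lava-dispatcher.git
    cd lava-dispatcher
    git log --oneline -5   # shows the history imported from the bzr branch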
Please feel free to voice any concerns or questions regarding this
change.
--
Antonio Terceiro
Software Engineer - Linaro
http://www.linaro.org
I would like to use lava-dispatch to test this device.
I have made wand.conf and wand01.conf; you can see them.
To make wand.conf I started from panda.conf, making a small change:
    #client_type = bootloader
    client_type = master        <-- is this the best choice???
I would like to test an OE-built file system; for that I have used wand3.json,
which is configured with dummy_deploy (a rough sketch of the job is shown
after the code below).
class cmd_dummy_deploy(BaseAction):

    # Only target_type is accepted, and it must be one of the listed
    # distribution types.
    parameters_schema = {
        'type': 'object',
        'properties': {
            'target_type': {'type': 'string',
                            'enum': ['ubuntu', 'oe', 'android', 'fedora']},
        },
        'additionalProperties': False,
    }

    def run(self, target_type):
        print("deploy.py::cmd_dummy_deploy::141::run-start")
        device = self.client.target_device
        # boot_master_image() is deliberately left commented out: no master
        # image is booted, the action only records the deployment data for
        # the requested target type.
        #device.boot_master_image()
        device.deployment_data = device.target_map[target_type]
        print("deploy.py::cmd_dummy_deploy::141::run-end")
This is in deploy.py.
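For reference, the relevant part of my wand3.json looks roughly like this (a
sketch only; the job name, target, timeout and the boot action after the
deploy are just placeholders):

    {
        "job_name": "wand-dummy-deploy",
        "target": "wand01",
        "timeout": 900,
        "actions": [
            {
                "command": "dummy_deploy",
                "parameters": {
                    "target_type": "oe"
                }
            },
            {
                "command": "boot_linaro_image"
            }
        ]
    }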
The problem I have is that, in any case, a soft reset of the board is
performed. My question is: is my configuration correct?
Best Regards
Novello G.
Hello All,
In the recent LEG update summary I found a mention of LAVA support for UEFI SCT tests,
and I am quite keen to know the status of this.
Is the current support only for Models, or can I expect it to work on TC2 as well?
I would like to see whether I could make use of this support internally.
Thanks
Basil Eljuse...
I am trying to set up a Xen CI job in LAVA. The components are built
(Xen hypervisor, Xen tools deb package, Xen-enabled kernel, standard
minimal rootfs). Now I'm working out what the LAVA job should look
like.
Xen requires a non-standard boot, with the hypervisor loaded as
well as the kernel before booting (and the rootfs unpacked).
Julien says we need to do (at least some of) the config on this page
before running Xen:
http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions/Vexpress
which involves disabling the A7 (required), and one way of loading the
pieces into the right places (there may be u-boot/UEFI-based alternatives
to that). This config is done in the images.txt and board.txt files.
Those files apparently live on the second, internal, microSD card. I
presume that deploy_linaro_image only writes to the main SD card.
Certainly the hwpack
(http://releases.linaro.org/13.08/ubuntu/vexpress/hwpack_linaro-vexpress_201…)
does not contain an images.txt or board.txt file.
The card can apparently be exposed as a FAT filesystem over USB
(usb-storage). Is that cable plugged in? Is there a LAVA mechanism for
accessing this and updating it?
What firmware version (boot monitor) is in the machine in the lab?
What is currently in the images.txt, board.txt files it is booting
with? Is 'sys_flags' bringup already selected or not?
Who knows about this stuff?
If we can't change those files in LAVA jobs then I believe that the
Xen CI task is currently blocked until one of these becomes true:
* Xen runs on the standard vexpress config
* Xen runs on Arndale properly
* A mechanism for changing these files is developed
The Xen card:
https://cards.linaro.org/browse/VIRT-75
Wookey
--
Principal hats: Linaro, Emdebian, Wookware, Balloonboard, ARM
http://wookware.org/
Hi,
I was looking at the following test results summary:
https://validation.linaro.org/dashboard/image-reports/leg-java-armv8
and, taking build 451, I was wondering why the following tests are
listed:
mysql, phpinfo, phpmysql, pwrmgmt, sdkhelloc, sdkhellocxx
when they are not mentioned in the following job definition:
https://validation.linaro.org/scheduler/job/70985
Looking at what was explicitly listed I see:
$ grep '"testdef":' job_70985.json
"testdef": "openembedded/busybox.yaml"
"testdef": "openembedded/device-tree.yaml"
"testdef": "openembedded/ethernet.yaml"
"testdef": "openembedded/kernel-version.yaml"
"testdef": "openembedded/perf.yaml"
"testdef": "openembedded/toolchain.yaml"
"testdef": "openembedded/openjdk7-sanity.yaml"
"testdef": "openembedded/openjdk8-sanity.yaml"
"testdef": "openembedded/mauve.yaml"
"testdef": "openembedded/jtreg.yaml"
--
andy
Hullo
Please excuse any naive questions; I'm not super familiar with all of the Linaro work.
I am interested in using the Linaro board specifications as part of a CI/CD pipeline. Ideally, I'd run the simulations in QEMU inside a headless VM, so that I can manage the configuration of the instrumentation with a tool such as Vagrant. My intention is to cover mostly regression testing and the injection of accelerated time and hardware/communications errors in an automated way.
Is this approach used much? I've tried to run the stock QEMU + BeagleBoard images, simply to confirm that the approach could work, but these seemed to assume capabilities in the environment hosting QEMU that weren't present in my first VM (e.g. a video interface, or a network interface to hang a VNC session off).
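For reference, the kind of fully headless invocation I have been attempting
looks roughly like this (a sketch: the beaglexm machine model assumes a
qemu-linaro build, and beagle_sd.img is just a placeholder name for the
prebuilt SD card image):

    # Sketch only: -M beaglexm assumes a qemu-linaro build that provides the
    # BeagleBoard-xM machine model; beagle_sd.img is a placeholder image name.
    # -nographic drops the video output and puts the serial console on stdio,
    # which is what a headless VM needs.
    qemu-system-arm -M beaglexm -sd beagle_sd.img -nographic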
Am I pursuing a pointless approach, and is there any other ongoing work along these lines?
regards
Tim