Hi,
I've installed LAVA and created a 'qemu' device type and device:
$ sudo lava-server manage add-device-type '*'
$ sudo lava-server manage add-device --device-type qemu qemu01
Then I downloaded an example YAML file to submit a job for the QEMU image:
$ wget https://staging.validation.linaro.org/static/docs/v2/examples/test-jobs/qem… ./
$ lava-tool submit-job http://<name>@localhost qemu-pipeline-first-job.yaml
The error occurs while running 'image.py'.
(http://woogyom.iptime.org/scheduler/job/15)
Traceback (most recent call last):
  File "/usr/bin/lava", line 9, in <module>
    load_entry_point('lava-tool==0.14', 'console_scripts', 'lava')()
  File "/usr/lib/python2.7/dist-packages/lava/tool/dispatcher.py", line 153, in run
    raise SystemExit(cls().dispatch(args))
  File "/usr/lib/python2.7/dist-packages/lava/tool/dispatcher.py", line 143, in dispatch
    return command.invoke()
  File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py", line 216, in invoke
    job_runner, job_data = self.parse_job_file(self.args.job_file, oob_file)
  File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py", line 265, in parse_job_file
    env_dut=env_dut)
  File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/parser.py", line 165, in parse
    test_action, counts[name])
  File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/parser.py", line 66, in parse_action
    Deployment.select(device, parameters)(pipeline, parameters)
  File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/logical.py", line 203, in select
    willing = [c for c in candidates if c.accepts(device, parameters)]
  File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/actions/deploy/image.py", line 116, in accepts
    if 'image' not in device['actions']['deploy']['methods']:
KeyError: 'actions'
It seems that no 'methods' entry is found under the actions->deploy block
while parsing, but I'm not sure whether this error means my YAML usage is
wrong or something else is misconfigured.
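For what it is worth, my reading of the check in image.py is that accepts()
expects the device configuration handed to the dispatcher to contain roughly
the structure below (this is only my interpretation of the traceback, not the
actual qemu template):

device = {
    'actions': {
        'deploy': {
            'methods': {
                # 'image' needs to be listed as a deploy method
                'image': {},
            },
        },
    },
}

# image.py then checks (and raises KeyError: 'actions' when the block above is missing):
if 'image' not in device['actions']['deploy']['methods']:
    pass

So my guess is that the device configuration for qemu01 is empty or
incomplete rather than the job YAML being wrong, but I would appreciate
confirmation.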
Best regards,
Milo
Hi Williams,
I want to get the submitter of a LAVA job via lava-tool.
When I use the command "lava-tool job-details", the submitter is shown only as a "submitter_id". How can I convert that submitter id into the submitter's username?
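One approach I am considering (assuming submitter_id is simply the primary key of the standard Django auth user table on the server) is to look it up in the Django shell:

$ sudo lava-server manage shell
>>> from django.contrib.auth.models import User
>>> User.objects.get(pk=42).username   # 42 is a placeholder submitter_id

But it would be nicer to get the username directly from lava-tool, if that is possible.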
Thanks.
gitweb (which depends on apache2) and LAVA are installed on the same host,
but port 80 is used by LAVA, so gitweb cannot be reached with a browser.
So I want to change LAVA's port to another one, such as 8088, but after
changing the file /etc/apache2/sites-enabled/lava-server.conf, LAVA no
longer works.
Does anyone know how to make lava-server use another port?
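My understanding is that two changes are needed, roughly as follows (only a sketch, assuming the stock Debian Apache layout; please correct me if this is wrong):

# in /etc/apache2/ports.conf, add:
Listen 8088

# in /etc/apache2/sites-enabled/lava-server.conf, change the opening line to:
<VirtualHost *:8088>

# then restart Apache:
$ sudo service apache2 restart

Is that the right set of changes?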
By the way, I cannot find the "DocumentRoot" of lava-server. The config file
defines "DocumentRoot" as "/usr/share/lava-server/static/lava-server/",
but I cannot see a default index.html there. (I can only see the template
file /usr/lib/python2.7/dist-packages/lava_server/templates/index.html.)
Could someone tell me where lava-server's default index page is?
--
王泽超
TEL: 13718389475
北京威控睿博科技有限公司 <http://www.ucrobotics.com>
Hi Everyone,
I am trying to set up a standalone LAVA V2 server by following the
instructions on the Linaro website, and so far things have gone smoothly.
I have LAVA installed, a superuser created, and I can access the
application through a web browser. But I have the following issues:
ISSUE #1:
- When I tried to submit a simple Lava V2 test job, I got an error
message stating that the "beaglebone black" device type is not
available.
- I found the directory where the .jinja2 files were stored including
the beaglebone-black.jinja2 file, but regardless of what I tried, I
couldn't get the web application to see the device type definitions.
- It seems like the application isn't pointing to the directory where
those device type files are stored.
- What do I need to do to make the LAVA server "see" those device-type
files? (See the sketch just below of what I suspect I am missing.)
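Is something like the following what I am missing (bbb01 is just a
placeholder device name)?

$ sudo lava-server manage add-device-type beaglebone-black
$ sudo lava-server manage add-device --device-type beaglebone-black bbb01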
ISSUE #2:
- When I tried to submit a job, I pasted a small .yaml file and the
validation failed because it didn't recognize the data['run'] in the
job. I tried a few others and then I tried a V1 .json file and it
validated just fine.
- What do I have to do to allow LAVA to accept V2 .yaml files? Am I
missing something simple? (A rough skeleton of what I understand a V2 job
should look like is below.)
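For reference, my understanding from the examples is that a V2 pipeline job
should start with fields along these lines (a rough skeleton, not my exact
file):

device_type: beaglebone-black
job_name: simple v2 example
timeouts:
  job:
    minutes: 15
  action:
    minutes: 5
priority: medium
visibility: public
actions:
  # ... deploy, boot and test blocks here ...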
As always, I greatly appreciate any feedback you may have to help me
out.
Thank you in advance!
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Hi Williams,
The submit time shown is 8 hours behind my local time. How can I change the job submit time displayed on the LAVA web pages?
I have tried to modify the TIME_ZONE setting in "/usr/lib/python2.7/dist-packages/lava_server/settings/common.py" and "/usr/lib/python2.7/dist-packages/django/conf/global_settings.py", and then restarted the LAVA server, but nothing seemed to change.
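My understanding is that the relevant Django settings are the ones below (the time zone value is a placeholder for my local zone, and I am not sure whether editing the packaged files is the supported way to do this, since they may be overwritten on upgrade):

TIME_ZONE = 'Asia/Shanghai'  # placeholder for my local time zone
USE_TZ = True                # may already be set; included for completeness

followed by restarting apache2 and lava-server.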
Thanks.
Hello.
I have configured a LAVA server and I set up a local Django account to start configuring things:
sudo lava-server manage createsuperuser --username <user> --email=<mail>
Then I want to add LDAP support by adding the relevant fields to /etc/lava-server/settings.conf:
"AUTH_LDAP_SERVER_URI": "ldaps://server.domain.se:636",
"AUTH_LDAP_BIND_DN": "CN=company_ldap,OU=Service Accounts,OU=Resources,OU=Data,DC=domain,DC=se",
"AUTH_LDAP_BIND_PASSWORD": "thepwd",
"AUTH_LDAP_USER_ATTR_MAP": {
"first_name": "givenName",
"email": "mail"
},
"DISABLE_OPENID_AUTH": true
I have restarted both apache2 and lava-server.
I was expecting to get a Sign In page like this one:
https://validation.linaro.org/static/docs/v1/_images/ldap-user-login.png
Unfortunately I'm not familiar with either Django (or web development in general) or LDAP, and I don't know how to debug this. I have tried to grep for ldap|LDAP in /var/log/lava-server but nothing shows up.
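Would something like this be a sensible way to test the LDAP backend directly on the server (assuming the Django shell is reachable via lava-server manage, and using a throwaway LDAP account)?

$ sudo lava-server manage shell
>>> from django.contrib.auth import authenticate
>>> authenticate(username='some.ldap.user', password='secret')

If that returns a User object I would assume the LDAP configuration is working, and if it returns None it is not.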
Unfortunately I couldn't find a way to browse the mailing list for previous answers. GMANE search doesn't work today.
How should I proceed?
I have a multi-node test involving 13 roles that is no longer syncing properly after upgrading to 2016.11 this morning. It seems that 2 or 3 nodes end up waiting for a specific message while the other nodes complete the message exchange and move on to the next one. Looking at the dispatcher log, I don't see any errors, but it only logs that it is sending to some of the nodes. For example, I see a message like this for the nodes that work in a run:
2016-11-10 13:10:37,295 Sending wait messageID 'qa-network-info' to /var/lib/lava/dispatcher/slave/tmp/7615/device.yaml in group 2651c0a0-811f-4b77-bc07-22af31744fe5: {"/var/lib/lava/dispatcher/slave/tmp/7619/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7613/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7623/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7620/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7611/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7621/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7622/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7617/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7618/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7614/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7615/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7616/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7612/device.yaml": {}}
2016-11-10 13:10:37,295 Sending wait response to /var/lib/lava/dispatcher/slave/tmp/7615/device.yaml in group 2651c0a0-811f-4b77-bc07-22af31744fe5: {"message": {"/var/lib/lava/dispatcher/slave/tmp/7619/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7613/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7623/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7620/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7611/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7621/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7622/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7617/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7618/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7614/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7615/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7616/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7612/device.yaml": {}}, "response": "ack"}
For the nodes that get stuck, there is no message like above.
All of the nodes are qemu type, all on the same host. The nodes that fail are not consistent, but there always seem to be 2 or 3 that fail in every run I have tried.
Is there anything I can look at here to figure out what is happening?
--
James Oakley
james.oakley(a)multapplied.net
[Moving to lava-users as suggested by Neil]
On 11/07/2016 03:20 PM, Neil Williams (Code Review) wrote:
> Neil Williams has posted comments on this change.
>
> Change subject: Add support for the depthcharge bootloader
> ......................................................................
>
>
>
> Patch Set 3:
>
> (1 comment)
>
> https://review.linaro.org/#/c/15203/3/lava_dispatcher/pipeline/actions/depl…
>
> File lava_dispatcher/pipeline/actions/deploy/tftp.py:
>
> Line 127: def _ensure_device_dir(self, device_dir):
>> Cannot say that I have fully understood it yet. Would it be correct
>> if the
>
> The Strategy classes must not set or modify anything. The accepts
> method does some very fast checks and returns True or False. Anything
> which the pipeline actions need to know must be specified in the job
> submission or the device configuration. So either this is restricted
> to specific device-types (so a setting goes into the template) or it
> has to be set for every job using this method (for situations where
> the support can be used or not used on the same hardware for
> different jobs).
>
> What is this per-device directory anyway and how is it meant to work
> with tftpd-hpa which does not support configuration modification
> without restarting itself? Jobs cannot require that daemons restart -
> other jobs could easily be using that daemon at the same time.
So each firmware image containing Depthcharge will also contain
hardcoded values for the IP of the TFTP server and for the paths of a
cmdline.txt file and a FIT image. The FIT image contains a kernel and
a DTB file, and optionally a ramdisk.
Because the paths are set when the firmware image is flashed, we cannot
use the per-job directory. Thus we add a parameter to the device, to be
set in the device-specific template of Chrome devices. If that
parameter is present, then a directory named after its value will be
created in the root of the TFTP file tree.
The TFTP server doesn't need to be restarted because its configuration
is left unchanged; we just create a directory where Depthcharge will
look for the files.
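To illustrate the intent of _ensure_device_dir (this is only a sketch, not
the code in the patch under review, and the TFTP root path here is an
assumption):

import os

def _ensure_device_dir(self, device_dir):
    # device_dir is the value of the per-device parameter described above,
    # e.g. "peach-pit-01"; the directory is created under the TFTP root so
    # that the paths hardcoded into the Depthcharge firmware resolve.
    path = os.path.join('/srv/tftp', device_dir)
    if not os.path.isdir(path):
        os.makedirs(path)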
Thanks,
Tomeu
> I think this needs to move from IRC and gerrit to a thread on the
> lava-users mailing list where the principles can be checked through
> more easily.
>
>