Hi all!
I am working on writing LAVA test definitions and using the callback method
to send test results to KernelCI.
I noticed a sort of bug (?) when the test definitions are reporting a lot
of elements. The job finishes and the callback gets triggered before the
end of the log parsing. I think the callback method is not waiting for the
parser to finish before sending the event. The callback output is then
missing some test results.
I made a simple test script that reports 300 test elements and returns. I
can see in the LAVA log that they are all detected, but the callback
object only contains around 80 test results.
If I add a sleep (15 seconds) to hold the script before returning, the
callback has the 300 test results bundled in the json.
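A minimal reproducer along these lines (a sketch only: it assumes a POSIX test shell where LAVA's `lava-test-case` overlay helper is on PATH, and substitutes the no-op builtin `:` when it is not, so the script also runs outside a DUT):

```shell
#!/bin/sh
# Report 300 test cases, then hold before exiting. "lava-test-case" is
# LAVA's overlay helper; fall back to the no-op builtin ":" when it is
# absent so the sketch runs outside a LAVA job too.
TC=lava-test-case
command -v "$TC" >/dev/null 2>&1 || TC=:

i=1
while [ "$i" -le 300 ]; do
    "$TC" "stress-case-$i" --result pass
    i=$((i + 1))
done

# Workaround: give the dispatcher's log parser time to drain before
# the job ends and the callback fires.
sleep 15
```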
Has anyone experienced this before?
Thanks!
--
Loys OLLIVIER
Baylibre
Hello Lava Team,
I have created some LAVA jobs that use our proprietary flasher, based on a DFU connection.
As our flasher is not a "standard" flasher, I have adapted the boot process to be able to use it.
I use the "minimal" boot method to achieve this.
To call our flasher script, I use the command invoked by the "power_on" method, which is defined in the device configuration.
Find below an extract of the device configuration:
.......................................................................................
..
..
{% set hard_reset_command = '/usr/bin/pduclient --daemon localhost --hostname lava_pdu_01.lme.st.com --command reboot --port 1' %}
{% set power_off_command = '/usr/bin/pduclient --daemon localhost --hostname lava_pdu_01.lme.st.com --command off --port 1' %}
{% set power_on_command = '/root/git/lava-config/scripts/flash_stm32_programmer.sh -u lava_pdu_01.lme.st.com -p 1 -d usb1 -b ds378_2.lme.st.com -s 4_5_6 -f /tmp/test' %}
{% set connection_command = 'telnet localhost 2001' %}
..
..
.......................................................................................
This works correctly for a "static" configuration: the flasher settings are defined outside LAVA by a script that configures the flashing parameters.
The "power_on" script reads these parameters and launches the flashing on the board.
My problem now is that when I launch jobs simultaneously on several boards that require different versions of the flashing binaries, I am unable to tell each board which binary version our flasher should use.
The best way would be to pass parameters in the job to indicate which binary version the flasher has to use.
This could be done in the "deploy" action and passed on to the "power_on" command, but I don't know how to implement it, or whether it can be done easily.
Find below my job definition.
###### Job definition ##############
actions:
- deploy:
    timeout:
      minutes: 5
    to: ssh
    os: oe
    device:
- boot:
    method: minimal
    failure_retry: 2
    auto_login:
      login_prompt: 'login:'
      username: root
    prompts:
    - 'root@stm32mp1'
    timeout:
      minutes: 10
    transfer_overlay:
      download_command: sync && sleep 15 && wget
      unpack_command: tar -C / -xzf
- test: ... #############################
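One approach that might help, assuming the device template declares the variable with a Jinja2 `default` so a job can override it, is to make the binary path a template variable and set it per-job via the job context. A hypothetical sketch (the `flash_binary` variable name is my own invention, not an existing LAVA setting):

In the device dictionary:

```
{% set flash_binary = flash_binary | default('/tmp/test') %}
{% set power_on_command = '/root/git/lava-config/scripts/flash_stm32_programmer.sh -u lava_pdu_01.lme.st.com -p 1 -d usb1 -b ds378_2.lme.st.com -s 4_5_6 -f ' + flash_binary %}
```

In each job definition:

```
context:
  flash_binary: /path/to/binary-v2
```

This mirrors how jobs already override qemu template variables (e.g. `arch`) through `context`, but whether `power_on_command` is re-rendered this way for your boot method is something to verify on your instance.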
Thanks for your support.
BR
Philippe
When using LAVA inside a docker container, the LXC support adds a lot
of unnecessary overhead, since the docker images are already built to
include the necessary tools; having another container is pointless.
Even worse, LXC doesn't work inside docker anyway.
The LXC support should be made optional for a given LAVA install.
Until LXC can be disabled, projects like lava-docker[1] simply cannot
support fastboot devices, which is a major problem.
Kevin
[1] https://github.com/kernelci/lava-docker/
Hi,
I have lava-master and lava-slave v2018.1 installed, and a qemu device
added. Test jobs can be scheduled. Then I followed
https://validation.linaro.org/static/docs/v2/pipeline-server.html#using-zmq…
to enable ZMQ authentication.
Certificates were generated correctly, and the public certificates were
copied to the master and the slave respectively, with the following configs:
lava-master
```
MASTER_SOCKET="--master-socket tcp://*:5556"
LOGLEVEL="DEBUG"
ENCRYPT="--encrypt"
MASTER_CERT="--master-cert
/etc/lava-dispatcher/certificates.d/master.key_secret"
SLAVES_CERTS="--slaves-certs /etc/lava-dispatcher/certificates.d/"
```
lava-slave
```
MASTER_URL="tcp://192.168.11.214:5556"
LOGGER_URL="tcp://192.168.11.214:5555"
HOSTNAME="--hostname lava-slave1"
LOGLEVEL="DEBUG"
ENCRYPT="--encrypt"
MASTER_CERT="--master-cert /etc/lava-dispatcher/certificates.d/master.key"
SLAVE_CERT="--slave-cert /etc/lava-dispatcher/certificates.d/slave1.key_secret"
```
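For reference, the certificate pairs above (`name.key` / `name.key_secret`) are standard ZeroMQ CURVE certificates; one way to generate such a pair is pyzmq's `zmq.auth` helper (the directory and the `slave1` name below are placeholders, and this assumes python3-zmq is installed):

```shell
# Sketch: create a CURVE certificate pair with pyzmq.
mkdir -p /tmp/certs
python3 -c '
import sys
import zmq.auth
public, secret = zmq.auth.create_certificates(sys.argv[1], sys.argv[2])
print(public)
print(secret)
' /tmp/certs slave1
```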
After lava-master and lava-slave restarted, I see the following logs.
It seems the connection was established, but lava-logs went offline.
lava-master
```
2018-01-30 11:05:50,260 DEBUG lava-slave1 => PING(20)
2018-01-30 11:05:52,086 DEBUG lava-master => PING(20)
2018-01-30 11:06:08,728 DEBUG lava-logs => PING(20)
2018-01-30 11:06:10,261 INFO scheduling health checks:
2018-01-30 11:06:10,270 DEBUG -> disabled on: lxc, qemu
2018-01-30 11:06:10,271 INFO scheduling jobs:
2018-01-30 11:06:10,272 DEBUG - lxc
2018-01-30 11:06:10,292 DEBUG - qemu
2018-01-30 11:06:10,332 DEBUG lava-slave1 => PING(20)
2018-01-30 11:06:12,115 DEBUG lava-master => PING(20)
2018-01-30 11:06:20,252 INFO [POLL] Received a signal, leaving
2018-01-30 11:06:20,254 INFO [CLOSE] Closing the controler socket
and dropping messages
2018-01-30 11:06:21,203 INFO [INIT] Dropping privileges
2018-01-30 11:06:21,204 DEBUG Switching to (lavaserver(114), lavaserver(119))
2018-01-30 11:06:21,204 INFO [INIT] Marking all workers as offline
2018-01-30 11:06:21,209 INFO [INIT] Starting encryption
2018-01-30 11:06:21,211 DEBUG [INIT] Opening master certificate:
/etc/lava-dispatcher/certificates.d/master.key_secret
2018-01-30 11:06:21,238 DEBUG [INIT] Using slaves certificates from:
/etc/lava-dispatcher/certificates.d/
2018-01-30 11:06:21,245 INFO [INIT] LAVA master has started.
2018-01-30 11:06:21,246 INFO [INIT] Using protocol version 2
2018-01-30 11:06:41,247 WARNING lava-logs is offline: can't schedule jobs
2018-01-30 11:07:01,255 WARNING lava-logs is offline: can't schedule jobs
2018-01-30 11:07:04,433 INFO lava-slave1 => HELLO
2018-01-30 11:07:04,433 WARNING New dispatcher <lava-slave1>
2018-01-30 11:07:09,450 DEBUG lava-slave1 => PING(20)
2018-01-30 11:07:21,260 WARNING lava-logs is offline: can't schedule jobs
2018-01-30 11:07:29,477 DEBUG lava-slave1 => PING(20)
2018-01-30 11:07:41,265 WARNING lava-logs is offline: can't schedule jobs
```
lava-slave
```
2018-01-30 11:06:10,283 DEBUG PING => master (last message 20s ago)
2018-01-30 11:06:10,335 DEBUG master => PONG(20)
2018-01-30 11:06:30,356 DEBUG PING => master (last message 20s ago)
2018-01-30 11:07:04,379 INFO [INIT] LAVA slave has started.
2018-01-30 11:07:04,380 INFO [INIT] Using protocol version 2
2018-01-30 11:07:04,390 INFO [INIT] Starting encryption
2018-01-30 11:07:04,390 DEBUG Opening slave certificate:
/etc/lava-dispatcher/certificates.d/slave1.key_secret
2018-01-30 11:07:04,413 DEBUG Opening master certificate:
/etc/lava-dispatcher/certificates.d/master.key
2018-01-30 11:07:04,414 INFO [INIT] Connecting to master as <lava-slave1>
2018-01-30 11:07:04,415 INFO [INIT] Greeting the master => 'HELLO'
2018-01-30 11:07:04,440 INFO [INIT] Connection with master established
2018-01-30 11:07:04,442 INFO Master is ONLINE
2018-01-30 11:07:04,443 INFO Waiting for instructions
2018-01-30 11:07:09,450 DEBUG PING => master (last message 5s ago)
2018-01-30 11:07:09,455 DEBUG master => PONG(20)
```
From the Django admin console, I see that lava-slave1 is still online,
but both the lava-master and lava-logs workers went offline, and test
jobs are no longer being scheduled. Has anyone seen/hit this issue?
Any advice and suggestions would be appreciated.
Thanks,
Chase
Hi All,
We use the IOZone test to measure the performance of our DUT.
The same command, executed several times in a LAVA session, gives a wide spread of results:
iozone -az -i0 -i1 -I -e -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -f /mnt_emmc//tmp/iozone.tmp
Example 1:
      kB  reclen   write  rewrite    read  reread
  102400       4    5053     4420   12471   12137
  102400      16    5070     5218   22101   22207
  102400     512   11733    12630   40799   40862
  102400    1024   11446    11494   39982   39976
  102400   16384   13839    14235   42093   42094
Example 2:
      kB  reclen   write  rewrite    read  reread
  102400       4    5088     5006   13496   13095
  102400      16    5395     5549   17199   17220
  102400     512    9203    10038   27819   27586
  102400    1024    9382     8482   32514   32430
  102400   16384   13569    13986   41992   42081
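One way to quantify the run-to-run spread is the coefficient of variation (stdev/mean) of the write column per record size; a quick awk sketch over the sample values from the two runs above (population stdev over the two samples):

```shell
# Coefficient of variation (in %) of the write column, per record size,
# using the write values from Example 1 and Example 2.
cv_report=$(awk '
{
    n[$1]++; sum[$1] += $2; sumsq[$1] += $2 * $2
}
END {
    for (r in n) {
        mean = sum[r] / n[r]
        var = sumsq[r] / n[r] - mean * mean   # population variance
        printf "reclen %6d kB: CV = %.1f%%\n", r, 100 * sqrt(var) / mean
    }
}' <<EOF
4 5053
4 5088
16 5070
16 5395
512 11733
512 9203
1024 11446
1024 9382
16384 13839
16384 13569
EOF
)
echo "$cv_report"
```

The 512k and 1024k record sizes show the largest spread between the two runs, which matches the tables above.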
When the same command is executed on the same DUT without LAVA, the results are more homogeneous (similar to example 1), which lets us define clear targets.
How can we reduce LAVA's interaction with the DUT while measuring its performance?
Is it possible to disable some checks/interactions with the DUT during the execution of each test (around 3-4 minutes)?
Thanks in advance for your answer
Florence Rouger-Jung
Hi, our boards are powered via a PDU, and they need to be powered up before flashing with pyocd. I defined power_on_command in the device dictionary, but it is ignored.
It looks like only a limited set of boot methods adds a ResetDevice action to the pipeline, and pyocd is not among them. Conversely, if defined, the power-off command always gets invoked, because FinalizeAction adds it to every job.
I'm wondering if there is any way to issue the power-on command without modifying the dispatcher pipeline source code.
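One hedged workaround sketch (not a LAVA feature, just an idea): point the device's pyocd invocation at a wrapper script that raises power first. The PDU hostname/port are placeholders, and the sketch falls back to harmless stand-ins when `pduclient`/`pyocd` are absent so it stays runnable outside the lab:

```shell
#!/bin/sh
# Wrapper: power the board on via the PDU, wait for USB enumeration,
# then run the real flasher. Tool paths and PDU details are placeholders;
# substitute no-ops when the real tools are not installed.
PDUCLIENT=/usr/bin/pduclient
command -v "$PDUCLIENT" >/dev/null 2>&1 || PDUCLIENT=:
FLASHER=pyocd
command -v "$FLASHER" >/dev/null 2>&1 || FLASHER=echo

"$PDUCLIENT" --daemon localhost --hostname pdu01 --command on --port 1
sleep 1   # give the target time to enumerate over USB
"$FLASHER" "$@"
```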
Thanks,
Andrei Narkevitch, Cypress Semiconductor
Hi Everyone
Can anyone share a sample YAML file and use case for
"prepare-scp-overlay" with MultiNode?
Some calls can only be made against specific actions. Specifically, the
prepare-scp-overlay action needs the IP address of the host device to be
able to copy the LAVA overlay (containing the test definitions) onto the
device before connecting using ssh to start the test. This is a complex
configuration to write.
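For what it's worth, a fragment in the shape of the ssh-guest example from the LAVA documentation (the device type, role names, counts and timeouts below are illustrative assumptions from memory, so treat this as a sketch to adapt rather than a known-good job):

```
protocols:
  lava-multinode:
    roles:
      host:
        device_type: beaglebone-black
        count: 1
      guest:
        connection: ssh
        count: 1
        expect_role: host
        host_role: host

actions:
- deploy:
    role:
    - guest
    to: ssh
    os: debian
    protocols:
      lava-multinode:
      - action: prepare-scp-overlay
        request: lava-wait
        messageID: ipv4
        message:
          ipaddr: $ipaddr
        timeout:
          minutes: 5
```

The host side would then run something like `lava-send ipv4 ipaddr=...` from its test shell so that prepare-scp-overlay learns where to copy the overlay.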
--
Thanks & Regards
Chetan Sharma
Hello everyone,
I am using lava-tool to monitor my jobs. Previously I used:
$ lava-tool submit-job --block
With lava-tool version 0.23 I now get this message:
--> This kind of polling is deprecated and will be removed in the next
release. Please use "wait-for-job" command.
But "wait-for-job" doesn't exist.
There is a "wait-job-events" option though. I tried it, and it doesn't
return even once the job has finished. If I manually stop it and restart it
with the same job number, I get this output:
--> Job already finished with status Complete.
Command I'm using:
$ lava-tool wait-job-events --job-id 20 http://user@lava-server
Is there anything I'm doing incorrectly, or are you aware of this bug?
Thanks!
--
Loys OLLIVIER