On 10 November 2016 at 13:34, Neil Williams <neil.williams@linaro.org> wrote:
> That many hosts on one machine could easily be consuming all resources for that machine and causing latency or timeouts. You should split the workload over 2 or more other machines by making new workers for the instance.
It was timing out when the dispatcher was setting up its virtual disks on local disk storage, but I have since moved the dispatcher cache/tmp to a 48G ramdisk. The machine has 24 cores and 96G of RAM, so contention is no longer an issue. Ultimately, I want to serve RBD copy-on-write (COW) images from my local Ceph cluster to speed up image deployment, but I haven't yet looked into what that requires.
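For reference, the ramdisk is just a tmpfs mounted over the dispatcher's tmp directory (the same path that appears in the coordinator logs below), along the lines of:

    mount -t tmpfs -o size=48g tmpfs /var/lib/lava/dispatcher/slave/tmp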
> Check the coordinator logs to see what messages were actually sent and whether all nodes managed to contact the coordinator. Only one node can make a connection to the coordinator at any one time and there is no queuing of connections to the coordinator.
I tried another run and only one node failed (job id 7624). It did appear to contact the coordinator:
2016-11-10 13:42:36,302 lavaSend handler in Coordinator received a messageID 'qa-network-info' for group '963dcade-1f0d-42bb-b537-05a4d6e55fb1' from /var/lib/lava/dispatcher/slave/tmp/7624/device.yaml
The coordinator did not seem to receive the lava_wait_all for that job, however; only the ones from the other jobs show up:
2016-11-10 13:42:53,566 lava_wait_all: {u'blocksize': 4096, u'client_name': u'/var/lib/lava/dispatcher/slave/tmp/7631/device.yaml', u'hostname': u'newlava', u'request': u'lava_wait_all', u'group_name': u'963dcade-1f0d-42bb-b537-05a4d6e55fb1', u'host': u'lava.multapplied.net', u'role': u'remote-router1', u'messageID': u'qa-network-info', u'poll_delay': 3, u'port': 3079}
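For context, the synchronisation in these jobs is the standard MultiNode pattern: each node publishes its details with lava-send and then blocks on lava-wait-all for the same message ID. A stripped-down sketch (the real tests send different key=value pairs):

    # run on each node in the group
    lava-send qa-network-info hostname=$(hostname)
    # block until every node in the group has sent qa-network-info
    lava-wait-all qa-network-info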
These specific tests had been running fine for months under 2016.9 and the release before that; the problem began immediately after I upgraded to 2016.11 this morning.