Antonio Terceiro <antonio.terceiro@linaro.org> writes:
> On Wed, Nov 27, 2013 at 01:56:42PM +1300, Michael Hudson-Doyle wrote:
> > Hi,
> >
> > I've been looking at moving my ghetto multinode stuff over to proper
> > LAVA multinode on and off for a while now, and have something that
> > I'm still not sure how best to handle: result aggregation.
> >
> > The motivating case here is having load generation distributed
> > across various machines: to compute the req/s the server is actually
> > able to manage, I want to add up the number of requests each load
> > generator made.
> >
> > I can sort of see how to do this myself, basically something like
> > this:
> >
> >  1. store the data on each node
> >  2. arbitrarily pick one node to be the one that does the aggregation
> >  3. do tar | nc style things to get the data onto that node
> >  4. analyze it there and store the results using lava-test-case
> >
> > but I was wondering if the LAVA team have any advice here. In
> > particular, steps 2. and 3. seem like something it would be
> > reasonable for LAVA to provide helpers to do.
> For 2. I would use a specific device (such as a kvm) with a specific
> role of "data analysis node" and run my analysis code there. I can't
> see how LAVA could provide something useful for that (besides
> documenting this "Use a Separate Data Analysis Node" pattern).

Yeah, this had occurred to me, and it makes sense. Especially as an
extrapolated version of my request might be to generate graphs from the
data, which would require installing packages such as matplotlib...
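
For the record, the shape I have in mind is a single test script that
branches on the role each device is given in the multinode job
definition. A minimal sketch, assuming the MultiNode lava-role helper;
the role names and the generate-load / analyse-results commands are
placeholders of mine:

    #!/bin/sh
    # Dispatch on the role this device was assigned in the multinode
    # job definition. "load" and "analysis" are placeholder role names;
    # generate-load and analyse-results stand in for the real commands.
    case "$(lava-role)" in
        load)
            generate-load --output /var/lib/foobar
            ;;
        analysis)
            # The kvm node: gather the other nodes' data and crunch it.
            analyse-results /var/tmp
            ;;
    esac
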
> For 3. I think it would make sense to have an API call that you could
> use from your data analysis node to retrieve a given directory from
> the other nodes. Something like
>
>     lava-collect PATH DEST
>
> Collects the contents of PATH from all other devices that are part of
> the multinode job and stores them at DEST locally. For example, the
> call `lava-collect /var/lib/foobar /var/tmp` would result in
>
>     /var/tmp
>       node01/
>         var/lib/foobar
>           (stuff)
>       node02/
>         var/lib/foobar
>           (stuff)
>       (...)

Yeah, that's the sort of thing I was thinking of. I'll have a play at
implementing it soon, I think; I'll let you know how it goes.
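
Roughly what I mean to try is sketched below; the port, the node list,
the requests.count file, and DURATION are all placeholder assumptions
of mine, and I'm hand-waving the synchronisation (something like
lava-sync would be needed so that the listener is up before the
generators send):

    #!/bin/sh
    # Runs on the analysis node. Each load generator is expected to run
    #   tar -C /var/lib/foobar -cf - . | nc $ANALYSIS_IP 9000
    # once this listener is up (netcat dialects differ; some want
    # "nc -l 9000" instead of "nc -l -p 9000").
    PORT=9000                 # placeholder port
    NODES="node01 node02"     # placeholder list of load generators

    for node in $NODES; do
        mkdir -p "/var/tmp/$node"
        # Receive one archive and unpack it under a per-node directory.
        nc -l -p "$PORT" | tar -C "/var/tmp/$node" -xf -
    done

    # Sum the per-node request counts; assumes each generator wrote its
    # total to a requests.count file, and that the run lasted DURATION
    # seconds.
    DURATION=60
    total=$(awk '{ s += $1 } END { print s }' /var/tmp/*/requests.count)
    lava-test-case requests-per-second --result pass \
        --measurement "$((total / DURATION))" --units req/s
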
Cheers,
mwh