On Thu, Feb 9, 2012 at 2:54 PM, Zygmunt Krynicki zygmunt.krynicki@linaro.org wrote:
On Thu, Feb 9, 2012 at 8:24 AM, Paul Larson paul.larson@linaro.org wrote:
I noticed that v.l.o was being obnoxiously slow again, and top revealed that uwsgi had gone on another memory-eating binge:
PID  USER     PR NI VIRT  RES SHR S %CPU %MEM TIME+   COMMAND
3799 lava-pro 20  0 33.3g 24g  28 D    1 82.5 8:20.37 uwsgi
I ran touch /srv/lava/instances/production/etc/lava-server/uwsgi.reload and after a few minutes it had cleared up. It's clear that the uwsgi changes made previously aren't helping, though. Any ideas?
I have another idea about what may be causing this. My changes don't actually affect the master (they only affect when slaves are restarted), so it's possible that uwsgi itself could grow to any amount of memory without ever being recycled. Last time it was one of the workers, not the controlling process itself.
Update: it was just as before. uwsgi itself did not leak; I cannot explain why the processes were not recycled, though.
Is there any way I can see collectd memory usage graphs for the past 24 hours?
As for what could be happening here:
- uwsgi can run in one of many ways. One of the factors that matters to us is when the Python runtime is initialized: 1) before forking, or 2) after forking. If we initialize all of Python before forking and then just fork to handle a few thousand requests (in a worker process), then perhaps something in the master is adding up.
I'll check the uwsgi docs and try to reconfigure it for post-fork init. This will have the side effect of not loading Python at all before a worker is spawned. Since new workers have a lower 'performance' factor, they will be handed fewer jobs (requests to serve) for a few moments after starting. I don't think this will have any noticeable performance impact in practice.
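For concreteness, roughly what that reconfiguration could look like in the uwsgi ini file; this is only a sketch, and the option names and values below are my assumptions rather than our actual production config:

; sketch only: assumed options and values, not the real production config
[uwsgi]
master = true
processes = 8
; initialise the Python runtime in each worker after fork,
; rather than once in the master before forking
lazy = true
; recycle a worker once it has served this many requests,
; so a slow leak cannot accumulate forever
max-requests = 1000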
Thanks for spotting this. ZK
PS: We could write a uwsgi plugin that watches memory usage and reports to raven :)
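Something along these lines might work as a first pass, written as Python code run inside uwsgi rather than a C plugin. It is an untested sketch: the DSN, interval, and RSS threshold are invented placeholders, and it assumes raven and uwsgidecorators are importable in the instance:

# Hypothetical sketch: fire a timer every 60s in uwsgi, read VmRSS from
# /proc/self/status (Linux only) and report to Sentry via raven when the
# process exceeds a threshold. DSN and threshold are placeholders.
from uwsgidecorators import timer
from raven import Client

SENTRY_DSN = "http://public:secret@sentry.example.org/1"  # placeholder DSN
RSS_LIMIT_MB = 512                                         # placeholder threshold

client = Client(SENTRY_DSN)

def _rss_mb():
    # Parse the VmRSS line, reported in kB, and convert to MB.
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0
    return 0.0

@timer(60)
def watch_memory(signum):
    rss = _rss_mb()
    if rss > RSS_LIMIT_MB:
        client.captureMessage(
            "uwsgi process exceeded %d MB RSS (now at %.1f MB)"
            % (RSS_LIMIT_MB, rss))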