On Mon, 4 Jan 2016 20:17:55 +0000 Neil Williams codehelp@debian.org wrote:
On Mon, 04 Jan 2016 15:37:38 +0100 Sjoerd Simons sjoerd.simons@collabora.co.uk wrote:
Hey all,
Our LAVA instance has now run well over 120,000 jobs, which is great.
Unfortunately this also means we've accumulated quite a lot of historical data: the PostgreSQL database is around 45 GB, and there is well over 600 GB of job output data on the filesystem. We'd love to trim that down to more sensible sizes by pruning the older job information. Are there any guidelines on how to do that (and/or tools available for it)?
Production is larger still; staging is about the same size as your instance. We've considered this problem a few times but we don't have clear answers right now. It's pending: we need input from a DBA on how to optimise the current database models, implement an archive / purge method, and keep on top of the changes due in the refactoring.
Deleting bundles (and attachments) has implications for also removing the corresponding files on the filesystem. TestJobs are the most obvious metric for size, but you may get a more relevant metric from a count of TestResult objects. It is the bundles and the dashboard TestResult table which use up the most space.
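There is no supported purge tool yet, so treat the following only as a very rough sketch of what a manual purge would have to do. It assumes the dashboard Bundle model stores its uploaded file in a content FileField and has an uploaded_on timestamp (check the model definitions and try on a staging instance first):

# Hypothetical purge sketch from "lava-server manage shell".
# Field names (uploaded_on, content) are assumptions about the
# dashboard_app models - verify before running anything like this.
from datetime import timedelta
from django.utils import timezone
from dashboard_app.models import Bundle

cutoff = timezone.now() - timedelta(days=365)
for bundle in Bundle.objects.filter(uploaded_on__lt=cutoff).iterator():
    # remove the stored bundle file from the filesystem first
    # (Django does not delete FileField content automatically),
    # then let the ORM cascade-delete the related database rows.
    bundle.content.delete(save=False)
    bundle.delete()

Attachment files belonging to the deleted test runs would need similar handling, otherwise they stay behind on disk.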
$ sudo lava-server manage shell
Python 2.7.11 (default, Dec  9 2015, 00:29:25)
[GCC 5.3.1 20151205] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from dashboard_app.models import TestResult
>>> TestResult.objects.count()
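For comparison, the number of test jobs can be counted in the same shell (assuming the scheduler model path lava_scheduler_app.models.TestJob):

>>> from lava_scheduler_app.models import TestJob
>>> TestJob.objects.count()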
Or from postgres (where timing is available):
lava-staging=# select count(id) from dashboard_app_testresult;
  count
---------
 2625659
(1 row)

Time: 367.321 ms
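To confirm which tables actually take up the most space, a query along these lines works from the same psql prompt (this is generic PostgreSQL, nothing LAVA-specific):

lava-staging=# SELECT relname,
                      pg_size_pretty(pg_total_relation_size(relid)) AS total_size
               FROM pg_statio_user_tables
               ORDER BY pg_total_relation_size(relid) DESC
               LIMIT 10;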
Sjoerd: do you have any performance issues with a database of that size? A comparison of the commands above would be useful (along with information on the load on that machine, amount of RAM, etc.).
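For the machine details, something simple like this (standard tools, nothing LAVA-specific) would be enough to compare:

$ free -h    # total / used RAM
$ uptime     # load averages
$ nproc      # number of CPU cores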