On Thu, 31 Oct 2013 09:14:47 +0200
Ayman Hendawy <ayman.hendawy(a)gmail.com> wrote:
> Dear Neil,
Do not reply to individuals. Keep replies only to the list.
> Actually I wonder why it's not more open, why I can't get real-time
> access to the kit's serial console, why a debugger is not available.
> Suppose I have an application running on the OS and I need to debug
> my code with a debugger, to find the exact line causing the problem;
> why don't I have access to some of the kit's peripherals, like the
> USB port, somehow?
>
> What I mean is: given the great effort behind LAVA, what limits it
> from giving its users deeper access to their kits? Why is it limited
> to posting jobs?
The simple answer is that this is to protect the use of the boards by
other users. Submitting a job puts the device into a test image where
the bugs in the test image are restricted to that test image. When the
test ends (for better or for worse), the board returns to a known,
working state.
To do otherwise would make the admin burden unsustainable.
These are not general purpose debugging boards. These are test devices.
The hands-on debugging needs to be done on emulators or local boards -
preferably before the commits land. LAVA is checking for side-effects of
developer changes, especially performance changes over time.
Access to the serial console of any LAVA device is restricted to the
lab admins. The devices do not belong to the developers, it isn't about
developers having access to "their" devices. The devices belong to LAVA
and are maintained as a service for all developers. Doing that requires
that LAVA imposes restrictions on what individual developers can do to
avoid individuals leaving the device in an unstable or unbootable state.
Many LAVA test jobs involve interim kernel builds - it is all too easy
to make a commit which gets turned into a LAVA job which leads to a
kernel panic in the test. If that was the main kernel for the device,
*someone* (i.e. the LAVA lab admins) would have to fix it. Restricting
tests to submitted jobs is that fix.
--
Neil Williams
=============
http://www.linux.codehelp.co.uk/
I have now resolved the issues mentioned in the previous email, and in doing so upgraded my LAVA instance to the latest version. This pulled in a commit that uses 'ip route get' to detect which interface is connected to the LAVA dispatcher.
This has resulted in our TC2 jobs failing with the following error when attempting to boot the master image:
root@master [rc=0]# 2013-10-31 02:25:02 PM INFO: Waiting for network to come up
LC_ALL=C ping -W4 -c1 10.1.103.191
PING 10.1.103.191 (10.1.103.191) 56(84) bytes of data.
64 bytes from 10.1.103.191: icmp_req=1 ttl=62 time=0.370 ms
--- 10.1.103.191 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms
root@master [rc=0]#
ifconfig `ip route get 10.1.103.191 | cut -d ' ' -f3` | grep 'inet addr' |awk -F: '{split($2,a," "); print "<" a[1] ">"}'
10.1.99.1: error fetching interface information: Device not found
root@master [rc=0]#
2013-10-31 02:26:12 PM ERROR: Unable to determine target image IP address
2013-10-31 02:26:12 PM INFO: CriticalError
2013-10-31 02:26:12 PM WARNING: [ACTION-E] deploy_linaro_android_image is finished with error (Unable to determine target image IP address).
The reason is that the command assumes the interface name will be in the third field when doing the cut. However, we are seeing that the output of ip route get 10.1.103.191 actually gives this:
$ ip route get 10.1.103.197
10.1.103.197 via 10.1.99.1 dev eth0 src 10.1.99.87
Cache
And the "cut -d ' ' -f3" bit then gives you 10.1.99.1, which ifconfig is unable to cope with. I suspect the issue is because our dispatcher is on a different subnet to our target devices, hence the "via 10.1.99.1" in the output.
I agree that it makes sense to use 'ip route get' to determine the interface, though I was wondering if you could provide more flexible parsing of the output, please? I can raise a Launchpad bug for this if you would like.
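For illustration only (this is not the actual lava-dispatcher change, and the helper name is made up), a parser that keys on the "dev" keyword rather than a fixed field position handles both output forms:

```shell
# Hedged sketch: print the token following "dev" in `ip route get` output,
# instead of assuming the interface is always the 3rd field. This resolves
# both the on-link form and the routed "via <gateway>" form correctly.
parse_dev() {
    awk '{ for (i = 1; i < NF; i++) if ($i == "dev") { print $(i + 1); exit } }'
}

# On-link form (dispatcher and target on the same subnet):
echo "10.1.103.197 dev eth0  src 10.1.103.87" | parse_dev               # eth0
# Routed form that breaks the fixed-field cut:
echo "10.1.103.197 via 10.1.99.1 dev eth0  src 10.1.99.87" | parse_dev  # eth0
```

Keyword search seems safer in general, since the field positions in `ip route` output shift depending on whether a gateway is involved.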
Thanks
Dean
-- IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
ARM Limited, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ, Registered in England & Wales, Company No: 2557590
ARM Holdings plc, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ, Registered in England & Wales, Company No: 2548782
Hi Tyler / Fathi / Ryan,
Hope one of you would be able to help us out with this question.
Dean has set up a local LAVA instance at our end for the 64-bit Base model testing, based on the deployment from your end. On our live setup we had an issue with the database upgrade, which Dean has raised in a separate thread; it would be good if we could get help resolving that issue.
We have now demonstrated LAVA runs on AEMv8, Cortex_Base_Model4x4 and Cortex_Base_Model1x1 following the Linaro instructions.
One of the key use cases we have is the ability to swap out the kernel or the firmware binaries from a previously released hardware pack, and test the release-candidate binaries using the LAVA setup.
Q: Can you advise how best, within the LAVA setup, to swap out the contents of the hardware pack for testing?
When we did the OOB testing we could see that the hardware pack contents eventually get unpacked to a 'fastmodel' folder.
In LAVA I can see some logs referring to:
`/tmp/tmpilF6yY/rootfs/boot/Image-3.12.0-1-linaro-vexpress64' -> `/tmp/tmpilF6yY/boot-disc/Image-3.12.0-1-linaro-vexpress64'
`/tmp/tmpilF6yY/boot-disc/fvp/BL2_AP_Trusted_RAM.bin' -> `/srv/lava/instances/fastmodel-trial/var/www/lava-server/images/tmpb9uqba/BL2_AP_Trusted_RAM.bin'
`/tmp/tmpilF6yY/boot-disc/fvp/BL2_AP_Trusted_RAM.bin' -> `/tmp/tmpilF6yY/boot-disc/BL2_AP_Trusted_RAM.bin'
`/tmp/tmpilF6yY/boot-disc/fvp/BL31_AP_Trusted_RAM.bin' -> `/srv/lava/instances/fastmodel-trial/var/www/lava-server/images/tmpb9uqba/BL31_AP_Trusted_RAM.bin'
`/tmp/tmpilF6yY/boot-disc/fvp/BL31_AP_Trusted_RAM.bin' -> `/tmp/tmpilF6yY/boot-disc/BL31_AP_Trusted_RAM.bin'
Can you help us decipher these?
Is there an option where I can somehow specify, via the test definition, a location for the binaries which have to be replaced in the hw-pack 'unpacked location'?
Please let us know a suggested way forward in this case.
Thanks
Basil Eljuse...
Hi All
I was wondering if you could help me please?
I have managed to get my instance of LAVA working again; however, now I am seeing issues where every time a job is submitted, it is submitted twice and both jobs grab the test resource, meaning we are seeing some bizarre behaviour in the test run. See attached log.
I am also seeing this when I attempt an upgrade:
+ set +x
+ lava-server manage syncdb --noinput
WARNING:root:This instance will not use sentry as SENTRY_DSN is not configured
+ set +x
+ lava-server manage migrate --noinput
WARNING:root:This instance will not use sentry as SENTRY_DSN is not configured
Traceback (most recent call last):
File "/srv/lava/instances/production/bin/lava-server", line 55, in <module>
lava_server.manage.main()
File "/srv/lava/.cache/git-cache/exports/lava-server/2013-10-17-97c7da5/lava_server/manage.py", line 128, in main
run_with_dispatcher_class(LAVAServerDispatcher)
File "/srv/lava/.cache/eggs/lava_tool-0.7-py2.7.egg/lava_tool/dispatcher.py", line 45, in run_with_dispatcher_class
raise cls.run()
File "/srv/lava/.cache/eggs/lava_tool-0.7-py2.7.egg/lava/tool/dispatcher.py", line 147, in run
raise SystemExit(cls().dispatch(args))
File "/srv/lava/.cache/eggs/lava_tool-0.7-py2.7.egg/lava/tool/dispatcher.py", line 137, in dispatch
return command.invoke()
File "/srv/lava/.cache/git-cache/exports/lava-server/2013-10-17-97c7da5/lava_server/manage.py", line 116, in invoke
execute_manager(settings, ['lava-server'] + self.args.command)
File "/srv/lava/.cache/eggs/Django-1.4.2-py2.7.egg/django/core/management/__init__.py", line 459, in execute_manager
utility.execute()
File "/srv/lava/.cache/eggs/Django-1.4.2-py2.7.egg/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/srv/lava/.cache/eggs/Django-1.4.2-py2.7.egg/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/srv/lava/.cache/eggs/Django-1.4.2-py2.7.egg/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/srv/lava/.cache/eggs/South-0.7.5-py2.7.egg/south/management/commands/migrate.py", line 107, in handle
ignore_ghosts = ignore_ghosts,
File "/srv/lava/.cache/eggs/South-0.7.5-py2.7.egg/south/migration/__init__.py", line 199, in migrate_app
applied_all = check_migration_histories(applied_all, delete_ghosts, ignore_ghosts)
File "/srv/lava/.cache/eggs/South-0.7.5-py2.7.egg/south/migration/__init__.py", line 88, in check_migration_histories
raise exceptions.GhostMigrations(ghosts)
south.exceptions.GhostMigrations:
! These migrations are in the database but not on disk:
<lava_scheduler_app: 0033_auto__add_field_testjob_admin_notifications>
! I'm not trusting myself; either fix this yourself by fiddling
! with the south_migrationhistory table, or pass --delete-ghost-migrations
! to South to have it delete ALL of these records (this may not be good).
+ die 'Failed to run database migrations'
+ echo 'Failed to run database migrations'
+ exit 1
I suspect that this is the underlying issue. Could you please recommend the best way to go about fixing a ghost migration issue? It suggests fiddling in the database, but I would rather not hack away blindly :)
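In case it helps, South's warning itself names the two options; a sketch of each follows. The instance path and the psql invocation are assumptions for a standard lava-deployment-tool install - only the migration name comes from the error output above:

```shell
# Option 1 (blunt, as South's own warning notes): let South delete ALL
# ghost records in one go:
#   /srv/lava/instances/production/bin/lava-server manage migrate --delete-ghost-migrations
#
# Option 2 (targeted): remove only the single offending row from the
# south_migrationhistory table:
SQL="DELETE FROM south_migrationhistory
     WHERE app_name = 'lava_scheduler_app'
       AND migration = '0033_auto__add_field_testjob_admin_notifications';"
echo "$SQL"   # apply with e.g.: sudo -u postgres psql <your-lava-db> -c "$SQL"
```

Either way, backing up the database first (pg_dump) before touching migration history would be prudent.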
Thanks
Dean
Hi All,
I have recently carried out an upgrade of LAVA and I am now seeing an issue, where I am unable to trigger any jobs. The error listed in /srv/lava/instances/production/var/log/lava-scheduler.log can be seen below.
I have checked the database column in question (admin_notifications in the lava_scheduler_app_testjob table?) and the contents are, as it says, null. I have tried populating this column with a non-null string in an attempt to make Django happy, but I am still seeing the problem.
I am not sure where the corruption happened; I presume something went wrong in the upgrade stage. Would it be possible to give me an example of what should be in this column, and I will add the data manually to try and resolve the problem.
Thanks
Dean
###############################
2013-10-25 11:51:55,364 [ERROR] [lava_scheduler_daemon.service.JobQueue] IntegrityError: null value in column "admin_notifications" violates not-null constraint
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 524, in __bootstrap
self.__bootstrap_inner()
File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 504, in run
self.__target(*self.__args, **self.__kwargs)
--- <exception caught here> ---
File "/srv/lava/.cache/eggs/Twisted-12.1.0-py2.7-linux-x86_64.egg/twisted/python/threadpool.py", line 167, in _worker
result = context.call(ctx, function, *args, **kwargs)
File "/srv/lava/.cache/eggs/Twisted-12.1.0-py2.7-linux-x86_64.egg/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/srv/lava/.cache/eggs/Twisted-12.1.0-py2.7-linux-x86_64.egg/twisted/python/context.py", line 81, in callWithContext
return func(*args,**kw)
File "/srv/lava/.cache/git-cache/exports/lava-server/2013-10-17-97c7da5/lava_scheduler_daemon/dbjobsource.py", line 70, in wrapper
return func(*args, **kw)
File "/srv/lava/.cache/git-cache/exports/lava-server/2013-10-17-97c7da5/lava_scheduler_daemon/dbjobsource.py", line 242, in getJobList_impl
job_list = self._assign_jobs(job_list)
File "/srv/lava/.cache/git-cache/exports/lava-server/2013-10-17-97c7da5/lava_scheduler_daemon/dbjobsource.py", line 205, in _assign_jobs
job_list = self._get_health_check_jobs()
File "/srv/lava/.cache/git-cache/exports/lava-server/2013-10-17-97c7da5/lava_scheduler_daemon/dbjobsource.py", line 121, in _get_health_check_jobs
job_list.append(self._getHealthCheckJobForBoard(device))
File "/srv/lava/.cache/git-cache/exports/lava-server/2013-10-17-97c7da5/lava_scheduler_daemon/dbjobsource.py", line 286, in _getHealthCheckJobForBoard
return TestJob.from_json_and_user(job_json, user, True)
File "/srv/lava/.cache/git-cache/exports/lava-server/2013-10-17-97c7da5/lava_scheduler_app/models.py", line 622, in from_json_and_user
job.save()
File "/srv/lava/.cache/eggs/django_restricted_resource-0.2.7-py2.7.egg/django_restricted_resource/models.py", line 71, in save
return super(RestrictedResource, self).save(*args, **kwargs)
File "/srv/lava/.cache/eggs/Django-1.4.2-py2.7.egg/django/db/models/base.py", line 463, in save
self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/srv/lava/.cache/eggs/Django-1.4.2-py2.7.egg/django/db/models/base.py", line 551, in save_base
result = manager._insert([self], fields=fields, return_id=update_pk, using=using, raw=raw)
File "/srv/lava/.cache/eggs/Django-1.4.2-py2.7.egg/django/db/models/manager.py", line 203, in _insert
return insert_query(self.model, objs, fields, **kwargs)
File "/srv/lava/.cache/eggs/Django-1.4.2-py2.7.egg/django/db/models/query.py", line 1593, in insert_query
return query.get_compiler(using=using).execute_sql(return_id)
File "/srv/lava/.cache/eggs/Django-1.4.2-py2.7.egg/django/db/models/sql/compiler.py", line 910, in execute_sql
cursor.execute(sql, params)
File "/srv/lava/.cache/eggs/Django-1.4.2-py2.7.egg/django/db/backends/postgresql_psycopg2/base.py", line 52, in execute
return self.cursor.execute(query, args)
django.db.utils.IntegrityError: null value in column "admin_notifications" violates not-null constraint
2013-10-25 11:51:55,365 [ERROR] [sentry.errors] No servers configured, and sentry not installed. Cannot send message
Hello all,
We have some scheduled downtime on snapshots.linaro.org.
This has been planned for Friday 11th October at 3 PM (UTC+1).
Downtime should be around 1 hour, hopefully shorter.
We will announce by email once the maintenance is complete.
Thanks
Ben Copeland
Hi Linaro Validation team, Ana, Shawn, Anton, Avi, Andreas, Henrik, Dexter,
Andrew, Andre, Francesco, Adam, John, Chad, Marco, Steve, David, Thibaut,
Jonathan and Anthony,
Your projects (LAVA dispatcher, Dexy, Checkfort, n3d, BaculaFS, Pymbolic,
Pycircuit, cuisine_sweet, notch.agent, BigJob, pyexeccontrol, BladeRunner,
kforgeinstall, bishop, hydrat, DoFler, SFLvault, Stylus2x, deployer, and
pIDLy) all use the Pexpect module in one way or another.
Pexpect development has been dormant for a long time, but Jeff Quast and I
have just taken over its maintenance, and we're planning to make a new
release, adding a unicode API and Python 3 support. Although we're giving
this a new major version number, we do intend to keep backwards
compatibility.
If you've got a few minutes to spare, please could you download the latest
beta release, and exercise it with the parts of your code that use pexpect?
You can get the beta from:
https://github.com/pexpect/pexpect/releases/
If you see any regressions, please drop me an e-mail or file an issue at:
https://github.com/pexpect/pexpect/issues
We hope to push the release out in the next couple of weeks, but we'll do
our best to fix any problems people find before the release.
We've also moved the documentation of Pexpect to Readthedocs:
http://pexpect.readthedocs.org/
Thank-you,
Thomas Kluyver
On Tue, 8 Oct 2013 17:48:18 +0100
Dean Arnold <Dean.Arnold(a)arm.com> wrote:
> Hi Neil,
*Please* keep the list in the loop. I am not the sole point of contact
for this issue.
> The reason the scheduler log wasn't present is because my scheduler
> is crashing when I try to run it. Unfortunately the upstart commands
> didn't seem to want to output the failure to me.
It's a daemon, stdout and stderr are closed for all daemons - this
isn't confined to upstart. That is why I advised running the command
manually....
> When I attempted to launch the scheduler manually I was able to
> detect from the command line output that the initial problem was due
> to the postgres database not accepting TCP/IP connections on port
> 5432.
If you have ever had an older version of postgresql installed at the same
time as a newer one, postgresql will assign the new cluster port 5433, then
5434, and so on for each one. This is standard postgresql behaviour and
nothing to do with LAVA.
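To see which port a given cluster is actually using, something like the following may help. pg_lsclusters and the conf path are Debian-style assumptions, and the helper function is just an illustrative parser run here against a sample file:

```shell
# pg_lsclusters (Debian/Ubuntu) lists every installed cluster and its port:
#   sudo pg_lsclusters
# The authoritative value is the "port" line in that cluster's
# postgresql.conf; a small parser for such a file:
port_from_conf() {
    sed -n 's/^[[:space:]]*port[[:space:]]*=[[:space:]]*\([0-9][0-9]*\).*/\1/p' "$1"
}

# Demo against a sample conf fragment:
printf 'max_connections = 100\nport = 5433   # second cluster\n' > /tmp/sample_pg.conf
port_from_conf /tmp/sample_pg.conf   # 5433
```

Whatever port that reports is what the worker's database settings need to match.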
> WARNING:root:This instance will not use sentry as SENTRY_DSN is not
> configured execvp: No such file or directory
> 2013-10-08 16:26:48,742 [ERROR]
> [lava_scheduler_daemon.job.SchedulerMonitorPP] scheduler monitor for
> pdswlava-vetc2-04 crashed: [Failure instance: Traceback (failure with
> no frames): <class 'twisted.internet.error.ProcessTerminated'>: A
> process has ended with a probable error condition: process ended with
> exit code 1. ] 2013-10-08 16:26:48,864 [ERROR] [sentry.errors] No
> servers configured, and sentry not installed. Cannot send message No
> servers configured, and sentry not installed. Cannot send message
Looks like a django error - your database connection is still not
correct.
> Is this something you have seen before?
No. I just googled SENTRY_DSN.
> Did you need to install any
> extra packages outside of what the lava-deployment-tool provides when
> running the setupworker/installworker commands?
No - however, if you have a postgresql server installed on the worker,
it is not required.
> Could I have missed
> a configuration step somewhere?
The initial use of setup instead of setupworker could have messed up
the database configuration on the worker. It just looks like the worker
cannot find the database.
--
Neil Williams
=============
http://www.linux.codehelp.co.uk/
Hi Dave,
I am hoping that you might be working on this ticket: https://cards.linaro.org/browse/CARD-832 (I got the reference from Kanta, although I get permission denied when accessing the card). If not, could anyone in the wider validation email group please redirect me appropriately.
With 13.09 we now have the new Base model variants released, and with the above ticket we expect LAVA support for these models as well.
Since we have a local LAVA instance, I would like to figure out how we can pick up the upgrades and apply them to our local instance. Could anyone please advise when we can expect this LAVA upgrade, adding support for the 64-bit Base models, to come our way?
I want to give Dean a heads-up in terms of his plan to get our setup upgraded.
Thanks
Basil Eljuse...