On Thu, 2014-04-03 at 14:32 +0100, Jon Medhurst (Tixy) wrote: [...]
It looks like updates to hmp_target_mask can lag or lead idle-pull attempts. I'm not sure whether it's worth investigating if something can or should be done to avoid that situation occurring, or whether we should just cope with it by adding something like...
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7984458..38bc70b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3735,7 +3735,8 @@ static struct sched_entity *hmp_get_heaviest_task(
 		hmp = hmp_faster_domain(cpu_of(se->cfs_rq->rq));
 		hmp_target_mask = &hmp->cpus;
 		if (target_cpu >= 0) {
-			BUG_ON(!cpumask_test_cpu(target_cpu, hmp_target_mask));
+			if (!cpumask_test_cpu(target_cpu, hmp_target_mask))
+				return 0;
 			hmp_target_mask = cpumask_of(target_cpu);
 		}
 	/* The currently running task is not on the runqueue */
@@ -7255,7 +7256,7 @@ static unsigned int hmp_idle_pull(int this_cpu)
 			/* check if heaviest eligible task on this
 			 * CPU is heavier than previous task
 			 */
-			if (hmp_task_eligible_for_up_migration(curr) &&
+			if (curr && hmp_task_eligible_for_up_migration(curr) &&
 			    curr->avg.load_avg_ratio > ratio &&
 			    cpumask_test_cpu(this_cpu,
 					tsk_cpus_allowed(task_of(curr)))) {
Assuming I don't hear otherwise soon, I plan on going with the above change. It seems to me that BUG_ON is excessive anyway, for something we can safely and simply deal with.