Energy aware scheduling considers the system to be over the tipping point when any CPU in the system is overutilized. There are several places where the root domain's overutilized flag is set to indicate this: the scheduler tick, load balance, and task enqueue. On the other hand, the scheduler relies solely on load balance's update_sg_lb_stats(), which iterates over all CPUs, to verify that no CPU is overutilized and only then clears the flag once the system is back under the tipping point.
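For reference, the per-CPU tipping point condition is the capacity margin check in fair.c; a rough sketch as it appears in the EAS series (the exact margin value and form may differ between trees):

static bool cpu_overutilized(int cpu)
{
	/*
	 * A CPU is considered overutilized when its utilization, scaled
	 * by the capacity margin, no longer fits within its capacity.
	 */
	return (capacity_of(cpu) * 1024) < (cpu_util(cpu) * capacity_margin);
}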
An idle CPU keeps a stale utilization value which is not updated until the CPU wakes up. In the worst case a CPU may stay in an idle state for a very long time (even on the order of seconds); if it had a high utilization value when entering idle, the scheduler will keep considering the CPU "overutilized" and will never switch back to the state under the tipping point. As a result, a very small task can stay on a big core for a long time because the system cannot go back to the energy aware path.
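To illustrate why the value goes stale: cpu_util() just reads the last accumulated PELT signal for the CPU's root cfs_rq, and nothing refreshes that signal while the CPU sleeps. A rough sketch of the helper as found in fair.c (exact form may vary between trees):

static int cpu_util(int cpu)
{
	/* Last value accumulated before the CPU went idle. */
	unsigned long util = cpu_rq(cpu)->cfs.avg.util_avg;
	unsigned long capacity = capacity_of(cpu);

	/*
	 * util_avg is only updated when the runqueue is touched, so a
	 * CPU that stays idle keeps reporting its pre-idle utilization.
	 */
	return (util >= capacity) ? capacity : util;
}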
This patch checks the CPU idle state in update_sg_lb_stats(): if a CPU is idle it is simply considered not overutilized, so idle CPUs can no longer set the tipping point.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 937eca2..43eae09 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7409,7 +7409,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if (!nr_running && idle_cpu(i))
 			sgs->idle_cpus++;
 
-		if (cpu_overutilized(i)) {
+		if (cpu_overutilized(i) && !idle_cpu(i)) {
 			*overutilized = true;
 			if (!sgs->group_misfit_task && rq->misfit_task)
 				sgs->group_misfit_task = capacity_of(i);
-- 
1.9.1