Normally, once the system goes over the tipping point, the optimal result is to filter out the big tasks and migrate them onto big cores, while the scheduler keeps small tasks on LITTLE cores. In the current code, however, once the system is over the tipping point and the "overutilized" flag is set for the root domain, wakeup balancing disables energy aware scheduling entirely. So even when the woken task is a small one, it has no chance to use the energy aware algorithm to select a suitable CPU for power saving, and after the fallback to the traditional load balance path, small tasks can end up migrated onto a big core.
This patch adds a check in the wakeup path: if any low-capacity CPU has spare capacity for the woken task, energy aware scheduling is forced, regardless of whether the system is over the tipping point. Energy aware scheduling is also always run for tasks with a boosted margin, since their placement should be decided by the SchedTune PE filter; such tasks must therefore go through the energy aware selection function.
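In short, the wakeup decision becomes:

  - task has a boosted (SchedTune) margin              -> energy aware path
  - root domain is not overutilized                    -> energy aware path
  - overutilized, but the task fits the spare
    capacity of a lower-capacity CPU                   -> energy aware path
  - otherwise                                          -> fall back to select_idle_sibling()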
Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 kernel/sched/fair.c | 37 +++++++++++++++++++++++++++++++++++--
 1 file changed, 35 insertions(+), 2 deletions(-)
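Note: for illustration, here is a stand-alone, user-space sketch of the
filtering idea -- this is NOT kernel code. The CPU capacities, utilizations
and the 25% headroom margin are invented example values, and task_fits_spare()
below only approximates the kernel helper of the same name.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS			4
#define CAPACITY_MARGIN		1280	/* ~25% headroom, like capacity_margin */

/* Original (maximum) capacity of each CPU: two LITTLE, two big. */
static const int capacity_orig[NR_CPUS] = { 430, 430, 1024, 1024 };
/* Current utilization of each CPU. */
static const int util[NR_CPUS]          = { 100, 380,  600,  900 };

/* Rough stand-in for task_fits_spare(): has @cpu headroom for @task_util? */
static bool task_fits_spare(int task_util, int cpu)
{
	return capacity_orig[cpu] * 1024 >=
	       (util[cpu] + task_util) * CAPACITY_MARGIN;
}

/* Does some CPU with less than the maximum capacity fit the task? */
static bool can_task_run_low_capacity_cpu(int task_util)
{
	int max_cap = 0, target_max_cap, cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (capacity_orig[cpu] > max_cap)
			max_cap = capacity_orig[cpu];

	target_max_cap = max_cap;
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (capacity_orig[cpu] < target_max_cap &&
		    task_fits_spare(task_util, cpu))
			target_max_cap = capacity_orig[cpu];

	return target_max_cap < max_cap;
}

int main(void)
{
	const int utils[] = { 50, 200, 500 };
	unsigned int i;

	for (i = 0; i < sizeof(utils) / sizeof(utils[0]); i++)
		printf("task util %3d -> %s\n", utils[i],
		       can_task_run_low_capacity_cpu(utils[i]) ?
		       "keep energy aware path" :
		       "allow load balance fallback");
	return 0;
}

With these example numbers, tasks of utilization 50 and 200 still fit a
LITTLE CPU and stay on the energy aware path even when the root domain is
overutilized; the 500-utilization task does not fit and takes the fallback.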
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ec43670..429a10b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5442,6 +5442,39 @@ static bool cpu_overutilized(int cpu)
 	return (capacity_of(cpu) * 1024) < (cpu_util(cpu) * capacity_margin);
 }
 
+static bool can_task_run_low_capacity_cpu(struct task_struct *p)
+{
+	int cpu = task_cpu(p);
+	int origin_max_cap = capacity_orig_of(cpu);
+	int target_max_cap = cpu_rq(cpu)->rd->max_cpu_capacity.val;
+	int i;
+
+	for_each_cpu_and(i, tsk_cpus_allowed(p), cpu_online_mask) {
+		if (capacity_orig_of(i) < target_max_cap &&
+		    task_fits_spare(p, i))
+			target_max_cap = capacity_orig_of(i);
+	}
+
+	if (capacity_orig_of(smp_processor_id()) > target_max_cap)
+		return 1;
+
+	if (origin_max_cap > target_max_cap)
+		return 1;
+
+	return 0;
+}
+
+static bool should_run_energy_aware_path(int cpu, struct task_struct *p)
+{
+	unsigned long margin = schedtune_task_margin(p);
+
+	if (margin)
+		return 1;
+
+	return !cpu_rq(cpu)->rd->overutilized ||
+	       can_task_run_low_capacity_cpu(p);
+}
+
 #ifdef CONFIG_SCHED_TUNE
 
 static long
@@ -6049,8 +6082,8 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 	}
 
 	if (!sd) {
-		if (energy_aware() && !cpu_rq(cpu)->rd->overutilized)
-			new_cpu = energy_aware_wake_cpu(p, prev_cpu, sync);
+		if (energy_aware() && should_run_energy_aware_path(prev_cpu, p))
+			new_cpu = energy_aware_wake_cpu(p, cpu, sync);
 		else if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
 			new_cpu = select_idle_sibling(p, new_cpu);
 
-- 
1.9.1