The idle candidate CPUs may have quite different utilization values; as a result, placing the woken task on a candidate CPU that already has a high utilization value may lead to selection of a high CPU OPP.
This commit introduces the variable 'best_idle_max_spare_cap' to select the CPU with the maximum spare capacity among the idle CPUs in the same idle state. As a result, the candidate idle CPU has the lowest utilization; its energy is later compared against the candidate active CPU, which likewise has the lowest utilization among all active CPUs.
This commit is highly inspired by Joonwoo Park's patch "sched/fair: take CPU energy cost into consideration for placement". Joonwoo's patch uses cpu_util(), which may hold a stale utilization value for the woken task, and whose value is hard to adjust for RT/DL pressure. This commit instead uses the variable 'new_util' to calculate the maximum spare capacity of the idle CPUs; 'new_util' already has the woken task's utilization fixed up, and RT/DL pressure can be taken into account more smoothly by subsequent optimization.
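To illustrate the selection rule described above, here is a hypothetical standalone model (not the kernel code itself; 'struct cpu_info' and 'pick_best_idle_cpu' are names invented for this sketch): prefer the shallowest idle state, and within the same idle state, prefer the CPU with the largest spare capacity (capacity_orig - new_util).

```c
#include <limits.h>

/*
 * Hypothetical, simplified model of the idle-CPU selection loop.
 * The real find_best_target() walks CPUs in capacity order and
 * carries more state; this sketch only shows the cstate/spare-cap
 * tie-breaking introduced by the patch.
 */
struct cpu_info {
	int idle_idx;                 /* idle state index; lower = shallower */
	unsigned long capacity_orig;  /* original capacity of the CPU */
	unsigned long new_util;       /* utilization with the woken task fixed up */
};

static int pick_best_idle_cpu(const struct cpu_info *cpus, int nr_cpus)
{
	int best_idle_cstate = INT_MAX;
	unsigned long best_idle_max_spare_cap = 0;
	int best_idle_cpu = -1;
	int i;

	for (i = 0; i < nr_cpus; i++) {
		unsigned long spare = cpus[i].capacity_orig - cpus[i].new_util;

		/* Skip CPUs in a deeper idle state than the current best. */
		if (best_idle_cstate < cpus[i].idle_idx)
			continue;

		/* Same idle state: keep the CPU with more spare capacity. */
		if (best_idle_cstate == cpus[i].idle_idx &&
		    spare < best_idle_max_spare_cap)
			continue;

		best_idle_cstate = cpus[i].idle_idx;
		best_idle_max_spare_cap = spare;
		best_idle_cpu = i;
	}

	return best_idle_cpu;
}
```

With two CPUs in the same idle state, the one with lower 'new_util' (larger spare capacity) wins, while a CPU in a deeper idle state is skipped regardless of its spare capacity.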
Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 kernel/sched/fair.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 062a3b7..4b17a08 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6643,6 +6643,7 @@ static inline int find_best_target(struct task_struct *p, int *backup_cpu,
 	unsigned long target_capacity = ULONG_MAX;
 	unsigned long min_wake_util = ULONG_MAX;
 	unsigned long target_max_spare_cap = 0;
+	unsigned long best_idle_max_spare_cap = 0;
 	unsigned long target_util = ULONG_MAX;
 	unsigned long best_active_util = ULONG_MAX;
 	int best_idle_cstate = INT_MAX;
@@ -6897,15 +6898,27 @@ static inline int find_best_target(struct task_struct *p, int *backup_cpu,
 				 * if they are also less energy efficient.
 				 * IOW, prefer a deep IDLE LITTLE CPU vs a
 				 * shallow idle big CPU.
+				 *
+				 * Within same idle state CPUs, select the
+				 * maximum spare capacity CPU.
 				 */
-				if (sysctl_sched_cstate_aware &&
-				    best_idle_cstate <= idle_idx)
-					continue;
+				if (sysctl_sched_cstate_aware) {
+					if (best_idle_cstate < idle_idx)
+						continue;
+
+					if (best_idle_cstate == idle_idx &&
+					    (capacity_orig - new_util) <
+						best_idle_max_spare_cap)
+						continue;
+				} else {
+					if ((capacity_orig - new_util) <
+						best_idle_max_spare_cap)
+						continue;
+				}
 				/* Keep track of best idle CPU */
 				best_idle_min_cap_orig = capacity_orig;
 				best_idle_cstate = idle_idx;
 				best_idle_cpu = i;
+				best_idle_max_spare_cap =
+					capacity_orig - new_util;
 				continue;
 			}
--
1.9.1