The function group_smaller_cpu_capacity() checks whether one sched group has smaller capacity than another:
return sg->sgc->max_capacity + capacity_margin - SCHED_LOAD_SCALE < ref->sgc->max_capacity;
The term (capacity_margin - SCHED_LOAD_SCALE) is an absolute offset used in the comparison, so the check easily breaks when the two sched groups differ only slightly in capacity (e.g. a CA53.Fast + CA53.Slow system).
When this check misjudges, the misfit flag is wrongly cleared for the sched group, so misfit tasks never get a chance to migrate to a higher-capacity CPU.
This patch compares the sched groups' max_capacity values directly, and also fixes the max_capacity assignment to use the original capacity (before RT scaling is applied).
Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2ae55f6..f5fb04f 100755
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6923,6 +6923,8 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 	raw_spin_unlock_irqrestore(&mcc->lock, flags);

 skip_unlock: __attribute__ ((unused));
+	sdg->sgc->max_capacity = capacity;
+
 	capacity *= scale_rt_capacity(cpu);
 	capacity >>= SCHED_CAPACITY_SHIFT;

@@ -6931,7 +6933,6 @@ skip_unlock: __attribute__ ((unused));

 	cpu_rq(cpu)->cpu_capacity = capacity;
 	sdg->sgc->capacity = capacity;
-	sdg->sgc->max_capacity = capacity;
 }

 void update_group_capacity(struct sched_domain *sd, int cpu)
@@ -7103,8 +7104,7 @@ group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
 static inline bool
 group_smaller_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
 {
-	return sg->sgc->max_capacity + capacity_margin - SCHED_LOAD_SCALE <
-	       ref->sgc->max_capacity;
+	return sg->sgc->max_capacity < ref->sgc->max_capacity;
 }

 static enum group_type group_classify(struct lb_env *env,
--
2.7.4