Init this_load to max value instead of 0 in find_idlest_group().

If the local group is skipped because it has no allowed CPUs, this_load
stays at 0, no idlest group will be returned, and the selected CPU will
be a non-allowed one (which will then be replaced in select_fallback_rq()
by a random allowed CPU). With the default value set to max, we will use
the idlest group even if we skip the local_group.
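For reference, the check this change affects is the group selection at
the tail of find_idlest_group() (paraphrased from kernel/sched/fair.c of
this era; exact surrounding context may differ slightly):

	/*
	 * With this_load left at 0, 100*this_load is always smaller
	 * than imbalance*min_load, so NULL ("stay local") is returned
	 * even when the local group was skipped and 'idlest' is a
	 * perfectly good candidate.
	 */
	if (!idlest || 100*this_load < imbalance*min_load)
		return NULL;
	return idlest;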
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f9b03c1..2d9f782 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3532,7 +3532,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		  int this_cpu, int sd_flag)
 {
 	struct sched_group *idlest = NULL, *group = sd->groups;
-	unsigned long min_load = ULONG_MAX, this_load = 0;
+	unsigned long min_load = ULONG_MAX, this_load = ULONG_MAX;
 	int load_idx = sd->forkexec_idx;
 	int imbalance = 100 + (sd->imbalance_pct-100)/2;
 
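To illustrate the effect of the new init value, here is a minimal
userspace sketch (not kernel code; the load and imbalance_pct values
are made up) that evaluates the same comparison with both init values:

	#include <stdio.h>
	#include <limits.h>

	int main(void)
	{
		/* Hypothetical idlest-group load; default imbalance_pct of 125. */
		unsigned long min_load = 1024;
		unsigned long imbalance = 100 + (125 - 100)/2;	/* 112 */
		unsigned long this_load;

		/* Old init: local group skipped, this_load stays 0, "stay local" wins. */
		this_load = 0;
		printf("this_load = 0:         %s\n",
		       100*this_load < imbalance*min_load ?
		       "NULL (stay local)" : "use idlest");

		/*
		 * New init: 100*ULONG_MAX wraps, but only down to
		 * ULONG_MAX - 99, which still compares as huge, so the
		 * idlest group is returned.
		 */
		this_load = ULONG_MAX;
		printf("this_load = ULONG_MAX: %s\n",
		       100*this_load < imbalance*min_load ?
		       "NULL (stay local)" : "use idlest");

		return 0;
	}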