On Fri, Oct 31, 2014 at 09:47:29AM +0100, Vincent Guittot wrote:
> @@ -6414,11 +6399,12 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  	 */
>  	if (busiest->group_type == group_overloaded &&
>  	    local->group_type == group_overloaded) {
> -		load_above_capacity =
> -			(busiest->sum_nr_running - busiest->group_capacity_factor);
> -
> -		load_above_capacity *= (SCHED_LOAD_SCALE * SCHED_CAPACITY_SCALE);
> -		load_above_capacity /= busiest->group_capacity;
> +		load_above_capacity = busiest->sum_nr_running *
> +					SCHED_LOAD_SCALE;
> +		if (load_above_capacity > busiest->group_capacity)
> +			load_above_capacity -= busiest->group_capacity;
> +		else
> +			load_above_capacity = ~0UL;
>  	}
It seems to me we no longer need to assume each task contributes SCHED_LOAD_SCALE, do we?

But as it stands I think this patch already does too much -- it could do with a split, but let me stare at it a wee bit more.