On Mon, Jul 04, 2016 at 12:54:41PM +0100, Morten Rasmussen wrote:
> On Thu, Jun 23, 2016 at 09:43:10PM +0800, Leo Yan wrote:
> > If the number of runnable tasks in a sched_group is bigger than the
> > number of CPUs in that group, the group's load_avg signal will be
> > underestimated, because a CPU's load_avg cannot accumulate the load
> > of all its running tasks.
> > 
> > On the other hand, another sched_group may have fewer tasks than
> > CPUs. As a result, the first sched_group's load_per_task will be
> > much lower than the second group's value.
> > 
> > So this patch considers this situation and sets imbn to 1 to lower
> > the imbalance bar.
> > Signed-off-by: Leo Yan <leo.yan@linaro.org>
> >  kernel/sched/fair.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 16e1fe2b..7d18c7d 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7537,6 +7537,9 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
> >  		local->load_per_task = cpu_avg_load_per_task(env->dst_cpu);
> >  	else if (busiest->load_per_task > local->load_per_task)
> >  		imbn = 1;
> > +	else if (busiest->sum_nr_running >= busiest->group_weight &&
> > +		 local->sum_nr_running < local->group_weight)
> > +		imbn = 1;
> Have you seen any actual effect of this patch?
To be honest, this patch has no obvious effect on performance :)
> fix_small_imbalance() is generally a mess of heuristics that, to the
> best of my knowledge, nobody can explain in detail anymore.
> If you want to balance tasks/cpu why not just set:
> 
> 	env->imbalance = busiest->load_per_task;
> 
> and just return?
This is based on patches 6/7, which changed the semantics of avg_load.
The intent here is to loosen the imbalance condition when the busiest
group has more tasks than CPUs, so that we eventually reach the code
below and return with env->imbalance = busiest->load_per_task:
	if (busiest->avg_load + scaled_busy_load_per_task >=
	    local->avg_load + (scaled_busy_load_per_task * imbn)) {
		env->imbalance = busiest->load_per_task;
		return;
	}
But as Vincent suggested in his review of patches 6/7, I should drop
them, so I think I can take your suggestion.
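Just so we are talking about the same thing, below is a rough sketch of
how I read your suggestion (untested and purely illustrative; it reuses
the existing fix_small_imbalance() names and simply skips the remaining
heuristics for the over-subscribed case):

	static inline void
	fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
	{
		struct sg_lb_stats *busiest = &sds->busiest_stat;
		struct sg_lb_stats *local = &sds->local_stat;

		/*
		 * Busiest group is over-subscribed while the local group
		 * still has idle CPUs: move one average task's worth of
		 * load and skip the rest of the heuristics.
		 */
		if (busiest->sum_nr_running > busiest->group_weight &&
		    local->sum_nr_running < local->group_weight) {
			env->imbalance = busiest->load_per_task;
			return;
		}

		/* ... existing heuristics stay unchanged below ... */
	}

If that matches what you meant, I will rework the patch along these
lines for the next version.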
Thanks,
Leo Yan