On Thu, Feb 20, 2014 at 05:32:53AM +0000, Alex Shi wrote:
On 02/19/2014 06:12 PM, Alex Shi wrote:
Yes, if wake_affine prefers the current cpu over prev, I can understand setting wake_idx to 0 so there is no bias and prev_eff_load becomes heavier later. But why do we prefer this cpu over prev?
I tracked it down to this commit: 4ae7d5cefd4aa 'improve affine wakeups'. wake_affine has been changed many times for different reasons. Anyway, I would rather not touch it unless some benchmark shows a reason to.
So, the following is the updated patch. It follows the current logic more closely.
=========
From 9a56491e701dcf6aaa12bd61963e81774a908ccd Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@linaro.org>
Date: Sat, 23 Nov 2013 23:18:09 +0800
Subject: [PATCH 04/11] sched: unify imbalance bias for target group
The old code already considers the bias in source/target_load, but still uses imbalance_pct as a final check when finding the idlest/busiest group. That is redundant: if we bias the load in source/target_load, we should not apply imbalance_pct again.
With the cpu_load array removed, it is a good time to unify the target bias handling. So remove imbalance_pct from the final check and apply the bias directly in target_load().
In wake_affine, since every arch's wake_idx is 0, the current logic simply prefers the current cpu, so we keep that behaviour but switch target_load to source_load to make the bias explicit. Thanks to Morten for the reminder!
Signed-off-by: Alex Shi <alex.shi@linaro.org>
---
 kernel/sched/fair.c | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eeffe75..7b910cf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1016,7 +1016,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 
 static unsigned long weighted_cpuload(const int cpu);
 static unsigned long source_load(int cpu);
-static unsigned long target_load(int cpu);
+static unsigned long target_load(int cpu, int imbalance_pct);
 static unsigned long power_of(int cpu);
 static long effective_load(struct task_group *tg, int cpu, long wl, long wg);
 
@@ -3967,7 +3967,7 @@ static unsigned long source_load(int cpu)
  * Return a high guess at the load of a migration-target cpu weighted
  * according to the scheduling class and "nice" value.
  */
-static unsigned long target_load(int cpu)
+static unsigned long target_load(int cpu, int imbalance_pct)
 {
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long total = weighted_cpuload(cpu);
@@ -3975,6 +3975,11 @@ static unsigned long target_load(int cpu)
 	if (!sched_feat(LB_BIAS))
 		return total;
 
+	/*
+	 * Bias target load with imbalance_pct.
+	 */
+	total = total * imbalance_pct / 100;
+
 	return max(rq->cpu_load, total);
 }
 
@@ -4191,7 +4196,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	this_cpu  = smp_processor_id();
 	prev_cpu  = task_cpu(p);
 	load	  = source_load(prev_cpu);
-	this_load = target_load(this_cpu);
+	this_load = source_load(this_cpu);
It looks a bit odd that both the source and destination cpu loads are found using a function named source_load(). IMHO, it would be clearer if you got rid of source_load() and target_load() completely, and just used weighted_cpuload() instead. You only use target_load() twice (further down) anyway.
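Just to illustrate what I mean for wake_affine() — a rough sketch only, not a tested change:

	this_cpu  = smp_processor_id();
	prev_cpu  = task_cpu(p);
	load	  = weighted_cpuload(prev_cpu);	/* was source_load(prev_cpu) */
	this_load = weighted_cpuload(this_cpu);	/* was target_load()/source_load() */

Both sides then read the same unbiased number and the source/target naming question goes away.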
 	/*
 	 * If sync wakeup then subtract the (maximum possible)
@@ -4247,7 +4252,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 
 	if (balanced ||
 	    (this_load <= load &&
-	     this_load + target_load(prev_cpu) <= tl_per_task)) {
+	     this_load + source_load(prev_cpu) <= tl_per_task)) {
 		/*
 		 * This domain has SD_WAKE_AFFINE and
 		 * p is cache cold in this domain, and
 
@@ -4293,7 +4298,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 		if (local_group)
 			load = source_load(i);
 		else
-			load = target_load(i);
+			load = target_load(i, imbalance);
Here you could easily use weighted_cpuload() instead and apply the bias as before (below); see the sketch after the next hunk.
 		avg_load += load;
 	}
 
@@ -4309,7 +4314,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 		}
 	} while (group = group->next, group != sd->groups);
 
-	if (!idlest || 100*this_load < imbalance*min_load)
+	if (!idlest || this_load < min_load)
This change would go away if you used weighted_cpuload().
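For find_idlest_group() that would look roughly like this (sketch only, untested), with the existing imbalance check left exactly as it is today:

		/* same unbiased load for local and remote groups */
		load = weighted_cpuload(i);
		avg_load += load;
	...
	/* bias applied once, at the comparison, as before */
	if (!idlest || 100*this_load < imbalance*min_load)
		return NULL;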
 		return NULL;
 	return idlest;
 }
 
@@ -5745,6 +5750,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 {
 	unsigned long load;
 	int i;
+	int bias = 100 + (env->sd->imbalance_pct - 100) / 2;
 
 	memset(sgs, 0, sizeof(*sgs));
 
@@ -5752,8 +5758,8 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		struct rq *rq = cpu_rq(i);
 
 		/* Bias balancing toward cpus of our domain */
-		if (local_group)
-			load = target_load(i);
+		if (local_group && env->idle != CPU_IDLE)
+			load = target_load(i, bias);
Could be weighted_cpuload() instead, but you would have to keep the lines you delete below.
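I.e. roughly (sketch only):

		/* no per-cpu bias here, local group or not */
		load = weighted_cpuload(i);

with the imbalance_pct check you delete in find_busiest_group() below kept in place.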
 		else
 			load = source_load(i);
 
@@ -6193,14 +6199,6 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 		if ((local->idle_cpus < busiest->idle_cpus) &&
 		    busiest->sum_nr_running <= busiest->group_weight)
 			goto out_balanced;
-	} else {
-		/*
-		 * In the CPU_NEWLY_IDLE, CPU_NOT_IDLE cases, use
-		 * imbalance_pct to be conservative.
-		 */
-		if (100 * busiest->avg_load <=
-				env->sd->imbalance_pct * local->avg_load)
-			goto out_balanced;
 	}
 
 force_balance:
I think it is clearer now what this patch set does. It rips out cpu_load[] completely and changes all its users to use weighted_cpuload() (cfs.runnable_load_avg) instead. The longer term view provided by the cpu_load[] indexes is not replaced. Whether that is a loss, I'm not sure.
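For reference, weighted_cpuload() is, if I'm reading the current fair.c right, simply:

static unsigned long weighted_cpuload(const int cpu)
{
	/* per-cpu cfs_rq runnable load average from per-entity load tracking */
	return cpu_rq(cpu)->cfs.runnable_load_avg;
}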
Morten