On Mon, Feb 24, 2014 at 02:58:35AM +0000, Alex Shi wrote:
> > > @@ -4191,7 +4196,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > >  	this_cpu  = smp_processor_id();
> > >  	prev_cpu  = task_cpu(p);
> > >  	load      = source_load(prev_cpu);
> > > -	this_load = target_load(this_cpu);
> > > +	this_load = source_load(this_cpu);
> > It looks a bit odd that both source and destination cpu loads are
> > found using a function named source_load(). IMHO, it would be clearer
> > if you got rid of source_load() and target_load() completely, and
> > just used weighted_cpuload() instead. You only use target_load()
> > twice (further down) anyway.
> Yes, weighted_cpuload() has a clearer meaning.
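To be concrete, what I have in mind for the hunk above is simply reading
the runnable load for both cpus (untested sketch):

	this_cpu  = smp_processor_id();
	prev_cpu  = task_cpu(p);
	/* raw runnable load, no source/target distinction */
	load      = weighted_cpuload(prev_cpu);
	this_load = weighted_cpuload(this_cpu);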
> > >  	/*
> > >  	 * If sync wakeup then subtract the (maximum possible)
> > > @@ -4247,7 +4252,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > >  	if (balanced ||
> > >  	    (this_load <= load &&
> > > -	     this_load + target_load(prev_cpu) <= tl_per_task)) {
> > > +	     this_load + source_load(prev_cpu) <= tl_per_task)) {
> > >  		/*
> > >  		 * This domain has SD_WAKE_AFFINE and
> > >  		 * p is cache cold in this domain, and
> > > @@ -4293,7 +4298,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
> > >  		if (local_group)
> > >  			load = source_load(i);
> > >  		else
> > > -			load = target_load(i);
> > > +			load = target_load(i, imbalance);
> > Here you could easily use weighted_cpuload() instead and apply the
> > bias as before (below).
> > >  			avg_load += load;
> > >  		}
> > > @@ -4309,7 +4314,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
> > >  		}
> > >  	} while (group = group->next, group != sd->groups);
> > > -	if (!idlest || 100*this_load < imbalance*min_load)
> > > +	if (!idlest || this_load < min_load)
> > This change would go away if you used weighted_cpuload().
> Yes, but it seems better to leave the bias unified in target_load().
My point is that you could get rid of target_load() completely. source_load() is already gone in your patches.
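For find_idlest_group() that would look roughly like this, keeping the
imbalance_pct bias at the comparison where it lives today (untested
sketch based on the current code, so take the details with a grain of
salt):

	for_each_cpu(i, sched_group_cpus(group)) {
		/* no per-cpu bias, just the raw runnable load */
		load = weighted_cpuload(i);
		avg_load += load;
	}

	...

	/* keep the existing scaling here instead of inside a helper */
	if (!idlest || 100*this_load < imbalance*min_load)
		return NULL;
	return idlest;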
> > >  		return NULL;
> > >  	return idlest;
> > >  }
> > > @@ -5745,6 +5750,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> > >  {
> > >  	unsigned long load;
> > >  	int i;
> > > +	int bias = 100 + (env->sd->imbalance_pct - 100) / 2;
> > >  	memset(sgs, 0, sizeof(*sgs));
> > > @@ -5752,8 +5758,8 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> > >  		struct rq *rq = cpu_rq(i);
> > >  		/* Bias balancing toward cpus of our domain */
> > > -		if (local_group)
> > > -			load = target_load(i);
> > > +		if (local_group && env->idle != CPU_IDLE)
> > > +			load = target_load(i, bias);
> > Could be weighted_cpuload() instead, but you would have to keep the
> > lines you delete below.
> In the current logic, target_load() may look back at the older
> cpu_load, and it applies the bias when the idx is busy or idle. I am
> afraid weighted_cpuload() is not good here, and I prefer to keep the
> bias in one uniform place rather than spread it over a larger scope.
Right, using weighted_cpuload() here would remove the local group versus other group biasing completely. The new bias is different from the existing one though. You have already discarded the bias from source_load(). The new bias is just a scaling factor, so it isn't exactly comparable to the existing one.
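To spell that out: the existing bias takes the max of the decayed
cpu_load history and the instantaneous load, roughly like this (current
mainline helper quoted from memory, trimmed):

	static unsigned long target_load(int cpu, int type)
	{
		struct rq *rq = cpu_rq(cpu);
		unsigned long total = weighted_cpuload(cpu);

		if (type == 0 || !sched_feat(LB_BIAS))
			return total;

		/* prefer the larger of the history and the current load */
		return max(rq->cpu_load[type-1], total);
	}

while, if I read your patch correctly, the new one boils down to scaling
weighted_cpuload() by a percentage.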
Now that source_load() is gone, would it then make sense to rename target_load() to biased_load() or something?
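If the new helper really is just a percentage scaling of
weighted_cpuload(), the rename would make the call sites
self-explanatory; hypothetically (name and exact form are mine, not
taken from your patch):

	/* hypothetical: scale the runnable load by bias_pct/100 */
	static unsigned long biased_load(int cpu, int bias_pct)
	{
		return weighted_cpuload(cpu) * bias_pct / 100;
	}

so target_load(i, bias) above would read biased_load(i, bias).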
Morten