On a tickless system, one CPU runs load balance for all idle CPUs. The cpu_load of this CPU is updated before starting the load balance of each of the other idle CPUs. We should instead update the cpu_load of the balance_cpu.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1ca4fe4..9ae3a5b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4794,14 +4794,15 @@ static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
 		if (need_resched())
 			break;

-		raw_spin_lock_irq(&this_rq->lock);
-		update_rq_clock(this_rq);
-		update_idle_cpu_load(this_rq);
-		raw_spin_unlock_irq(&this_rq->lock);
+		rq = cpu_rq(balance_cpu);
+
+		raw_spin_lock_irq(&rq->lock);
+		update_rq_clock(rq);
+		update_idle_cpu_load(rq);
+		raw_spin_unlock_irq(&rq->lock);

 		rebalance_domains(balance_cpu, CPU_IDLE);

-		rq = cpu_rq(balance_cpu);
 		if (time_after(this_rq->next_balance, rq->next_balance))
 			this_rq->next_balance = rq->next_balance;
 	}
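The effect of the patch can be illustrated with a small user-space sketch (not kernel code: struct rq, update_rq_clock() and update_idle_cpu_load() below are simplified stand-ins for the scheduler internals, and the locking is omitted). The key point is that the runqueue being refreshed is balance_cpu's, not the balancing CPU's own:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 4

/* Minimal stand-in for the scheduler's per-CPU runqueue. */
struct rq {
	unsigned long cpu_load;
	unsigned long clock;
	bool idle;
};

static struct rq runqueues[NR_CPUS];

#define cpu_rq(cpu) (&runqueues[(cpu)])

/* Stand-in: bump the runqueue clock. */
static void update_rq_clock(struct rq *rq)
{
	rq->clock++;
}

/* Stand-in: decay the idle CPU's tracked load toward zero. */
static void update_idle_cpu_load(struct rq *rq)
{
	rq->cpu_load /= 2;
}

/*
 * Mirrors the fixed nohz_idle_balance() loop: the clock and idle
 * load are refreshed on the runqueue being balanced (balance_cpu),
 * not repeatedly on the CPU doing the balancing (this_cpu).
 */
static void nohz_idle_balance(int this_cpu)
{
	for (int balance_cpu = 0; balance_cpu < NR_CPUS; balance_cpu++) {
		struct rq *rq;

		if (balance_cpu == this_cpu || !cpu_rq(balance_cpu)->idle)
			continue;

		rq = cpu_rq(balance_cpu);	/* the fix: balance_cpu's rq */
		update_rq_clock(rq);
		update_idle_cpu_load(rq);

		/* rebalance_domains(balance_cpu, CPU_IDLE) would run here */
	}
}
```

With this shape, each balanced runqueue gets exactly one refresh per pass, and the balancer's own runqueue is left alone.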
On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote:
On a tickless system, one CPU runs load balance for all idle CPUs. The cpu_load of this CPU is updated before starting the load balance of each of the other idle CPUs. We should instead update the cpu_load of the balance_cpu.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

 kernel/sched/fair.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1ca4fe4..9ae3a5b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4794,14 +4794,15 @@ static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
 		if (need_resched())
 			break;

-		raw_spin_lock_irq(&this_rq->lock);
-		update_rq_clock(this_rq);
-		update_idle_cpu_load(this_rq);
-		raw_spin_unlock_irq(&this_rq->lock);
+		rq = cpu_rq(balance_cpu);
+
+		raw_spin_lock_irq(&rq->lock);
+		update_rq_clock(rq);
+		update_idle_cpu_load(rq);
+		raw_spin_unlock_irq(&rq->lock);

 		rebalance_domains(balance_cpu, CPU_IDLE);

-		rq = cpu_rq(balance_cpu);
 		if (time_after(this_rq->next_balance, rq->next_balance))
 			this_rq->next_balance = rq->next_balance;
 	}
Ew, banging locks and updating clocks to what good end?
-Mike
On Thu, 2012-09-13 at 08:49 +0200, Mike Galbraith wrote:
On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote:
On a tickless system, one CPU runs load balance for all idle CPUs. The cpu_load of this CPU is updated before starting the load balance of each of the other idle CPUs. We should instead update the cpu_load of the balance_cpu.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

 kernel/sched/fair.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1ca4fe4..9ae3a5b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4794,14 +4794,15 @@ static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
 		if (need_resched())
 			break;

-		raw_spin_lock_irq(&this_rq->lock);
-		update_rq_clock(this_rq);
-		update_idle_cpu_load(this_rq);
-		raw_spin_unlock_irq(&this_rq->lock);
+		rq = cpu_rq(balance_cpu);
+
+		raw_spin_lock_irq(&rq->lock);
+		update_rq_clock(rq);
+		update_idle_cpu_load(rq);
+		raw_spin_unlock_irq(&rq->lock);

 		rebalance_domains(balance_cpu, CPU_IDLE);

-		rq = cpu_rq(balance_cpu);
 		if (time_after(this_rq->next_balance, rq->next_balance))
 			this_rq->next_balance = rq->next_balance;
 	}
Ew, banging locks and updating clocks to what good end?
Well, updating the load statistics on the cpu you're going to balance seems like a good end to me.. ;-) No point updating the local statistics N times and leaving the ones you're going to balance stale for god knows how long.
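Peter's point can be made concrete with a toy count of statistics refreshes (hypothetical names; the real cost depends on the kernel's load-tracking details). Before the patch, the balancer refreshed its own statistics once per idle CPU it visited, while every runqueue it was about to balance kept stale numbers:

```c
#include <assert.h>

#define NR_CPUS 4

/* updates[i] counts how often CPU i's load statistics were refreshed */
static int updates[NR_CPUS];

/*
 * Old behaviour: this_cpu's stats are refreshed before balancing each
 * idle CPU, so the local statistics get N redundant updates and the
 * balanced runqueues get none.
 */
static void old_nohz_idle_balance(int this_cpu)
{
	for (int balance_cpu = 0; balance_cpu < NR_CPUS; balance_cpu++) {
		if (balance_cpu == this_cpu)
			continue;
		updates[this_cpu]++;	/* redundant local refresh */
		/* balance_cpu's statistics stay stale here */
	}
}
```

With four CPUs and CPU 0 balancing, CPU 0's statistics are refreshed three times and CPUs 1-3 never are; the patch inverts that.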
On Thu, 2012-09-13 at 10:19 +0200, Peter Zijlstra wrote:
On Thu, 2012-09-13 at 08:49 +0200, Mike Galbraith wrote:
On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote:
On a tickless system, one CPU runs load balance for all idle CPUs. The cpu_load of this CPU is updated before starting the load balance of each of the other idle CPUs. We should instead update the cpu_load of the balance_cpu.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

 kernel/sched/fair.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1ca4fe4..9ae3a5b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4794,14 +4794,15 @@ static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
 		if (need_resched())
 			break;

-		raw_spin_lock_irq(&this_rq->lock);
-		update_rq_clock(this_rq);
-		update_idle_cpu_load(this_rq);
-		raw_spin_unlock_irq(&this_rq->lock);
+		rq = cpu_rq(balance_cpu);
+
+		raw_spin_lock_irq(&rq->lock);
+		update_rq_clock(rq);
+		update_idle_cpu_load(rq);
+		raw_spin_unlock_irq(&rq->lock);

 		rebalance_domains(balance_cpu, CPU_IDLE);

-		rq = cpu_rq(balance_cpu);
 		if (time_after(this_rq->next_balance, rq->next_balance))
 			this_rq->next_balance = rq->next_balance;
 	}
Ew, banging locks and updating clocks to what good end?
Well, updating the load statistics on the cpu you're going to balance seems like a good end to me.. ;-) No point updating the local statistics N times and leaving the ones you're going to balance stale for god knows how long.
Sure, the goal is fine, but I wonder about the price vs payoff. I was thinking perhaps the redundant updates should go away instead, unless stats are shown to be causing real world pain.
-Mike
On 9/13/12, Peter Zijlstra peterz@infradead.org wrote:
On Thu, 2012-09-13 at 08:49 +0200, Mike Galbraith wrote:
On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote:
Well, updating the load statistics on the cpu you're going to balance seems like a good end to me.. ;-) No point updating the local statistics N times and leaving the ones you're going to balance stale for god knows how long.
Don't you think this patch has a very poor subject line?
Thanks, Rakib
On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote:
On a tickless system, one CPU runs load balance for all idle CPUs. The cpu_load of this CPU is updated before starting the load balance of each of the other idle CPUs. We should instead update the cpu_load of the balance_cpu.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

 kernel/sched/fair.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1ca4fe4..9ae3a5b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4794,14 +4794,15 @@ static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
 		if (need_resched())
 			break;

-		raw_spin_lock_irq(&this_rq->lock);
-		update_rq_clock(this_rq);
-		update_idle_cpu_load(this_rq);
-		raw_spin_unlock_irq(&this_rq->lock);
+		rq = cpu_rq(balance_cpu);
+
+		raw_spin_lock_irq(&rq->lock);
+		update_rq_clock(rq);
+		update_idle_cpu_load(rq);
+		raw_spin_unlock_irq(&rq->lock);

D'oh, good spotting..

 		rebalance_domains(balance_cpu, CPU_IDLE);

-		rq = cpu_rq(balance_cpu);
 		if (time_after(this_rq->next_balance, rq->next_balance))
 			this_rq->next_balance = rq->next_balance;
 	}