Hi Chris,
Any comments are appreciated! :)
On 04/04/2014 10:49 PM, Alex Shi wrote:
Hi Chris, I really appreciate your detailed explanations! It looks like your patch makes an excellent improvement on this issue.
I am just wondering whether we could resolve this problem in a slightly simpler way, as in the following patch: let the destination CPU of the task migration do the active load balance instead of the source CPU. That way the source CPU gets the time back while the destination is waking, and the destination no longer needs a keepalive. What is your opinion on this?
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7984458..f30e598 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6275,7 +6275,7 @@ more_balance:
 		raw_spin_unlock_irqrestore(&busiest->lock, flags);

 		if (active_balance) {
-			stop_one_cpu_nowait(cpu_of(busiest),
+			stop_one_cpu_nowait(busiest->push_cpu,
 				active_load_balance_cpu_stop, busiest,
 				&busiest->active_balance_work);
 		}
@@ -7198,7 +7198,7 @@ static void hmp_force_up_migration(int this_cpu)
 		raw_spin_unlock_irqrestore(&target->lock, flags);

 		if (force)
-			stop_one_cpu_nowait(cpu_of(target),
+			stop_one_cpu_nowait(target->push_cpu,
 				hmp_active_task_migration_cpu_stop,
 				target, &target->active_balance_work);
 	}
@@ -7295,7 +7295,7 @@ static unsigned int hmp_idle_pull(int this_cpu)
 	if (force) {
 		/* start timer to keep us awake */
 		hmp_cpu_keepalive_trigger();
-		stop_one_cpu_nowait(cpu_of(target),
+		stop_one_cpu_nowait(target->push_cpu,
 			hmp_active_task_migration_cpu_stop,
 			target, &target->active_balance_work);
 	}
So do you have data showing that the trade-off is worthwhile? For example, the resched-interrupt cost plus the CPU-keepalive cost versus the cost of going idle and being woken up, or benchmark data showing a performance/power benefit.
I have traces which show the resulting improvement, but it is so small that it is lost in the noise in all the benchmarks we have. Most of the benchmarks do not actually involve much migration between clusters - typically the 'benchmark' app tasks start heavy processing and continue until complete, and with the HMP thresholds we use, our lighter workloads generally migrate once or twice per operation.
Are your statistics too sensitive to share on the linaro-kernel mailing list? If not, I would be very glad to see your data. :)