On 16 December 2011 01:58, Suresh Siddha <suresh.b.siddha@intel.com> wrote:
On Thu, 2011-12-15 at 05:36 -0800, Vincent Guittot wrote:
I'm using cyclictest to easily reproduce the problem on my dual Cortex-A9.
So does cyclictest itself exhibit the problem, or did running cyclictest with another workload show it? In other words, which workload numbers did you see change with this patch?
Running cyclictest -q -t 5 -D 4 on my dual Cortex-A9 shows that the timer and sched softirqs fire only rarely, so cpu_power is almost never updated.
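To make the mechanism concrete, here is a minimal user-space toy model of what is being described above (the struct, field names and numbers are invented for illustration; refresh_cpu_power() only approximates what scale_rt_power()/update_cpu_power() do, and sched_softirq() stands in for the SCHED_SOFTIRQ handler):

#include <stdio.h>

struct cpu {
	unsigned long cpu_power;	/* models rq->cpu_power */
	unsigned long rt_time;		/* time recently consumed by rt tasks */
	unsigned long total_time;	/* wall time over the same window */
};

/* Roughly what scale_rt_power()/update_cpu_power() compute: scale the
 * cpu's capacity down by the fraction of time taken by rt tasks. */
static void refresh_cpu_power(struct cpu *c)
{
	unsigned long avail = c->total_time - c->rt_time;

	c->cpu_power = 1024UL * avail / c->total_time;	/* 1024 == full capacity */
}

/* Stand-in for the SCHED_SOFTIRQ path (run_rebalance_domains() and friends):
 * this is the only place the toy model refreshes cpu_power. */
static void sched_softirq(struct cpu *c)
{
	refresh_cpu_power(c);
}

int main(void)
{
	struct cpu cpu0 = { .cpu_power = 1024, .rt_time = 0, .total_time = 100 };

	/* An rt thread starts eating 90% of cpu0 ... */
	cpu0.rt_time = 90;

	/* ... but until the softirq actually fires, the load balancer
	 * still sees full capacity. */
	printf("before softirq: cpu_power = %lu\n", cpu0.cpu_power);	/* 1024 */
	sched_softirq(&cpu0);
	printf("after  softirq: cpu_power = %lu\n", cpu0.cpu_power);	/* 102 */
	return 0;
}

In this model, as in the real path, nothing touches cpu_power between softirq invocations, which is exactly the staleness the test above exposes.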
I have also used the following sequence:
cyclictest -q -t 5 -D 4 &
sleep 2
cyclictest -q -t 3 --affinity=0 -p 99 -D 2
The cpu_power of cpu0 should start to decrease when the rt threads are started. Without the patch, we must wait for the next sched softirq before cpu_power starts to be updated, and we have no guarantee on the maximum interval. With the patch, cpu_power is updated regularly, based on the balance_interval value.
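Continuing the same toy model, a rough sketch of the idea described above (not the actual patch; last_power_update and balance_interval are invented fields here) is to refresh cpu_power from the idle_balance() path whenever the last update is older than balance_interval, so the staleness is bounded even when the sched softirq does not run:

/* Invented wrapper fields for the sketch; only the staleness bound matters. */
struct cpu_lb {
	struct cpu c;
	unsigned long last_power_update;	/* jiffies-like timestamp */
	unsigned long balance_interval;		/* from the sched_domain */
};

/* Called from the idle_balance() path in this sketch: this_cpu refreshes
 * its own cpu_power if that has not happened within balance_interval. */
static void idle_balance_power_update(struct cpu_lb *lb, unsigned long now)
{
	if (now - lb->last_power_update >= lb->balance_interval) {
		refresh_cpu_power(&lb->c);	/* helper from the sketch above */
		lb->last_power_update = now;
	}
}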
Vincent
Then again, it's probably easier to keep update_group_power on this_cpu than to allow a remote update of your cpu_power.
This additional path for updating cpu_power will only be used by this_cpu, because it is called from idle_balance. But we still have a call to update_group_power from a remote cpu when nohz_idle_balance is called.
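In the same toy model, the remote path referred to here looks roughly like this (in the real kernel it is nohz_idle_balance() iterating the nohz idle cpu mask and running rebalance_domains() on behalf of each tickless cpu; tickless[] below is an invented stand-in for that mask):

static void nohz_idle_balance_model(struct cpu *cpus, const int *tickless,
				    int nr_cpus, int this_cpu)
{
	int i;

	for (i = 0; i < nr_cpus; i++) {
		if (i == this_cpu || !tickless[i])
			continue;
		/* this_cpu refreshes the remote cpu's power on its behalf */
		sched_softirq(&cpus[i]);
	}
}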
As Vincent mentioned, the current mainline kernel already updates the remote cpu's group_power in the nohz idle load balancing path.
Also, with all the recent nohz idle load balancing using the kick mechanism, on a dual-core system there may not be any nohz idle load balancing at all if multiple tasks wake up, run for a short time and go back to idle before the next tick. We rely on the wakeup balance to get it right in this case.
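For illustration only (the numbers are assumed, not from the thread): with HZ=100 a tick lands every 10 ms, so if the woken tasks each run for 1-2 ms and then sleep again, both cores can go busy and return to idle without a single tick falling inside the busy window; the nohz kick then never fires and only the wakeup balancing done in select_task_rq_fair() ever sees that load.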
thanks, suresh