On 01/09/2013 11:14 AM, Preeti U Murthy wrote:
This is where making load balancing and wakeup balancing (select_idle_sibling) cooperate comes in. How about we always schedule the woken-up task on prev_cpu? This seems more sensible, considering that load balancing counts blocked load as part of the load of cpu2.
Hi Preeti,
I'm not sure we want such a steady state at the core level, because we take advantage of migrating waking tasks between cores that share a cache, as Matthew demonstrated. But I agree that reaching such a steady state at the cluster and CPU level is interesting.
IMHO, you're right that taking the blocked load into consideration should minimize task migration between clusters, but it should not prevent fast task migration between cores that share their cache.
True, Vincent. But I think one disadvantage, even at the CPU or cluster level, is that when we consider blocked load, we might prevent any more tasks from being scheduled on that CPU during periodic load balance if the blocked load is too high. That leads to very poor CPU utilization.
The blocked load of a cluster will be high only if the blocked tasks have run recently. The contribution of a blocked task is divided by 2 every 32 ms, so a high blocked load must be made up of recently running tasks; long-sleeping tasks will not influence load balancing. The load-balance period ranges from 1 tick (10 ms for idle load balance on ARM) up to 256 ms (for busy load balance), so a high blocked load implies some tasks have run recently; otherwise the blocked load will be small and will not have much influence on load balancing.
Just tried using cfs's runnable_load_avg + blocked_load_avg in weighted_cpuload() with my v3 patchset; aim9 shared-workfile testing shows the performance dropped by 70% more on the NHM EP machine. :(