On Mon, Sep 15, 2014 at 03:07:44PM -0400, Nicolas Pitre wrote:
> On Mon, 15 Sep 2014, Peter Zijlstra wrote:
> Let's suppose a task running on a 1GHz CPU producing a load of 100.
> The same task on a 100MHz CPU would produce a load of 1000 because that CPU is 10x slower. So to properly evaluate the load of a task when moving it around, we want to normalize its load based on the CPU performance. In this case the correction factor would be 0.1.
> Given those normalized loads, we need to scale CPU capacity as well. If the 1GHz CPU can handle 50 of those tasks it has a capacity of 5000.
> In theory the 100MHz CPU could handle only 5 of those tasks, meaning it has a normalized capacity of 500, but only if the load metric is already normalized as well.
> Or am I completely missing the point here?
So I was thinking of the usage as per the next patch, where we decide whether a cpu is 'full' or not based on the utilization measure. For that measure we're not interested in inter-CPU relations at all, and any use of capacity scaling simply doesn't make sense.
But I think your asking this question shows a 'bigger' problem, in that the Changelogs are entirely failing to describe the actual problem and the proposed solution. Because if that were clear, I don't think we would be having this particular discussion.