On Wed, Sep 17, 2014 at 07:45:27PM +0100, Morten Rasmussen wrote:
On Mon, Sep 15, 2014 at 09:01:59PM +0100, Peter Zijlstra wrote:
On Mon, Sep 15, 2014 at 03:07:44PM -0400, Nicolas Pitre wrote:
On Mon, 15 Sep 2014, Peter Zijlstra wrote:
Let's suppose a task running on a 1GHz CPU produces a load of 100.
The same task on a 100MHz CPU would produce a load of 1000 because that CPU is 10x slower. So to properly evaluate the load of a task when moving it around, we want to normalize its load based on the CPU performance. In this case the correction factor would be 0.1.
Given those normalized loads, we need to scale CPU capacity as well. If the 1GHz CPU can handle 50 of those tasks, it has a capacity of 5000.
In theory the 100MHz CPU could handle only 5 of those tasks, meaning it has a normalized capacity of 500, but only if the load metric is already normalized as well.
Or am I completely missing the point here?
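To make that arithmetic concrete, here is a minimal sketch of the normalization Nicolas describes (plain user-space C, not scheduler code; normalize_load() and REF_FREQ_MHZ are invented for the illustration):

#include <stdio.h>

/*
 * Illustration only: normalize a task's measured load by the relative
 * performance of the cpu it was measured on, so loads and capacities
 * end up in comparable units.
 */
#define REF_FREQ_MHZ	1000	/* the 1GHz cpu is taken as the reference */

static unsigned long normalize_load(unsigned long raw_load,
				    unsigned long cpu_freq_mhz)
{
	return raw_load * cpu_freq_mhz / REF_FREQ_MHZ;
}

int main(void)
{
	/* task on the 1GHz cpu: raw load 100 -> normalized 100 */
	printf("%lu\n", normalize_load(100, 1000));

	/* same task on the 100MHz cpu: raw load 1000 -> normalized 100 */
	printf("%lu\n", normalize_load(1000, 100));

	/*
	 * Capacity in the same units: 50 such tasks on the 1GHz cpu
	 * gives 5000, 5 tasks on the 100MHz cpu gives 500 -- but only
	 * because the task load was normalized first.
	 */
	return 0;
}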
So I was thinking of the usage as per the next patch, where we decide whether a cpu is 'full' or not based on the utilization measure. For this measure we're not interested in inter-CPU relations at all, and any use of capacity scaling simply doesn't make sense.
Right. You don't need to scale capacity to determine whether a cpu is full or not if you don't have DVFS, but I don't think it hurts if it is done right. We need the scaling to figure out how much capacity is available.
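Roughly what the two views amount to (a sketch only; "usage" and "capacity" here stand in for whatever per-cpu metrics the patch set ends up providing):

#include <stdbool.h>

/* a cpu is "full" once its utilization reaches its capacity */
static bool cpu_is_full(unsigned long usage, unsigned long capacity)
{
	return usage >= capacity;
}

/*
 * With usage and capacity in the same normalized units, the spare
 * capacity is directly comparable across cpus.
 */
static unsigned long spare_capacity(unsigned long usage,
				    unsigned long capacity)
{
	return usage < capacity ? capacity - usage : 0;
}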
But I think your asking this question shows a 'bigger' problem, in that the Changelogs are entirely failing to describe the actual problem and proposed solution. If that were clear, I don't think we would be having this particular discussion.
Yes, the bigger problem of scaling things with DVFS and taking big.LITTLE into account is not addressed in this patch set. This is the scale-invariance problem that we discussed at Ksummit.
big.LITTLE is factored into this patch set, but DVFS is not. My bad.
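For reference, the kind of frequency compensation the scale-invariance discussion is about could look roughly like this (a sketch, not part of this patch set; the current and maximum frequencies would have to come from some architecture hook):

/*
 * Sketch only: scale an accumulated running-time delta by the current
 * frequency relative to the cpu's maximum, so e.g. 10ms of running at
 * 500MHz counts as 5ms at a 1GHz maximum.
 */
#define SCALE_SHIFT	10	/* fixed point: 1024 == 100% */

static unsigned long long scale_delta_by_freq(unsigned long long delta_ns,
					      unsigned long curr_freq_khz,
					      unsigned long max_freq_khz)
{
	unsigned long long scale;

	scale = ((unsigned long long)curr_freq_khz << SCALE_SHIFT) / max_freq_khz;
	return (delta_ns * scale) >> SCALE_SHIFT;
}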
Morten