On 21 April 2015 at 18:58, Michael Turquette <mturquette@linaro.org> wrote:
Quoting Juri Lelli (2015-04-16 09:46:47)
On 16/04/15 06:29, Michael Turquette wrote:
+#define UP_THRESHOLD 95
Is this a leftover? In the changelog you say that you moved away from thresholds. Anyway, since we scale utilization by frequency, I'm not sure we can live without some sort of up_threshold. The problem is that if you run a task flat out on a CPU at a certain frequency, say the lowest one, you will always get a usage for that CPU that corresponds to the capacity of that CPU at that frequency. Since you use the usage signal to decide when to ramp up, you will never ramp up in this situation, because the signal cannot go above the capacity available at that lower frequency.
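To illustrate, here is a toy, compilable sketch (made-up names and numbers, not kernel code) of why the signal saturates: busy time gets weighted by curr_freq/max_freq, so a task that is 100% busy at half speed converges to exactly the capacity at half speed and never goes past it.

#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

/* Frequency-invariant accounting: weight busy time by the frequency it ran at. */
static unsigned long scale_busy_time(unsigned long busy, unsigned long curr_freq,
                                     unsigned long max_freq)
{
        return busy * curr_freq / max_freq;
}

int main(void)
{
        unsigned long max_freq = 1200000, curr_freq = 600000;

        /* A task that is 100% busy over the window: raw busy time == SCHED_LOAD_SCALE. */
        unsigned long usage = scale_busy_time(SCHED_LOAD_SCALE, curr_freq, max_freq);
        unsigned long curr_capacity = SCHED_LOAD_SCALE * curr_freq / max_freq;

        /* usage == curr_capacity (512 here), so the signal can never exceed it. */
        printf("usage=%lu capacity@curr_freq=%lu\n", usage, curr_capacity);
        return 0;
}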
Juri & Morten,
Yes, the UP_THRESHOLD constant is a leftover.
We discussed the issue of usage being capped at the current capacity on our call yesterday, but I have some doubts. Let's forget big.LITTLE for a moment and talk about an SMP system. On my Pandaboard I clearly see usage values, taken directly from get_cpu_usage, that scale up and down through the whole range (and as a result the CPU frequencies selected cover the whole range).
My current testing involves short-running tasks that are quickly queued and dequeued, not a long-running task as you suggest. Does cfs.utilization_load_avg behave differently depending on task length?
Can you please explain why you think the return value of get_cpu_usage will not exceed the current capacity? I do not observe this behavior. Do you see this when testing only my branch, or when merging my branch with the EAS v3 series?
Vincent,
The value of cfs.utilization_load_avg is already normalized against the max possible capacity, right? I do not believe that the return value of get_cpu_usage is capped at the current capacity, but please let me know if I have a misunderstanding.
You're right: get_cpu_usage is only capped by the max capacity. Nevertheless, with the frequency-invariance patches the utilization of a sched entity is capped by the current capacity, so the usage, which is the sum of the sched_entity utilizations, will also be capped by the current capacity.
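To put the same thing in pseudo-C (illustrative names only, not the actual code): the only explicit clamp on the per-CPU sum is the max capacity, but every term in the sum has already been scaled by curr_freq/max_freq, so the result cannot in practice climb above the capacity at the current frequency.

#define SCHED_LOAD_SCALE 1024UL

/* Sketch, not the real get_cpu_usage(): the sum is clamped at max capacity
 * only, but each term is already frequency-scaled, so the sum stays at or
 * below the capacity at the current frequency anyway. */
static unsigned long cpu_usage_sketch(const unsigned long *se_util, int nr_se)
{
        unsigned long sum = 0;
        int i;

        for (i = 0; i < nr_se; i++)
                sum += se_util[i];              /* each term is freq-scaled */

        /* Explicit cap: max capacity only. */
        return sum > SCHED_LOAD_SCALE ? SCHED_LOAD_SCALE : sum;
}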
We could solve this problem by putting the up threshold back. As soon as you cross it you go to max, and then adapt, choosing the right capacity for the actual, non-capped utilization of the task.
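Roughly the logic I have in mind, as a sketch only (made-up names, not a patch): compare usage against a percentage of the current capacity; once it is crossed, jump to max so the no-longer-capped signal can show the real demand, and then settle on the capacity that matches it.

#define UP_THRESHOLD    95      /* percent of current capacity */

/* Sketch of the threshold idea: once usage gets within UP_THRESHOLD percent
 * of the capacity at the current frequency, request max capacity; otherwise
 * request a capacity that just covers the observed usage. */
static unsigned long pick_capacity(unsigned long usage,
                                   unsigned long curr_capacity,
                                   unsigned long max_capacity)
{
        if (usage * 100 >= curr_capacity * UP_THRESHOLD)
                return max_capacity;    /* ramp up so usage can un-cap */

        return usage;                   /* then track the real demand */
}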
Juri,
In my testing so far I have not seen a reason to add a threshold back in. I'm OK to do so but I need to be convinced. I did not exactly understand your point on the call yesterday so maybe we can figure it out here on the list.
Thanks a lot,
Mike