Hi Patrick,
On Thu, Jun 23, 2016 at 09:54:13AM +0100, Patrick Bellasi wrote:
On 14-Jun 21:42, Leo Yan wrote:
Hi Patrick,
Hi Leo,
[ + eas-dev ]
I have a general question about how to define the SchedTune threshold array for payoff. Basically I want to check the questions below:
Every CGroup has its own perf_boost_idx for the PB region and perf_constrain_idx for the PC region. Do you have any suggestion or guideline for defining these indexes?
And for different CGroups like "background", "foreground" or "performance", should every CGroup have its own dedicated index, or can the platform share the same index value?
How should the values in the "threshold_gains" array be defined?
IIUC this array is platform dependent, but what is a reasonable method to generate this table? Is there some suggested testing for generating it?
Or is my understanding wrong and this array is actually fixed, so that adjusting perf_boost_idx/perf_constrain_idx per platform is enough?
So far we cannot set these payoff parameters (including perf_boost_idx/perf_constrain_idx and threshold_gains) dynamically from sysfs, so how can we initialize these values per platform? I suppose currently we can only set them in the kernel's init flow, right?
I think, in the end, all these questions boil down to a single one: is threshold_params a platform dependent or independent array?
Well, my original view was for this array to be NOT platform dependent at all. It is actually defined just as an "implementation detail" to speed up the __schedtune_accept_deltas function.
Consider for a moment that you do not have this array. In this case the PE space is still defined, as are the PB and PC cuts. These cuts are just in a continuous space, but the concept still holds: the more you boost a task, the more energy you accept to pay for a smaller performance gain.
To be honest, the sentence above is a very good explanation for understanding the PB cuts, but I still struggle to understand the PC cuts :) I usually think the (PC) region is used to make sure performance will not degrade too much when we cannot see a significant power saving.
That's the basic idea, which then translates into some implementation considerations: does it make sense to distinguish between tasks boosted 61% or 62%? Probably not, thus the PB and PC spaces can be discretized to allow a faster check for a schedule candidate being over or below the cut. An easy way to define a finite and simple set of cuts was to consider only points which are 1 unit apart on the Perf and Energy axes. That's how that table has been defined. Again, there are no platform related considerations in building that table.
Thanks for the clear explanation. I have no more questions from the design perspective, but I still have concerns about how to use it on a SoC. Let's look at the implementation in more detail:
If boost = 0, the PE cut will be vertical, so only the (O) and (PC) regions are kept. If boost = 5% or 10%, the current threshold_params will cut into both the (PB) and (PC) regions; this is hard to understand.
I think a more reasonable method would be to tilt the cut gradient slightly to the right of vertical, so that we keep almost the whole (PC) region while also giving some chance to the (PB) region. If so, then threshold_params should be:
static struct threshold_params threshold_gains[] = {
	{ 1, 5 }, /* >=  0% */
	{ 2, 5 }, /* >= 10% */
	{ 3, 5 }, /* >= 20% */
	{ 4, 5 }, /* >= 30% */
	{ 5, 5 }, /* >= 40% */
	{ 5, 4 }, /* >= 50% */
	{ 5, 3 }, /* >= 60% */
	{ 5, 2 }, /* >= 70% */
	{ 5, 1 }, /* >= 80% */
	{ 5, 0 }  /* >= 90% */
};
perf_boost_idx and perf_constrain_idx would then have the same value and thus the same gradient, so as boost grows the cut shifts step by step from a high gradient to a low one. What do you think about this?
However, you can argue that the optimal boost value for a task is somehow platform dependent. I would say it is more use-case dependent. Thus, boosting all the foreground tasks with the same value is probably not the best way to go. Right, but that's the reason why we support the possibility to define multiple CGroups.
Each cgroup can be used to define the boost value which has been found to be optimal for certain use-cases running on a specific platform.
These are the ideas behind the original design, but if you have a different view let's talk about it. Maybe some more specific examples/use-cases can help describe the need for a different approach.
Agreed. The payoff is not meant to solve the optimality problem; that is handled by the CGroups. Use-cases are important, and I will try to gather related info if possible.
Thanks, Leo Yan
Cheers Patrick
-- #include <best/regards.h>
Patrick Bellasi