On 14-Jun 21:42, Leo Yan wrote:
Hi Patrick,
Hi Leo,
[ + eas-dev ]
Here I have a general question about how to define the SchedTune threshold array for payoff. Basically I want to check the questions below:
Every CGroup has its own perf_boost_idx for the PB region and perf_constrain_idx for the PC region. Do you have any suggestion or guideline for defining these indexes?
And for different CGroups like "background", "foreground" or "performance": should every CGroup have its own dedicated index, or can the whole platform share the same index value?
How should the values in the "threshold_gains" array be defined?
IIUC this array is platform dependent, but what is a reasonable method to generate this table? Is there some suggested testing for generating it?
Or is my understanding wrong and this array is fixed, so that just adjusting perf_boost_idx/perf_constrain_idx per platform is enough?
So far we cannot set these payoff parameters (including perf_boost_idx/perf_constrain_idx and threshold_gains) dynamically from sysfs, so how can we initialize these values for a specific platform? I suppose for now we can only set them in the kernel's init flow, right?
I think in the end all these questions boil down to a single one: is the threshold_params array platform dependent or independent?
Well, my original view was for this array to NOT be platform dependent at all. It is actually defined just as an "implementation detail" to speed up the __schedtune_accept_deltas function.
Let's consider for a moment that you do not have this array. In this case the PE space is still defined, as well as the PB and PC cuts. These cuts are just in a continuous space, but still, from a conceptual standpoint: the more you boost a task, the bigger the energy increase you accept to pay for a smaller amount of performance.
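To make the cut concrete, here is a minimal sketch (the function name and signature are hypothetical, not the kernel's __schedtune_accept_deltas) of what the acceptance test in the Performance Boost region conceptually looks like: a candidate trading an energy increase nrg_delta for a capacity increase cap_delta is accepted when its performance-per-energy ratio is above the cut, which can be checked without divisions by cross-multiplying:

	/*
	 * Illustrative sketch only: acceptance test for a candidate in the
	 * Performance Boost (PB) region, i.e. nrg_delta > 0 && cap_delta > 0.
	 *
	 * The cut is the ratio cap_gain/nrg_gain: accept when
	 *     cap_delta / nrg_delta >= cap_gain / nrg_gain
	 * which, cross-multiplied to avoid divisions, becomes
	 *     cap_delta * nrg_gain >= nrg_delta * cap_gain
	 */
	static int accept_boosted_candidate(int nrg_delta, int cap_delta,
					    int nrg_gain, int cap_gain)
	{
		int payoff = cap_delta * nrg_gain - nrg_delta * cap_gain;

		return payoff >= 0;
	}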
That's the basic idea, which then translates into some implementation considerations: does it make sense to distinguish between tasks boosted 61% or 62%? Probably not, thus the PB and PC spaces can be discretized to get a faster check on whether a scheduling candidate is above or below the cut. An easy way to define a finite and simple set of cuts was to consider only points which are just 1 unit apart on the Perf and Energy axes. That's how that table has been defined. Again, there were no platform related considerations in building that table.
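For reference, a table discretized this way, with cuts one unit apart, could look like the following sketch, along the lines of the threshold_gains array in schedtune.c (treat the exact values as illustrative):

	struct threshold_params {
		int nrg_gain;	/* energy increase the cut tolerates */
		int cap_gain;	/* capacity increase the cut demands */
	};

	/* One cut per 10% of boost: walk the energy axis first (0..5),
	 * then the performance axis (5..0), each step being 1 unit. */
	static struct threshold_params threshold_gains[] = {
		{ 0, 5 }, /* boost <  10%: accept no energy increase */
		{ 1, 5 }, /* boost <  20% */
		{ 2, 5 }, /* boost <  30% */
		{ 3, 5 }, /* boost <  40% */
		{ 4, 5 }, /* boost <  50% */
		{ 5, 4 }, /* boost <  60% */
		{ 5, 3 }, /* boost <  70% */
		{ 5, 2 }, /* boost <  80% */
		{ 5, 1 }, /* boost <  90% */
		{ 5, 0 }, /* boost <= 100%: accept any energy increase */
	};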
However, you can argue that the optimal boost value for a task is somehow platform dependent. I would say that it is more use-case dependent. Thus, boosting all the foreground tasks with the same value is probably not the best way to go. Right, but that's the reason why we support the possibility to define multiple CGroups.
Each CGroup can be used to define the boost value which has been found to be optimal for certain use-cases running on a specific platform.
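As a hypothetical sketch of how this ties together (the struct and helper below are illustrative, not the actual SchedTune API): each CGroup's configured boost percentage can simply select its index into the fixed table, so the table itself needs no per-platform tuning:

	/* Hypothetical per-cgroup state, mirroring the idea of one
	 * perf_boost_idx/perf_constrain_idx pair per SchedTune CGroup. */
	struct st_group {
		int boost;		/* boost percentage for this cgroup */
		int perf_boost_idx;	/* index into threshold_gains, PB cuts */
		int perf_constrain_idx;	/* index into threshold_gains, PC cuts */
	};

	/* Map the configured boost percentage to a cut: one table entry
	 * covers a 10% wide boost range, so a plain division selects it. */
	static void st_group_update_idx(struct st_group *st)
	{
		int idx = st->boost / 10;

		/* clamp 100% into the last entry of the 10-element table */
		if (idx > 9)
			idx = 9;
		st->perf_boost_idx = idx;
		st->perf_constrain_idx = idx;
	}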
These are the ideas at the base of the original design, but if you have a different view, let's talk about it. Maybe some more specific examples/use-cases can help describe the need for a different approach.
Thanks, Leo Yan
Cheers Patrick
-- #include <best/regards.h>
Patrick Bellasi