On 27-Jun 15:34, Leo Yan wrote:
On Fri, Jun 24, 2016 at 04:21:03PM +0100, Patrick Bellasi wrote:
[...]
I think all these questions boil down to a single one: is the threshold_params array platform dependent or independent?
Well, my original view was for this array to be NOT platform dependent at all. It is actually defined just as an "implementation detail" to speed up the __schedtune_accept_deltas function.
Let's consider for a moment that you do not have this array. In this case the PE space is still defined, as are the PB and PC cuts. These cuts are just in a continuous space, but still, from a conceptual standpoint: the more you boost a task, the bigger the energy cost you accept to pay for a smaller performance gain.
To be honest, the sentence above is a very good explanation for understanding the PB cuts. But I still struggle to understand the PC cuts :) I usually think the (PC) region is used to make sure performance will not be degraded too much if we cannot see a significant power saving.
You are right. Thus, continuing the previous example: the more you boost a task, the more energy you have to save to justify impacting its performance. Can you see that as a description of a PC cut?
Yes. There are three factors to understand the payoff: boost margin, performance, and energy.
That's the basic idea, which then translates into some implementation considerations: does it make sense to distinguish between tasks boosted 61% or 62%? Probably not, thus the PB and PC spaces can be discretized to allow a faster check for whether a scheduling candidate is above or below the cut. An easy way to define a finite and simple set of cuts was to consider only points which are 1 unit apart on the Perf and Energy axes. That's how that table has been defined. Again, there are no platform-related considerations in building that table.
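To make the discretization concrete, here is a minimal, self-contained sketch (not the actual kernel code) of how such a table of (nrg_gain, cap_gain) pairs turns the continuous cut into a cheap integer check. The payoff form follows the __schedtune_accept_deltas walk-through later in this thread; the accept-on-non-negative-payoff threshold and the index clamp are assumptions:

#include <stdio.h>

/* One table entry: weights applied to the capacity and energy deltas. */
struct threshold_params {
	int nrg_gain;
	int cap_gain;
};

/* One entry per 10% boost step, i.e. cuts 1 unit apart on each axis. */
static struct threshold_params threshold_gains[] = {
	{ 0, 4 }, { 0, 4 }, { 1, 4 }, { 2, 4 }, { 3, 4 },
	{ 4, 3 }, { 4, 2 }, { 4, 1 }, { 4, 0 }, { 4, 0 },
};

/* Accept a candidate when its weighted payoff is non-negative. */
static int accept_candidate(int nrg_delta, int cap_delta, int boost)
{
	int idx = boost / 10;

	if (idx > 9)	/* clamp boost=100; see the overflow discussed below */
		idx = 9;

	return cap_delta * threshold_gains[idx].nrg_gain -
	       nrg_delta * threshold_gains[idx].cap_gain >= 0;
}

int main(void)
{
	/* boost=50%: pay +10 energy for +20 capacity: 20*4 - 10*3 = 50 >= 0 */
	printf("%d\n", accept_candidate(10, 20, 50)); /* prints 1: accept */
	return 0;
}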
Thanks for the clear explanation. I have no more questions from the design perspective, but I still have concerns about how to use it on a SoC. Let's look at the implementation in more detail:
If boost = 0 then PE will be a vertical cut, which keeps only the (O) and (PC) regions. If boost = 5% or 10%, with the current threshold_params it will cut into both the (PB) and (PC) regions; this is hard to understand.
The most recent version of the cuts we are using is: http://www.linux-arm.org/git?p=linux-pb.git%3Ba=blob%3Bf=kernel/sched/tune.c...
Why doesn't this code base apply the patch for the PE filter issues we found [1]? I just want to confirm we are discussing the same code base.
[1] https://lists.linaro.org/pipermail/eas-dev/2016-May/000428.html
You're right; internally we use a code base which includes the patch from that discussion. The link I provided before was just what we released as v5.2, which at that time did not include the PE filter patch. The link was just meant to point you to the version of the threshold_gains table we are using.
static struct threshold_params threshold_gains[] = {
	{ 0, 4 }, /* >=  0% */
	{ 0, 4 }, /* >= 10% */
	{ 1, 4 }, /* >= 20% */
	{ 2, 4 }, /* >= 30% */
	{ 3, 4 }, /* >= 40% */
	{ 4, 3 }, /* >= 50% */
	{ 4, 2 }, /* >= 60% */
	{ 4, 1 }, /* >= 70% */
	{ 4, 0 }, /* >= 80% */
	{ 4, 0 }  /* >= 90% */
};
This table defines a vertical cut for boost values up to 19%. In other words, up to 19% boost we are in a boost "dead zone" where we bias only OPP selections, without allowing any increase in energy consumption. The same table defines that above 79% boost we are in a sort of "accept all" zone where we accept all the scheduling candidates which provide a capacity increase, without caring about energy variations. All the boost values between 20% and 79% define different performance-energy trade-offs, with the PB and PC regions cut with the same gradient.
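To see why those entries behave as a "dead zone" and an "accept all" zone, one can work the payoff through the table (assuming, as in the walk-through later in this thread, that a candidate is accepted when payoff >= 0):

For boost < 20%, entry { 0, 4 }:
	payoff = cap_delta * 0 - nrg_delta * 4 = -4 * nrg_delta
	==> accept only if nrg_delta <= 0: a vertical cut, no energy increase allowed

For boost >= 80%, entry { 4, 0 }:
	payoff = cap_delta * 4 - nrg_delta * 0 = 4 * cap_delta
	==> accept any candidate with cap_delta >= 0, whatever the energy cost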
Just a reminder: with the code below, I think the PB and PC gradients are _NOT_ the same.
int
sysctl_sched_cfs_boost_handler(struct ctl_table *table, int write,
			       void __user *buffer, size_t *lenp,
			       loff_t *ppos)
{
	int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);

	if (ret || !write)
		return ret;

	/* Performance Boost (B) region threshold params */
	perf_boost_idx = sysctl_sched_cfs_boost;
	perf_boost_idx /= 10;

	/* Performance Constraint (C) region threshold params */
	perf_constrain_idx = 100 - sysctl_sched_cfs_boost;
	                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Don't remember why this ended up like this... but this seems completely broken! :-(
There are two main issues:

1) The constraints do not have the same gradient; e.g. for a boost of 20%:
     perf_boost_idx     = 2 ==> (nrg_gain: 1, cap_gain: 4)
     perf_constrain_idx = 8 ==> (nrg_gain: 4, cap_gain: 0)
2) We overflow the threshold_gains array for boost=100.
The second issue is due to a check I forgot to port in one of the many rewrites...
The first issue is much worse; it's quite likely an implementation error. I've always considered the two margins to have the same gradient, with the exact behavior you described in the plot you shared as an attachment.
Right now, for boost=0 we basically accept only candidates in the O region, which is NOT what we would like: for boost=0 we would like to behave as a "standard" EAS, which optimizes just for energy reduction without constraints on performance impact.
	perf_constrain_idx /= 10;

	return 0;
}
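For reference, here is a hypothetical fix for both issues, a sketch rather than a tested patch: it assumes the intended behavior is same-gradient cuts (perf_constrain_idx following the boost value just like perf_boost_idx, as in the table discussion below) plus a boundary clamp so that boost=100 does not overflow the table:

int
sysctl_sched_cfs_boost_handler(struct ctl_table *table, int write,
			       void __user *buffer, size_t *lenp,
			       loff_t *ppos)
{
	int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);

	if (ret || !write)
		return ret;

	/* Same gradient for both regions: one table entry per 10% boost */
	perf_boost_idx = sysctl_sched_cfs_boost / 10;
	perf_constrain_idx = sysctl_sched_cfs_boost / 10;

	/* Clamp boost=100 into the last table entry instead of overflowing */
	if (perf_boost_idx > 9)
		perf_boost_idx = perf_constrain_idx = 9;

	return 0;
}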
I think a more reasonable method is to shift the cut gradient from vertical slightly to the right, so we can keep most of the (PC) region while also giving the (PB) region some chance. If so, then
I cannot really get this point. What's the goal?
To demonstrate my idea more easily, please see the plot I drew for the PE filter regions:
https://people.linaro.org/~leo.yan/PE_Filter_Regions.png
For boost=0, the PE filter cut is vertical at the left of the X axis. If boost=5, the cut gradient rotates to the right, which enables part of the PB region and removes some of the PC region. If boost=100, the cut gradient finally rotates to horizontal, which totally enables the PB region and removes the whole PC region.
That's exactly the original idea of how the PE region cuts should work. Thanks for sharing these plots, they are quite useful. I would like to add something similar to LISA...
Essentially, the table below tries to implement this idea properly. Loosening or restricting a specific margin is not my purpose; rather, I want to figure out whether we can define a more regular trend for the PE filter regions.
Not sure updating the table is enough; it's basically "just" increasing the granularity of the cuts near the 0% and 100% boost values...
The threshold_params table should be:
static struct threshold_params threshold_gains[] = {
	{ 1, 5 }, /* >=  0% */
For boost=0 we would have:
	/* Performance Boost (B) region threshold params */
	perf_boost_idx = sysctl_sched_cfs_boost;
	perf_boost_idx /= 10;
	==> perf_boost_idx = 0
	====> nrg_gain: 1
	====> cap_gain: 5
	/* Performance Constraint (C) region threshold params */
	perf_constrain_idx = 100 - sysctl_sched_cfs_boost;
	perf_constrain_idx /= 10;
	==> perf_constrain_idx = 10 (overflow)
	==> perf_constrain_idx = 9 (once fixed with boundary checks)
	====> nrg_gain: 5
	====> cap_gain: 0
Thus, for example, a scheduling candidate which corresponds to a 50% decrease in both energy and capacity would return:
__schedtune_accept_deltas(int nrg_delta, int cap_delta, int perf_boost_idx, int perf_constrain_idx)
gain_idx = perf_constrain_idx ==> 9
payoff = cap_delta * threshold_gains[gain_idx].nrg_gain; ==> -50 * 5
payoff -= nrg_delta * threshold_gains[gain_idx].cap_gain; ==> -250 - (-50 * 0)
==> payoff: -250 ==> REJECT
And that is wrong, because we would expect to accept a candidate which reduces energy by 50%, regardless of the 50% impact on performance.
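For contrast, reworking the same candidate under the proposed table and same-gradient indexing (both assumptions based on the fix discussed above), boost=0 would select entry { 1, 5 } and give:

	gain_idx = perf_constrain_idx ==> 0
	payoff  = cap_delta * threshold_gains[gain_idx].nrg_gain; ==> -50 * 1
	payoff -= nrg_delta * threshold_gains[gain_idx].cap_gain; ==> -50 - (-50 * 5)
	==> payoff: 200 ==> ACCEPT

which matches the expected "standard" EAS behavior of accepting energy reductions at boost=0.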
I think that the current solution can have some bad impacts both at lower boost values, by forbidding tasks from being spread, and at higher boost values, by allowing small energy savings while heavily impacting performance.
	{ 2, 5 }, /* >= 10% */
	{ 3, 5 }, /* >= 20% */
	{ 4, 5 }, /* >= 30% */
	{ 5, 5 }, /* >= 40% */
	{ 5, 4 }, /* >= 50% */
	{ 5, 3 }, /* >= 60% */
	{ 5, 2 }, /* >= 70% */
	{ 5, 1 }, /* >= 80% */
	{ 5, 0 }  /* >= 90% */
};
And perf_boost_idx and perf_constrain_idx will have the same value, hence the same gradient, so the cut will shift step by step from a high gradient to a low gradient. What do you think of this?
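To make the trend explicit: with the payoff form used earlier, the cut boundary in the (nrg_delta, cap_delta) plane has slope cap_gain/nrg_gain (assuming accept means payoff >= 0), so the proposed table does rotate the cut step by step:

	{ 1, 5 } ==> slope 5/1 = 5.0  (near vertical)
	{ 5, 5 } ==> slope 5/5 = 1.0  (45 degrees)
	{ 5, 1 } ==> slope 1/5 = 0.2  (near horizontal)
	{ 5, 0 } ==> slope 0          (horizontal: accept any capacity gain)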
AFAIU this new table basically: a) removes the "dead zone" up to 19% boost; b) reduces the "accept all" region to boost values >90%.
Sounds like we should run some experiments and benchmarks to check how different this setup is from the previous one. At first glance it seems to be a little more aggressive at low boost values and more conservative at high boost values.
However, one can argue that the optimal boost value for a task is somehow platform dependent. I would say that it is more use-case dependent; thus, boosting all the foreground tasks with the same value is probably not the best way to go. Right, but that's the reason why we support the possibility to define multiple CGroups.
Each cgroup can be used to define the boost value which has been found to be optimal for certain use-cases running on a specific platform.
These are the ideas at the base of the original design, but if you have a different view, let's talk about it. Maybe some more specific examples/use-cases can help describe the need for a different approach.
Agreed. The payoff is not meant to resolve the optimality issue; that is done via CGroups. Use-cases are important, and I will try to gather related info if possible.
We need to set up an evaluation exercise with reproducible benchmarks to properly evaluate these variations.
-- #include <best/regards.h>
Patrick Bellasi