On 11-Oct 21:37, Leo Yan wrote:
On Tue, Oct 11, 2016 at 12:35:26PM +0100, Patrick Bellasi wrote:
On 10-Oct 16:35, Leo Yan wrote:
Energy aware scheduling considers the system to be over its tipping point when any CPU in the system is overutilized. There are several occasions on which the root domain's overutilized flag can be set to indicate the system is over the tipping point, e.g. the scheduler tick, load balance and task enqueue. On the other hand, the scheduler relies only on the load balance function update_sg_lb_stats(), which iterates over all CPUs, to verify that no CPU is overutilized and to clear the flag once the system is back under the tipping point.
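For reference, the over-utilization test this relies on (as defined in the EAS patch sets of this era; exact helper names may differ between trees) looks roughly like:

	/* A CPU is over-utilized once its utilization crosses ~80% of its capacity */
	static bool cpu_overutilized(int cpu)
	{
		return (capacity_of(cpu) * 1024) < (cpu_util(cpu) * capacity_margin);
	}

with capacity_margin = 1280, i.e. the tipping point sits at 1024/1280 = 80% of the CPU's capacity.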
An idle CPU keeps a stale utilization value which is not updated until the CPU wakes up. In the worst case the CPU may stay idle for a very long time (even seconds), so if it had a high utilization value just before entering idle, the scheduler will keep considering it "overutilized" and will never switch back to the under-tipping-point state. As a result, a very small task can stay on a big core for a long time because the system cannot go back to the energy aware path.
What happens instead if a busy CPU has just entered idle but is likely to exit quite soon?
If CPUs can update their utilization in a timely manner, the small task usually migrates back to a LITTLE core quickly.
I was referring mainly to the case of, for example, an 80% task with a short period (e.g. 16ms) which has just gone to sleep. Such a task has a PELT signal which varies in [800:860] once stable.
In this case the CPU can appear to be idle, but by the time the task wakes up we will not have decayed its utilization below the tipping point.
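(For reference: PELT decays a blocked signal geometrically with a ~32ms half-life, so after t ms without running we get roughly util(t) ~= util(0) * 2^(-t/32); whether the signal drops below the tipping point before the next wakeup therefore depends on both the sleep time and the CPU's capacity.)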
This patch checks the CPU idle state in update_sg_lb_stats(): if a CPU is idle it is simply considered not overutilized, so idle CPUs can no longer set the tipping point.
Maybe it's possible, just for idle CPUs marked as overutilized, to trigger at this point an update_cfs_rq_load_avg() and then verify whether the utilization signal has decayed enough for the CPU to be considered not overutilized anymore?
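Something along these lines, as a rough and untested sketch (the exact signature and locking requirements of update_cfs_rq_load_avg() vary between kernel versions, so take the calls below with a grain of salt):

		if (cpu_overutilized(i) && idle_cpu(i)) {
			unsigned long flags;

			/*
			 * Refresh the idle CPU's decayed utilization so that
			 * the existing cpu_overutilized(i) check below sees
			 * an up-to-date value.
			 */
			raw_spin_lock_irqsave(&rq->lock, flags);
			update_cfs_rq_load_avg(rq_clock_task(rq), &rq->cfs);
			raw_spin_unlock_irqrestore(&rq->lock, flags);
		}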
Yes. Essentially this issue is caused by the util_avg signal not being updated in time, so it's better to find a method to update the idle CPU's utilization properly.
IIRC, Morten reminded me in an earlier email that if we want to update an idle CPU's load_avg we should acquire the rq's lock.
True, but that's an idle CPU, thus it should not be a major issue. Unless we already hold the lock of another CPU, but I don't think that's our case (see below).
Another thing to consider is that an idle CPU's stale utilization does not only affect this case; it is also misleading for CPU selection in the wakeup path. That's why I previously sent another patch to update an idle CPU's utilization with update_blocked_averages() [1].
[1] https://lists.linaro.org/pipermail/eas-dev/2016-September/000567.html
Exactly, does it make sense to use the same update_blocked_averages() for an idle CPU at this point?
But I think patch [1] is hard to get accepted because it introduces a race condition on the rq's locks. That is why I went back to the simplest method to resolve part of the issue in this patch.
AFAICS, at least on a 3.18 kernel, these are the call paths which end up in update_sg_lb_stats():
   idle_balance          /* releases this_cpu's lock */
   rebalance_domains     /* does not hold any RQ lock */
      load_balance
         find_busiest_group
            update_sg_lb_stats
To me it seems that we always enter update_sg_lb_stats() without holding any RQ lock. Am I missing something?
Otherwise, what are the races we expect if we try to get the RQ lock for an idle CPU from within update_sg_lb_stats?
What's your suggestion for this?
Maybe Dietmar's observations regarding [1] were related to the fact that in that patch we were trying to update the blocked loads from the wakeup path?
Whereas here we are in the balance path, thus quite likely the overhead introduced to update idle CPUs' utilization can be considered acceptable, especially considering that otherwise we miss optimization opportunities.
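FWIW, the simplest form of that I can think of would be something like the following in update_sg_lb_stats(), just before the over-utilization check (rough, untested sketch; update_blocked_averages() takes the remote CPU's rq->lock internally, which is exactly the overhead discussed above):

		/*
		 * Refresh the possibly stale utilization of idle CPUs before
		 * the cpu_overutilized(i) check below.
		 */
		if (idle_cpu(i))
			update_blocked_averages(i);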
Cheers Patrick
Signed-off-by: Leo Yan <leo.yan@linaro.org>
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 937eca2..43eae09 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7409,7 +7409,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if (!nr_running && idle_cpu(i))
 			sgs->idle_cpus++;
 
-		if (cpu_overutilized(i)) {
+		if (cpu_overutilized(i) && !idle_cpu(i)) {
 			*overutilized = true;
 			if (!sgs->group_misfit_task && rq->misfit_task)
 				sgs->group_misfit_task = capacity_of(i);
--
1.9.1
--
#include <best/regards.h>

Patrick Bellasi