In the tick functions, sched_freq_tick_{pelt|walt}(), neither variant takes the SchedTune boost margin into account when setting the CPU frequency. For example, when a task is enqueued onto the rq the boost margin is applied, but once a later tick is triggered the code falls back to the raw CPU utilization value rather than the boosted value.
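For context, SchedTune derives the boosted utilization by reserving a share of the CPU's spare capacity as a margin on top of the raw utilization. A minimal sketch of that computation follows (the margin formula and the schedtune_cpu_boost() helper are assumptions drawn from the SchedTune code in this tree, not part of this patch):

  /*
   * Sketch: how a boosted utilization such as the one returned by
   * boosted_cpu_util() can be derived. Assumes schedtune_cpu_boost()
   * returns the boost value in percent [0..100] for this CPU.
   */
  static unsigned long sketch_boosted_cpu_util(int cpu)
  {
  	unsigned long util = cpu_util(cpu);
  	int boost = schedtune_cpu_boost(cpu);
  	unsigned long margin;

  	/* Reserve boost% of the remaining headroom as extra margin. */
  	margin = (SCHED_CAPACITY_SCALE - util) * boost / 100;

  	return util + margin;
  }

The point of the fix is that this boosted value, not the raw cpu_util(), should keep driving the frequency request at tick time.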
Another error is that we need to convert the capacity request from the normalized utilization value into a ratio value in [0..1024], where the ratio expresses the required capacity relative to the CPU's maximum capacity.
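As a worked example (the CPU capacity value here is hypothetical, roughly a little core on a big.LITTLE system): for a CPU with capacity_orig_of(cpu) == 447 and a utilization of 300, the conversion gives

  req_cap = cpu_utilization * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu)
          = 300 * 1024 / 447
          ~= 687

so the governor is asked for ~687/1024 (about 67%) of this CPU's own maximum capacity, rather than the raw value 300, which is normalized against the system-wide maximum capacity scale.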
So this patch fixes these two errors. Please note this patch does not build successfully as-is, since some code rework still needs to be done; it is being sent for discussion first, and formal patches will be generated once a conclusion is reached.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 kernel/sched/core.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 10f36e2..6f9433e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2947,28 +2947,32 @@ unsigned long sum_capacity_reqs(unsigned long cfs_cap,
 
 static void sched_freq_tick_pelt(int cpu)
 {
-	unsigned long cpu_utilization = capacity_max;
+	unsigned long cpu_utilization = boosted_cpu_util(cpu);
 	unsigned long capacity_curr = capacity_curr_of(cpu);
 	struct sched_capacity_reqs *scr;
+	unsigned long req_cap;
 
 	scr = &per_cpu(cpu_sched_capacity_reqs, cpu);
 	if (sum_capacity_reqs(cpu_utilization, scr) < capacity_curr)
 		return;
 
+	req_cap = cpu_utilization * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);
+
 	/*
 	 * To make free room for a task that is building up its "real"
 	 * utilization and to harm its performance the least, request
 	 * a jump to a higher OPP as soon as the margin of free capacity
 	 * is impacted (specified by capacity_margin).
 	 */
-	set_cfs_cpu_capacity(cpu, true, cpu_utilization);
+	set_cfs_cpu_capacity(cpu, true, req_cap);
 }
 
 #ifdef CONFIG_SCHED_WALT
 static void sched_freq_tick_walt(int cpu)
 {
-	unsigned long cpu_utilization = cpu_util(cpu);
+	unsigned long cpu_utilization = boosted_cpu_util(cpu);
 	unsigned long capacity_curr = capacity_curr_of(cpu);
+	unsigned long req_cap;
 
 	if (walt_disabled || !sysctl_sched_use_walt_cpu_util)
 		return sched_freq_tick_pelt(cpu);
@@ -2983,12 +2987,14 @@ static void sched_freq_tick_walt(int cpu)
 	if (cpu_utilization <= capacity_curr)
 		return;
 
+	req_cap = cpu_utilization * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);
+
 	/*
 	 * It is likely that the load is growing so we
 	 * keep the added margin in our request as an
 	 * extra boost.
 	 */
-	set_cfs_cpu_capacity(cpu, true, cpu_utilization);
+	set_cfs_cpu_capacity(cpu, true, req_cap);
 }
 
 #define _sched_freq_tick(cpu)	sched_freq_tick_walt(cpu)
-- 
1.9.1