Since the recent remote cpufreq callback work, it's possible that a cpufreq update is triggered from a remote CPU. For single policies, however, the current code uses the local CPU's counter when trying to determine if the remote sg_cpu entered idle or is busy. This is incorrect. To remedy this, compare with the nohz tick idle_calls counter of the remote CPU.
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Joel Fernandes <joelaf@google.com>
---
 include/linux/tick.h             |  1 +
 kernel/sched/cpufreq_schedutil.c |  2 +-
 kernel/time/tick-sched.c         | 13 +++++++++++++
 3 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/include/linux/tick.h b/include/linux/tick.h
index f442d1a42025..7cc35921218e 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -119,6 +119,7 @@ extern void tick_nohz_idle_exit(void);
 extern void tick_nohz_irq_exit(void);
 extern ktime_t tick_nohz_get_sleep_length(void);
 extern unsigned long tick_nohz_get_idle_calls(void);
+extern unsigned long tick_nohz_get_idle_calls_cpu(int cpu);
 extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
 extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
 #else /* !CONFIG_NO_HZ_COMMON */
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 2f52ec0f1539..d6717a3331a1 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -244,7 +244,7 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
 #ifdef CONFIG_NO_HZ_COMMON
 static bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu)
 {
-	unsigned long idle_calls = tick_nohz_get_idle_calls();
+	unsigned long idle_calls = tick_nohz_get_idle_calls_cpu(sg_cpu->cpu);
 	bool ret = idle_calls == sg_cpu->saved_idle_calls;
 	sg_cpu->saved_idle_calls = idle_calls;
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 99578f06c8d4..77555faf6fbc 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -985,6 +985,19 @@ ktime_t tick_nohz_get_sleep_length(void)
 	return ts->sleep_length;
 }
+/**
+ * tick_nohz_get_idle_calls_cpu - return the current idle calls counter value
+ * for a particular CPU.
+ *
+ * Called from the schedutil frequency scaling governor in scheduler context.
+ */
+unsigned long tick_nohz_get_idle_calls_cpu(int cpu)
+{
+	struct tick_sched *ts = tick_get_tick_sched(cpu);
+
+	return ts->idle_calls;
+}
+
 /**
  * tick_nohz_get_idle_calls - return the current idle calls counter value
  *
-- 
2.15.1.504.g5279b80103-goog
Since the remote cpufreq callback work, the cpufreq_update_util call can happen from remote CPUs. The comment about local CPUs is thus obsolete. Update it accordingly.
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Joel Fernandes <joelaf@google.com>
---
 kernel/sched/fair.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2fe3aa853e4d..1b10821d8380 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3020,9 +3020,7 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
 	/*
 	 * There are a few boundary cases this might miss but it should
 	 * get called often enough that that should (hopefully) not be
-	 * a real problem -- added to that it only calls on the local
-	 * CPU, so if we enqueue remotely we'll miss an update, but
-	 * the next tick/schedule should update.
+	 * a real problem.
 	 *
 	 * It will not get called when we go idle, because the idle
 	 * thread is a different class (!fair), nor will the utilization
-- 
2.15.1.504.g5279b80103-goog
On Fri, Dec 15, 2017 at 07:39:43AM -0800, Joel Fernandes wrote:
> Since the remote cpufreq callback work, the cpufreq_update_util call can
> happen from remote CPUs. The comment about local CPUs is thus obsolete.
> Update it accordingly.
>
> Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
> Signed-off-by: Joel Fernandes <joelaf@google.com>

Sure, ACK

> ---
>  kernel/sched/fair.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2fe3aa853e4d..1b10821d8380 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3020,9 +3020,7 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
>  	/*
>  	 * There are a few boundary cases this might miss but it should
>  	 * get called often enough that that should (hopefully) not be
> -	 * a real problem -- added to that it only calls on the local
> -	 * CPU, so if we enqueue remotely we'll miss an update, but
> -	 * the next tick/schedule should update.
> +	 * a real problem.
>  	 *
>  	 * It will not get called when we go idle, because the idle
>  	 * thread is a different class (!fair), nor will the utilization
> -- 
> 2.15.1.504.g5279b80103-goog
Commit-ID:  9783be2c0e90bbaceec3c471c4fb017bff7293ba
Gitweb:     https://git.kernel.org/tip/9783be2c0e90bbaceec3c471c4fb017bff7293ba
Author:     Joel Fernandes <joelaf@google.com>
AuthorDate: Fri, 15 Dec 2017 07:39:43 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 10 Jan 2018 11:30:30 +0100
sched/fair: Correct obsolete comment about cpufreq_update_util()
Since the remote cpufreq callback work, the cpufreq_update_util() call can happen from remote CPUs. The comment about local CPUs is thus obsolete. Update it accordingly.
Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Android Kernel <kernel-team@android.com>
Cc: Atish Patra <atish.patra@oracle.com>
Cc: Chris Redpath <Chris.Redpath@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: EAS Dev <eas-dev@lists.linaro.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Josef Bacik <jbacik@fb.com>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Ramussen <morten.rasmussen@arm.com>
Cc: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Rohit Jain <rohit.k.jain@oracle.com>
Cc: Saravana Kannan <skannan@quicinc.com>
Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Cc: Steve Muckle <smuckle@google.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vikram Mulukutla <markivx@codeaurora.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Link: http://lkml.kernel.org/r/20171215153944.220146-2-joelaf@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3e7606d..59e66a5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3020,9 +3020,7 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
 	/*
 	 * There are a few boundary cases this might miss but it should
 	 * get called often enough that that should (hopefully) not be
-	 * a real problem -- added to that it only calls on the local
-	 * CPU, so if we enqueue remotely we'll miss an update, but
-	 * the next tick/schedule should update.
+	 * a real problem.
 	 *
 	 * It will not get called when we go idle, because the idle
 	 * thread is a different class (!fair), nor will the utilization
find_idlest_group_cpu() goes through the CPUs of a group previously selected by find_idlest_group(). find_idlest_group() returns NULL if the local group is the selected one, and doesn't execute find_idlest_group_cpu() if the group to which 'cpu' belongs is chosen. So we're always guaranteed to call find_idlest_group_cpu() with a group to which 'cpu' is non-local. This makes one of the conditions in find_idlest_group_cpu() an impossible one, which we can get rid of.
Reviewed-by: Brendan Jackman <brendan.jackman@arm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Joel Fernandes <joelaf@google.com>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1b10821d8380..44407e703b5f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5948,7 +5948,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 			}
 		} else if (shallowest_idle_cpu == -1) {
 			load = weighted_cpuload(cpu_rq(i));
-			if (load < min_load || (load == min_load && i == this_cpu)) {
+			if (load < min_load) {
 				min_load = load;
 				least_loaded_cpu = i;
 			}
-- 
2.15.1.504.g5279b80103-goog
Commit-ID:  18cec7e0ddd5e28b7722f7049d715873373be3e9
Gitweb:     https://git.kernel.org/tip/18cec7e0ddd5e28b7722f7049d715873373be3e9
Author:     Joel Fernandes <joelaf@google.com>
AuthorDate: Fri, 15 Dec 2017 07:39:44 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 10 Jan 2018 11:30:30 +0100
sched/fair: Remove impossible condition from find_idlest_group_cpu()
find_idlest_group_cpu() goes through the CPUs of a group previously selected by find_idlest_group(). find_idlest_group() returns NULL if the local group is the selected one, and doesn't execute find_idlest_group_cpu() if the group to which 'cpu' belongs is chosen. So we're always guaranteed to call find_idlest_group_cpu() with a group to which 'cpu' is non-local.

This makes one of the conditions in find_idlest_group_cpu() an impossible one, which we can get rid of.
Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Brendan Jackman <brendan.jackman@arm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Android Kernel <kernel-team@android.com>
Cc: Atish Patra <atish.patra@oracle.com>
Cc: Chris Redpath <Chris.Redpath@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: EAS Dev <eas-dev@lists.linaro.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Josef Bacik <jbacik@fb.com>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Ramussen <morten.rasmussen@arm.com>
Cc: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Rohit Jain <rohit.k.jain@oracle.com>
Cc: Saravana Kannan <skannan@quicinc.com>
Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Cc: Steve Muckle <smuckle@google.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vikram Mulukutla <markivx@codeaurora.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Link: http://lkml.kernel.org/r/20171215153944.220146-3-joelaf@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6e775ac..3e7606d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5950,7 +5950,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 			}
 		} else if (shallowest_idle_cpu == -1) {
 			load = weighted_cpuload(cpu_rq(i));
-			if (load < min_load || (load == min_load && i == this_cpu)) {
+			if (load < min_load) {
 				min_load = load;
 				least_loaded_cpu = i;
 			}