Hi Rafael,
As you suggested yesterday, I have updated the governor core to keep per-cpu timers and a single shared work item for the entire policy.

More details are in the changelogs; I hope they are better this time.

I have tested this with the test suite I created some time back while fixing locking issues in the governors. It runs all kinds of things in parallel that could have broken the series (the test cases are taken from issues that various people have reported around governors over time). It works fine.

The first patch is really a bug fix, which I only noticed today :). The next three are minor cleanups to prepare for the big change. The fifth is the main patch that does the conversion, and the final one is cherry-picked from the last series as it is still relevant.
This is rebased over: pm/bleeding-edge + "[PATCH V4] cpufreq: governor: Quit work-handlers early if governor is stopped", which I sent separately this morning with the changelog you suggested.
Please see if this meets your expectations or not :)
Viresh Kumar (6):
  cpufreq: ondemand: Update sampling rate only for concerned policies
  cpufreq: ondemand: Work is guaranteed to be pending
  cpufreq: governor: Pass policy as argument to ->gov_dbs_timer()
  cpufreq: governor: initialize/destroy timer_mutex with 'shared'
  cpufreq: governor: replace per-cpu delayed work with timers
  cpufreq: ondemand: update update_sampling_rate() to make it more efficient
 drivers/cpufreq/cpufreq_conservative.c |   6 +-
 drivers/cpufreq/cpufreq_governor.c     | 145 +++++++++++++++++++--------------
 drivers/cpufreq/cpufreq_governor.h     |  23 ++++--
 drivers/cpufreq/cpufreq_ondemand.c     |  66 ++++++++++-----
 4 files changed, 147 insertions(+), 93 deletions(-)
We are comparing policy->governor against cpufreq_gov_ondemand to make sure that we update the sampling rate only for the concerned CPUs. But that isn't enough.

In case of governor_per_policy, there can be multiple instances of the ondemand governor, and with the current code we will always end up updating all of them. What we rather need to do is compare dbs_data with policy->governor_data, which will match only for the policies governed by this dbs_data.

This code is also racy, as the governor might be getting stopped at that time and we may end up scheduling work for a policy which we have just disabled.

Fix that by protecting the entire function with &od_dbs_cdata.mutex, which will prevent races with policy START/STOP, etc.

Once this locking is in place, we can safely get the policy via the per-cpu dbs_info.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_ondemand.c | 40 ++++++++++++++++++++++++++++----------
 1 file changed, 30 insertions(+), 10 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 03ac6ce54042..0800a937607b 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -247,25 +247,43 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 		unsigned int new_rate)
 {
 	struct od_dbs_tuners *od_tuners = dbs_data->tuners;
+	struct od_cpu_dbs_info_s *dbs_info;
+	struct cpu_dbs_info *cdbs;
+	struct cpufreq_policy *policy;
+	struct cpu_common_dbs_info *shared;
+	unsigned long next_sampling, appointed_at;
 	int cpu;

 	od_tuners->sampling_rate = new_rate = max(new_rate,
 			dbs_data->min_sampling_rate);

+	/*
+	 * Lock governor so that governor start/stop can't execute in parallel.
+	 */
+	mutex_lock(&od_dbs_cdata.mutex);
+
 	for_each_online_cpu(cpu) {
-		struct cpufreq_policy *policy;
-		struct od_cpu_dbs_info_s *dbs_info;
-		unsigned long next_sampling, appointed_at;
+		dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
+		cdbs = &dbs_info->cdbs;
+		shared = cdbs->shared;

-		policy = cpufreq_cpu_get(cpu);
-		if (!policy)
+		/*
+		 * A valid shared and shared->policy means governor hasn't
+		 * stopped or exited yet.
+		 */
+		if (!shared || !shared->policy)
 			continue;
-		if (policy->governor != &cpufreq_gov_ondemand) {
-			cpufreq_cpu_put(policy);
+
+		policy = shared->policy;
+
+		/*
+		 * Update sampling rate for CPUs whose policy is governed by
+		 * dbs_data. In case of governor_per_policy, only a single
+		 * policy will be governed by dbs_data, otherwise there can be
+		 * multiple policies that are governed by the same dbs_data.
+		 */
+		if (dbs_data != policy->governor_data)
 			continue;
-		}
-		dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
-		cpufreq_cpu_put(policy);

 		if (!delayed_work_pending(&dbs_info->cdbs.dwork))
 			continue;
@@ -281,6 +299,8 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 		}
 	}
+
+	mutex_unlock(&od_dbs_cdata.mutex);
 }

 static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
On Thursday, October 29, 2015 05:57:20 PM Viresh Kumar wrote:
We are comparing policy->governor against cpufreq_gov_ondemand to make sure that we update the sampling rate only for the concerned CPUs. But that isn't enough.

In case of governor_per_policy, there can be multiple instances of the ondemand governor, and with the current code we will always end up updating all of them. What we rather need to do is compare dbs_data with policy->governor_data, which will match only for the policies governed by this dbs_data.

This code is also racy, as the governor might be getting stopped at that time and we may end up scheduling work for a policy which we have just disabled.

Fix that by protecting the entire function with &od_dbs_cdata.mutex, which will prevent races with policy START/STOP, etc.

Once this locking is in place, we can safely get the policy via the per-cpu dbs_info.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_ondemand.c | 40 ++++++++++++++++++++++++++++----------
 1 file changed, 30 insertions(+), 10 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 03ac6ce54042..0800a937607b 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -247,25 +247,43 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 		unsigned int new_rate)
 {
 	struct od_dbs_tuners *od_tuners = dbs_data->tuners;
+	struct od_cpu_dbs_info_s *dbs_info;
+	struct cpu_dbs_info *cdbs;
+	struct cpufreq_policy *policy;
+	struct cpu_common_dbs_info *shared;
+	unsigned long next_sampling, appointed_at;
 	int cpu;

 	od_tuners->sampling_rate = new_rate = max(new_rate,
 			dbs_data->min_sampling_rate);

+	/*
+	 * Lock governor so that governor start/stop can't execute in parallel.
+	 */
+	mutex_lock(&od_dbs_cdata.mutex);
+
 	for_each_online_cpu(cpu) {
-		struct cpufreq_policy *policy;
-		struct od_cpu_dbs_info_s *dbs_info;
-		unsigned long next_sampling, appointed_at;
Why are you moving the definitions of variables away from here?
If there's any technical reason, it should be mentioned in the changelog, but if it's just aesthetics or something similar, it doesn't belong in this patch.
+		dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
+		cdbs = &dbs_info->cdbs;
+		shared = cdbs->shared;

-		policy = cpufreq_cpu_get(cpu);
-		if (!policy)
+		/*
+		 * A valid shared and shared->policy means governor hasn't
+		 * stopped or exited yet.
+		 */
+		if (!shared || !shared->policy)
 			continue;
-		if (policy->governor != &cpufreq_gov_ondemand) {
-			cpufreq_cpu_put(policy);
+
+		policy = shared->policy;
+
+		/*
+		 * Update sampling rate for CPUs whose policy is governed by
+		 * dbs_data. In case of governor_per_policy, only a single
+		 * policy will be governed by dbs_data, otherwise there can be
+		 * multiple policies that are governed by the same dbs_data.
+		 */
+		if (dbs_data != policy->governor_data)
 			continue;
-		}
-		dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
-		cpufreq_cpu_put(policy);

 		if (!delayed_work_pending(&dbs_info->cdbs.dwork))
 			continue;
@@ -281,6 +299,8 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 		}
 	}
+
+	mutex_unlock(&od_dbs_cdata.mutex);
 }

 static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
Thanks,
Rafael
We are guaranteed to have works scheduled for policy->cpus, as the policy isn't stopped yet, and so there is no need to check that again. Drop the check.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_ondemand.c | 3 ---
 1 file changed, 3 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 0800a937607b..edab71528b8b 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -285,9 +285,6 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 		if (dbs_data != policy->governor_data)
 			continue;

-		if (!delayed_work_pending(&dbs_info->cdbs.dwork))
-			continue;
-
 		next_sampling = jiffies + usecs_to_jiffies(new_rate);
 		appointed_at = dbs_info->cdbs.dwork.timer.expires;
Pass 'policy' as argument to ->gov_dbs_timer() instead of cdbs and dbs_data.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_conservative.c | 6 +++---
 drivers/cpufreq/cpufreq_governor.c     | 2 +-
 drivers/cpufreq/cpufreq_governor.h     | 3 +--
 drivers/cpufreq/cpufreq_ondemand.c     | 5 ++---
 4 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 1fa1deb6e91f..606ad74abe6e 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -115,13 +115,13 @@ static void cs_check_cpu(int cpu, unsigned int load)
 	}
 }

-static unsigned int cs_dbs_timer(struct cpu_dbs_info *cdbs,
-				 struct dbs_data *dbs_data, bool modify_all)
+static unsigned int cs_dbs_timer(struct cpufreq_policy *policy, bool modify_all)
 {
+	struct dbs_data *dbs_data = policy->governor_data;
 	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;

 	if (modify_all)
-		dbs_check_cpu(dbs_data, cdbs->shared->policy->cpu);
+		dbs_check_cpu(dbs_data, policy->cpu);

 	return delay_for_sampling_rate(cs_tuners->sampling_rate);
 }
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index b260576ddb12..cdcb56a49b28 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -253,7 +253,7 @@ static void dbs_timer(struct work_struct *work)
 	if (!need_load_eval(cdbs->shared, sampling_rate))
 		modify_all = false;

-	delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, modify_all);
+	delay = dbs_data->cdata->gov_dbs_timer(policy, modify_all);
 	gov_queue_work(dbs_data, policy, delay, modify_all);

 unlock:
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 5621bb03e874..0c7589016b6c 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -209,8 +209,7 @@ struct common_dbs_data {
 	struct cpu_dbs_info *(*get_cpu_cdbs)(int cpu);
 	void *(*get_cpu_dbs_info_s)(int cpu);
-	unsigned int (*gov_dbs_timer)(struct cpu_dbs_info *cdbs,
-				      struct dbs_data *dbs_data,
+	unsigned int (*gov_dbs_timer)(struct cpufreq_policy *policy,
 				      bool modify_all);
 	void (*gov_check_cpu)(int cpu, unsigned int load);
 	int (*init)(struct dbs_data *dbs_data, bool notify);
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index edab71528b8b..0848c8ac6847 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -191,10 +191,9 @@ static void od_check_cpu(int cpu, unsigned int load)
 	}
 }

-static unsigned int od_dbs_timer(struct cpu_dbs_info *cdbs,
-				 struct dbs_data *dbs_data, bool modify_all)
+static unsigned int od_dbs_timer(struct cpufreq_policy *policy, bool modify_all)
 {
-	struct cpufreq_policy *policy = cdbs->shared->policy;
+	struct dbs_data *dbs_data = policy->governor_data;
 	unsigned int cpu = policy->cpu;
 	struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info,
 						      cpu);
timer_mutex needs to be initialized only when the memory for 'shared' is allocated and, similarly, it needs to be destroyed only when the memory for 'shared' is freed.
There is no need to do the same every time we start/stop the governor. Move code to initialize/destroy timer_mutex to the relevant places.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_governor.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index cdcb56a49b28..999e1f6addf9 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -287,6 +287,7 @@ static int alloc_common_dbs_info(struct cpufreq_policy *policy,
 	for_each_cpu(j, policy->related_cpus)
 		cdata->get_cpu_cdbs(j)->shared = shared;

+	mutex_init(&shared->timer_mutex);
 	return 0;
 }

@@ -297,6 +298,8 @@ static void free_common_dbs_info(struct cpufreq_policy *policy,
 	struct cpu_common_dbs_info *shared = cdbs->shared;
 	int j;

+	mutex_destroy(&shared->timer_mutex);
+
 	for_each_cpu(j, policy->cpus)
 		cdata->get_cpu_cdbs(j)->shared = NULL;

@@ -433,7 +436,6 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy,

 	shared->policy = policy;
 	shared->time_stamp = ktime_get();
-	mutex_init(&shared->timer_mutex);

 	for_each_cpu(j, policy->cpus) {
 		struct cpu_dbs_info *j_cdbs = cdata->get_cpu_cdbs(j);
@@ -493,8 +495,6 @@ static int cpufreq_governor_stop(struct cpufreq_policy *policy,
 	mutex_unlock(&shared->timer_mutex);

 	gov_cancel_work(dbs_data, policy);
-
-	mutex_destroy(&shared->timer_mutex);
 	return 0;
 }
cpufreq governors evaluate load at the sampling rate, and based on that they update the frequency for a group of CPUs belonging to the same cpufreq policy.

This is required to be done in a single thread for all policy->cpus, but because we don't want to wake up idle CPUs just for that, we use deferrable work for it. Had we used a single delayed deferrable work for the entire policy, the CPU required to run its handler could be idle, and we might end up not changing the frequency for the entire group despite load variations.

And so we were forced to keep per-cpu works: only the one that expires first needs to do the real work, and the others are rescheduled for the next sampling time.

We have been using the more complex solution until now: a delayed deferrable work, which is a combination of a timer and a work item.

This can be made lightweight by keeping per-cpu deferrable timers with a single work item, which is queued by the first timer that expires.

This patch does just that. The important changes are:
- The timer handler runs in irq context, and so we need a spinlock instead of timer_mutex. A separate timer_lock is therefore created. This also makes the roles of the mutex and the lock clear, as we know exactly what each one protects.
- A new field 'skip_work' is added to track when the timer handlers can queue a work. More comments are present in the code.
Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_governor.c | 139 +++++++++++++++++++++----------------
 drivers/cpufreq/cpufreq_governor.h |  20 ++++--
 drivers/cpufreq/cpufreq_ondemand.c |   8 +--
 3 files changed, 98 insertions(+), 69 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 999e1f6addf9..a3f9bc9b98e9 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -158,47 +158,52 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu)
 }
 EXPORT_SYMBOL_GPL(dbs_check_cpu);

-static inline void __gov_queue_work(int cpu, struct dbs_data *dbs_data,
-				    unsigned int delay)
+void gov_add_timers(struct cpufreq_policy *policy, unsigned int delay)
 {
-	struct cpu_dbs_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
-
-	mod_delayed_work_on(cpu, system_wq, &cdbs->dwork, delay);
-}
-
-void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
-		    unsigned int delay, bool all_cpus)
-{
-	int i;
+	struct dbs_data *dbs_data = policy->governor_data;
+	struct cpu_dbs_info *cdbs;
+	int cpu;

-	if (!all_cpus) {
-		/*
-		 * Use raw_smp_processor_id() to avoid preemptible warnings.
-		 * We know that this is only called with all_cpus == false from
-		 * works that have been queued with *_work_on() functions and
-		 * those works are canceled during CPU_DOWN_PREPARE so they
-		 * can't possibly run on any other CPU.
-		 */
-		__gov_queue_work(raw_smp_processor_id(), dbs_data, delay);
-	} else {
-		for_each_cpu(i, policy->cpus)
-			__gov_queue_work(i, dbs_data, delay);
+	for_each_cpu(cpu, policy->cpus) {
+		cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
+		cdbs->timer.expires = jiffies + delay;
+		add_timer_on(&cdbs->timer, cpu);
 	}
 }
-EXPORT_SYMBOL_GPL(gov_queue_work);
+EXPORT_SYMBOL_GPL(gov_add_timers);

-static inline void gov_cancel_work(struct dbs_data *dbs_data,
-				   struct cpufreq_policy *policy)
+static inline void gov_cancel_timers(struct cpufreq_policy *policy)
 {
+	struct dbs_data *dbs_data = policy->governor_data;
 	struct cpu_dbs_info *cdbs;
 	int i;

 	for_each_cpu(i, policy->cpus) {
 		cdbs = dbs_data->cdata->get_cpu_cdbs(i);
-		cancel_delayed_work_sync(&cdbs->dwork);
+		del_timer_sync(&cdbs->timer);
 	}
 }

+void gov_cancel_work(struct cpu_common_dbs_info *shared)
+{
+	unsigned long flags;
+
+	/*
+	 * No work will be queued from timer handlers after skip_work is
+	 * updated. And so we can safely cancel the work first and then the
+	 * timers.
+	 */
+	spin_lock_irqsave(&shared->timer_lock, flags);
+	shared->skip_work++;
+	spin_unlock_irqrestore(&shared->timer_lock, flags);
+
+	cancel_work_sync(&shared->work);
+
+	gov_cancel_timers(shared->policy);
+
+	shared->skip_work = 0;
+}
+
 /* Will return if we need to evaluate cpu load again or not */
 static bool need_load_eval(struct cpu_common_dbs_info *shared,
 			   unsigned int sampling_rate)
@@ -217,29 +222,21 @@ static bool need_load_eval(struct cpu_common_dbs_info *shared,
 	return true;
 }

-static void dbs_timer(struct work_struct *work)
+static void dbs_work_handler(struct work_struct *work)
 {
-	struct cpu_dbs_info *cdbs = container_of(work, struct cpu_dbs_info,
-						 dwork.work);
-	struct cpu_common_dbs_info *shared = cdbs->shared;
+	struct cpu_common_dbs_info *shared = container_of(work, struct
+					cpu_common_dbs_info, work);
 	struct cpufreq_policy *policy;
 	struct dbs_data *dbs_data;
 	unsigned int sampling_rate, delay;
-	bool modify_all = true;
-
-	mutex_lock(&shared->timer_mutex);
+	bool eval_load;

 	policy = shared->policy;
-
-	/*
-	 * Governor might already be disabled and there is no point continuing
-	 * with the work-handler.
-	 */
-	if (!policy)
-		goto unlock;
-
 	dbs_data = policy->governor_data;

+	/* Kill all timers */
+	gov_cancel_timers(policy);
+
 	if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
 		struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;

@@ -250,14 +247,44 @@ static void dbs_timer(struct work_struct *work)
 		sampling_rate = od_tuners->sampling_rate;
 	}

-	if (!need_load_eval(cdbs->shared, sampling_rate))
-		modify_all = false;
+	eval_load = need_load_eval(shared, sampling_rate);

-	delay = dbs_data->cdata->gov_dbs_timer(policy, modify_all);
-	gov_queue_work(dbs_data, policy, delay, modify_all);
+	/*
+	 * Make sure cpufreq_governor_limits() isn't evaluating load in
+	 * parallel.
+	 */
+	mutex_lock(&shared->timer_mutex);
+	delay = dbs_data->cdata->gov_dbs_timer(policy, eval_load);
+	mutex_unlock(&shared->timer_mutex);
+
+	shared->skip_work--;
+	gov_add_timers(policy, delay);
+}
+
+static void dbs_timer_handler(unsigned long data)
+{
+	struct cpu_dbs_info *cdbs = (struct cpu_dbs_info *)data;
+	struct cpu_common_dbs_info *shared = cdbs->shared;
+	struct cpufreq_policy *policy;
+	unsigned long flags;
+
+	spin_lock_irqsave(&shared->timer_lock, flags);
+	policy = shared->policy;
+
+	/*
+	 * Timer handler isn't allowed to queue work at the moment, because:
+	 * - Another timer handler has done that
+	 * - We are stopping the governor
+	 * - Or we are updating the sampling rate of ondemand governor
+	 */
+	if (shared->skip_work)
+		goto unlock;
+
+	shared->skip_work++;
+	queue_work(system_wq, &shared->work);

 unlock:
-	mutex_unlock(&shared->timer_mutex);
+	spin_unlock_irqrestore(&shared->timer_lock, flags);
 }

 static void set_sampling_rate(struct dbs_data *dbs_data,
@@ -288,6 +315,8 @@ static int alloc_common_dbs_info(struct cpufreq_policy *policy,
 		cdata->get_cpu_cdbs(j)->shared = shared;

 	mutex_init(&shared->timer_mutex);
+	spin_lock_init(&shared->timer_lock);
+	INIT_WORK(&shared->work, dbs_work_handler);
 	return 0;
 }

@@ -452,7 +481,9 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy,
 		if (ignore_nice)
 			j_cdbs->prev_cpu_nice =
 				kcpustat_cpu(j).cpustat[CPUTIME_NICE];

-		INIT_DEFERRABLE_WORK(&j_cdbs->dwork, dbs_timer);
+		__setup_timer(&j_cdbs->timer, dbs_timer_handler,
+			      (unsigned long)j_cdbs,
+			      TIMER_DEFERRABLE | TIMER_IRQSAFE);
 	}

 	if (cdata->governor == GOV_CONSERVATIVE) {
@@ -470,8 +501,7 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy,
 		od_ops->powersave_bias_init_cpu(cpu);
 	}

-	gov_queue_work(dbs_data, policy, delay_for_sampling_rate(sampling_rate),
-		       true);
+	gov_add_timers(policy, delay_for_sampling_rate(sampling_rate));
 	return 0;
 }

@@ -485,16 +515,9 @@ static int cpufreq_governor_stop(struct cpufreq_policy *policy,
 	if (!shared || !shared->policy)
 		return -EBUSY;

-	/*
-	 * Work-handler must see this updated, as it should not proceed any
-	 * further after governor is disabled. And so timer_mutex is taken while
-	 * updating this value.
-	 */
-	mutex_lock(&shared->timer_mutex);
+	gov_cancel_work(shared);
 	shared->policy = NULL;
-	mutex_unlock(&shared->timer_mutex);

-	gov_cancel_work(dbs_data, policy);
 	return 0;
 }

diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 0c7589016b6c..76742902491e 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -132,12 +132,20 @@ static void *get_cpu_dbs_info_s(int cpu)			\
 struct cpu_common_dbs_info {
 	struct cpufreq_policy *policy;
 	/*
-	 * percpu mutex that serializes governor limit change with dbs_timer
-	 * invocation. We do not want dbs_timer to run when user is changing
-	 * the governor or limits.
+	 * Per policy mutex that serializes load evaluation from limit-change
+	 * and work-handler.
 	 */
 	struct mutex timer_mutex;
+
+	/*
+	 * Per policy lock that serializes access to queuing work from timer
+	 * handlers.
+	 */
+	spinlock_t timer_lock;
+
 	ktime_t time_stamp;
+	unsigned int skip_work;
+	struct work_struct work;
 };

 /* Per cpu structures */
@@ -152,7 +160,7 @@ struct cpu_dbs_info {
 	 * wake-up from idle.
 	 */
 	unsigned int prev_load;
-	struct delayed_work dwork;
+	struct timer_list timer;
 	struct cpu_common_dbs_info *shared;
 };

@@ -268,11 +276,11 @@ static ssize_t show_sampling_rate_min_gov_pol			\

 extern struct mutex cpufreq_governor_lock;

+void gov_add_timers(struct cpufreq_policy *policy, unsigned int delay);
+void gov_cancel_work(struct cpu_common_dbs_info *shared);
 void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
 int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 		struct common_dbs_data *cdata, unsigned int event);
-void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
-		unsigned int delay, bool all_cpus);
 void od_register_powersave_bias_handler(unsigned int (*f)
 		(struct cpufreq_policy *, unsigned int, unsigned int),
 		unsigned int powersave_bias);
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 0848c8ac6847..9e0293c23285 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -285,13 +285,11 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 			continue;

 		next_sampling = jiffies + usecs_to_jiffies(new_rate);
-		appointed_at = dbs_info->cdbs.dwork.timer.expires;
+		appointed_at = dbs_info->cdbs.timer.expires;

 		if (time_before(next_sampling, appointed_at)) {
-			cancel_delayed_work_sync(&dbs_info->cdbs.dwork);
-
-			gov_queue_work(dbs_data, policy,
-				       usecs_to_jiffies(new_rate), true);
+			gov_cancel_work(shared);
+			gov_add_timers(policy, usecs_to_jiffies(new_rate));
 		}
 	}
Hi Viresh,
On 29 October 2015 at 08:27, Viresh Kumar viresh.kumar@linaro.org wrote:
cpufreq governors evaluate load at the sampling rate, and based on that they update the frequency for a group of CPUs belonging to the same cpufreq policy.

This is required to be done in a single thread for all policy->cpus, but because we don't want to wake up idle CPUs just for that, we use deferrable work for it. Had we used a single delayed deferrable work for the entire policy, the CPU required to run its handler could be idle, and we might end up not changing the frequency for the entire group despite load variations.

And so we were forced to keep per-cpu works: only the one that expires first needs to do the real work, and the others are rescheduled for the next sampling time.

We have been using the more complex solution until now: a delayed deferrable work, which is a combination of a timer and a work item.

This can be made lightweight by keeping per-cpu deferrable timers with a single work item, which is queued by the first timer that expires.
"Single shared work item" would perhaps make it a bit clearer.
This patch does just that. The important changes are:
- The timer handler runs in irq context, and so we need a spinlock instead of timer_mutex. A separate timer_lock is therefore created. This also makes the roles of the mutex and the lock clear, as we know exactly what each one protects.
- A new field 'skip_work' is added to track when the timer handlers can queue a work. More comments are present in the code.
Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
 drivers/cpufreq/cpufreq_governor.c | 139 +++++++++++++++++++++----------------
 drivers/cpufreq/cpufreq_governor.h |  20 ++++--
 drivers/cpufreq/cpufreq_ondemand.c |   8 +--
 3 files changed, 98 insertions(+), 69 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 999e1f6addf9..a3f9bc9b98e9 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
[..]
+void gov_cancel_work(struct cpu_common_dbs_info *shared)
+{
+	unsigned long flags;
+
+	/*
+	 * No work will be queued from timer handlers after skip_work is
+	 * updated. And so we can safely cancel the work first and then the
+	 * timers.
+	 */
+	spin_lock_irqsave(&shared->timer_lock, flags);
+	shared->skip_work++;
+	spin_unlock_irqrestore(&shared->timer_lock, flags);
+
+	cancel_work_sync(&shared->work);
+
+	gov_cancel_timers(shared->policy);
+
+	shared->skip_work = 0;
Why doesn't this require the spin_lock protection?
+}
 /* Will return if we need to evaluate cpu load again or not */
 static bool need_load_eval(struct cpu_common_dbs_info *shared,
 			   unsigned int sampling_rate)
@@ -217,29 +222,21 @@ static bool need_load_eval(struct cpu_common_dbs_info *shared,
 	return true;
 }
-static void dbs_timer(struct work_struct *work)
+static void dbs_work_handler(struct work_struct *work)
 {
-	struct cpu_dbs_info *cdbs = container_of(work, struct cpu_dbs_info,
-						 dwork.work);
-	struct cpu_common_dbs_info *shared = cdbs->shared;
+	struct cpu_common_dbs_info *shared = container_of(work, struct
+					cpu_common_dbs_info, work);
 	struct cpufreq_policy *policy;
 	struct dbs_data *dbs_data;
 	unsigned int sampling_rate, delay;
-	bool modify_all = true;
-
-	mutex_lock(&shared->timer_mutex);
+	bool eval_load;

 	policy = shared->policy;
-
-	/*
-	 * Governor might already be disabled and there is no point continuing
-	 * with the work-handler.
-	 */
-	if (!policy)
-		goto unlock;
-
 	dbs_data = policy->governor_data;

+	/* Kill all timers */
+	gov_cancel_timers(policy);
+
 	if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
 		struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;

@@ -250,14 +247,44 @@ static void dbs_timer(struct work_struct *work)
 		sampling_rate = od_tuners->sampling_rate;
 	}

-	if (!need_load_eval(cdbs->shared, sampling_rate))
-		modify_all = false;
+	eval_load = need_load_eval(shared, sampling_rate);

-	delay = dbs_data->cdata->gov_dbs_timer(policy, modify_all);
-	gov_queue_work(dbs_data, policy, delay, modify_all);
+	/*
+	 * Make sure cpufreq_governor_limits() isn't evaluating load in
+	 * parallel.
+	 */
+	mutex_lock(&shared->timer_mutex);
+	delay = dbs_data->cdata->gov_dbs_timer(policy, eval_load);
+	mutex_unlock(&shared->timer_mutex);
+
+	shared->skip_work--;
Ditto.
+	gov_add_timers(policy, delay);
+}
+
+static void dbs_timer_handler(unsigned long data)
+{
+	struct cpu_dbs_info *cdbs = (struct cpu_dbs_info *)data;
+	struct cpu_common_dbs_info *shared = cdbs->shared;
+	struct cpufreq_policy *policy;
+	unsigned long flags;
+
+	spin_lock_irqsave(&shared->timer_lock, flags);
+	policy = shared->policy;
+
+	/*
+	 * Timer handler isn't allowed to queue work at the moment, because:
+	 * - Another timer handler has done that
+	 * - We are stopping the governor
+	 * - Or we are updating the sampling rate of ondemand governor
+	 */
+	if (shared->skip_work)
+		goto unlock;
+
+	shared->skip_work++;
+	queue_work(system_wq, &shared->work);
So, IIUC, if this function gets called back to back and the first work hasn't been dequeued yet, then this queue_work() will not really enqueue anything, since queue_work_on() will return false? If so, does that mean we're skipping more recent CPU freq requests? Should we cancel the past work if it hasn't been serviced?
Regards,
Ashwin.
Hi Ashwin,
On 30-10-15, 16:46, Ashwin Chaugule wrote:
On 29 October 2015 at 08:27, Viresh Kumar viresh.kumar@linaro.org wrote:
This could be made lightweight by keeping per-cpu deferred timers with a single work item, which is scheduled by the first timer that expires.
Single shared work item - would perhaps make it a bit more clear.
Okay, if I need to repost this, I will reword it.
+void gov_cancel_work(struct cpu_common_dbs_info *shared)
+{
+	unsigned long flags;
+
+	/*
+	 * No work will be queued from timer handlers after skip_work is
+	 * updated. And so we can safely cancel the work first and then the
+	 * timers.
+	 */
+	spin_lock_irqsave(&shared->timer_lock, flags);
+	shared->skip_work++;
+	spin_unlock_irqrestore(&shared->timer_lock, flags);
+
+	cancel_work_sync(&shared->work);
+
+	gov_cancel_timers(shared->policy);
+
+	shared->skip_work = 0;
Why doesn't this require the spin_lock protection?
Because there is no race here. We have already removed all queued timers and the shared work.
-static void dbs_timer(struct work_struct *work)
+static void dbs_work_handler(struct work_struct *work)
 {

[..]

+	mutex_lock(&shared->timer_mutex);
+	delay = dbs_data->cdata->gov_dbs_timer(policy, eval_load);
+	mutex_unlock(&shared->timer_mutex);
+
+	shared->skip_work--;
Ditto.
Again, there is no race here. We have already removed the queued timers for the entire policy. The only other user is the gov_cancel_work() thread (called while stopping the governor or updating the sampling rate), which doesn't depend on this being decremented, as it will wait for the work to finish.
+	gov_add_timers(policy, delay);
+}
+
+static void dbs_timer_handler(unsigned long data)
+{
+	struct cpu_dbs_info *cdbs = (struct cpu_dbs_info *)data;
+	struct cpu_common_dbs_info *shared = cdbs->shared;
+	struct cpufreq_policy *policy;
+	unsigned long flags;
+
+	spin_lock_irqsave(&shared->timer_lock, flags);
+	policy = shared->policy;
+
+	/*
+	 * Timer handler isn't allowed to queue work at the moment, because:
+	 * - Another timer handler has done that
+	 * - We are stopping the governor
+	 * - Or we are updating the sampling rate of ondemand governor
+	 */
+	if (shared->skip_work)
+		goto unlock;
+
+	shared->skip_work++;
+	queue_work(system_wq, &shared->work);
So, IIUC, if this function gets called back to back and the first work hasn't been dequeued yet, then this queue_work() will not really enqueue anything, since queue_work_on() will return false?
In that case we wouldn't reach queue_work() in the first place, as skip_work will be incremented on the first call and the second call will simply return early.
If so, does that mean we're skipping more recent CPU freq requests? Should we cancel the past work if it hasn't been serviced?
It doesn't matter. It's only the work handler that is going to do some useful work, and there is no difference between the first and the second request.
Hi Viresh,
On 30 October 2015 at 22:36, Viresh Kumar viresh.kumar@linaro.org wrote:
Hi Ashwin,
On 30-10-15, 16:46, Ashwin Chaugule wrote:
On 29 October 2015 at 08:27, Viresh Kumar viresh.kumar@linaro.org wrote:
This could be made lightweight by keeping per-cpu deferred timers with a single work item, which is scheduled by the first timer that expires.
Single shared work item - would perhaps make it a bit more clear.
Okay, in case that I need to repost this, I will reword it.
Thanks.
+void gov_cancel_work(struct cpu_common_dbs_info *shared) +{
unsigned long flags;
/*
* No work will be queued from timer handlers after skip_work is
* updated. And so we can safely cancel the work first and then the
* timers.
*/
spin_lock_irqsave(&shared->timer_lock, flags);
shared->skip_work++;
spin_unlock_irqrestore(&shared->timer_lock, flags);
cancel_work_sync(&shared->work);
gov_cancel_timers(shared->policy);
shared->skip_work = 0;
Why doesnt this require the spin_lock protection?
Because there is no race here. We have already removed all queued-timers and the shared work.
I see.
-static void dbs_timer(struct work_struct *work) +static void dbs_work_handler(struct work_struct *work) {
mutex_lock(&shared->timer_mutex);
delay = dbs_data->cdata->gov_dbs_timer(policy, eval_load);
mutex_unlock(&shared->timer_mutex);
shared->skip_work--;
Ditto.
Again, there is no race here. We have already removed the queued-timers for the entire policy.
Makes sense.
The only other user is the gov_cancel_work() thread (which is called while stopping the governor or updating the sampling rate), which doesn't depend on this being decremented as that will wait for the work to finish.
gov_add_timers(policy, delay);
+}
+static void dbs_timer_handler(unsigned long data) +{
struct cpu_dbs_info *cdbs = (struct cpu_dbs_info *)data;
struct cpu_common_dbs_info *shared = cdbs->shared;
struct cpufreq_policy *policy;
unsigned long flags;
spin_lock_irqsave(&shared->timer_lock, flags);
policy = shared->policy;
/*
* Timer handler isn't allowed to queue work at the moment, because:
* - Another timer handler has done that
* - We are stopping the governor
* - Or we are updating the sampling rate of ondemand governor
*/
if (shared->skip_work)
goto unlock;
shared->skip_work++;
queue_work(system_wq, &shared->work);
So, IIUC, in the event that this function gets called back to back and the first Work hasn't dequeued yet, then this queue_work() will not really enqueue, since queue_work_on() will return False?
In that case we wouldn't reach queue_work() in the first place as skip_work will be incremented on the first call and the second call will simply return early.
If so, then does it mean we're skipping more recent CPU freq requests? Should we cancel past Work if it hasn't been serviced?
It doesn't matter. It's only the work handler that does any useful work, and there is no difference between the first and the second request.
Yea, thanks for clarifying. I think I missed the del_timer_sync() which had raised my doubts.
Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Regards, Ashwin.
Am Donnerstag, den 29.10.2015, 17:57 +0530 schrieb Viresh Kumar:
cpufreq governors evaluate load at the sampling rate and, based on that, update the frequency for a group of CPUs belonging to the same cpufreq policy.
This must be done from a single thread for all policy->cpus, but because we don't want to wake up idle CPUs just for that, we use deferrable works. Had we used a single delayed deferrable work for the entire policy, there was a chance that the CPU required to run the handler would be idle, and we could end up not changing the frequency for the entire group despite load variations.
And so we were forced to keep per-cpu works, where only the one that expires first needs to do the real work, while the others are rescheduled for the next sampling time.
Until now we have used the heavier solution: a per-cpu delayed deferrable work, which is a combination of a timer and a work item.
This could be made lightweight by keeping per-cpu deferred timers with a single work item, which is scheduled by the first timer that expires.
This patch does just that and here are important changes:
- The timer handler runs in irq context, so we must use a spinlock instead of the timer_mutex; a separate timer_lock is created for this. It also makes the roles of the mutex and the lock clear, as we know exactly what each protects.
- A new field 'skip_work' is added to track when the timer handlers can queue a work. More comments present in code.
I don't want to block this patch on that, but maybe as a thought for further consideration: Wouldn't it make sense to use a single unbound deferrable work item for this? There was some work to make this possible already: "timer: make deferrable cpu unbound timers really not bound to a cpu"
Regards, Lucas
Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_governor.c | 139 +++++++++++++++++++----------------
 drivers/cpufreq/cpufreq_governor.h |  20 ++++--
 drivers/cpufreq/cpufreq_ondemand.c |   8 +--
 3 files changed, 98 insertions(+), 69 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 999e1f6addf9..a3f9bc9b98e9 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -158,47 +158,52 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu)
 }
 EXPORT_SYMBOL_GPL(dbs_check_cpu);
 
-static inline void __gov_queue_work(int cpu, struct dbs_data *dbs_data,
-		unsigned int delay)
+void gov_add_timers(struct cpufreq_policy *policy, unsigned int delay)
 {
-	struct cpu_dbs_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
-
-	mod_delayed_work_on(cpu, system_wq, &cdbs->dwork, delay);
-}
-
-void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
-		unsigned int delay, bool all_cpus)
-{
-	int i;
+	struct dbs_data *dbs_data = policy->governor_data;
+	struct cpu_dbs_info *cdbs;
+	int cpu;
 
-	if (!all_cpus) {
-		/*
-		 * Use raw_smp_processor_id() to avoid preemptible warnings.
-		 * We know that this is only called with all_cpus == false from
-		 * works that have been queued with *_work_on() functions and
-		 * those works are canceled during CPU_DOWN_PREPARE so they
-		 * can't possibly run on any other CPU.
-		 */
-		__gov_queue_work(raw_smp_processor_id(), dbs_data, delay);
-	} else {
-		for_each_cpu(i, policy->cpus)
-			__gov_queue_work(i, dbs_data, delay);
+	for_each_cpu(cpu, policy->cpus) {
+		cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
+		cdbs->timer.expires = jiffies + delay;
+		add_timer_on(&cdbs->timer, cpu);
 	}
 }
-EXPORT_SYMBOL_GPL(gov_queue_work);
+EXPORT_SYMBOL_GPL(gov_add_timers);
 
-static inline void gov_cancel_work(struct dbs_data *dbs_data,
-				   struct cpufreq_policy *policy)
+static inline void gov_cancel_timers(struct cpufreq_policy *policy)
 {
+	struct dbs_data *dbs_data = policy->governor_data;
 	struct cpu_dbs_info *cdbs;
 	int i;
 
 	for_each_cpu(i, policy->cpus) {
 		cdbs = dbs_data->cdata->get_cpu_cdbs(i);
-		cancel_delayed_work_sync(&cdbs->dwork);
+		del_timer_sync(&cdbs->timer);
 	}
 }
 
+void gov_cancel_work(struct cpu_common_dbs_info *shared)
+{
+	unsigned long flags;
+
+	/*
+	 * No work will be queued from timer handlers after skip_work is
+	 * updated. And so we can safely cancel the work first and then the
+	 * timers.
+	 */
+	spin_lock_irqsave(&shared->timer_lock, flags);
+	shared->skip_work++;
+	spin_unlock_irqrestore(&shared->timer_lock, flags);
+
+	cancel_work_sync(&shared->work);
+
+	gov_cancel_timers(shared->policy);
+
+	shared->skip_work = 0;
+}
 /* Will return if we need to evaluate cpu load again or not */
 static bool need_load_eval(struct cpu_common_dbs_info *shared,
 			   unsigned int sampling_rate)
@@ -217,29 +222,21 @@ static bool need_load_eval(struct cpu_common_dbs_info *shared,
 	return true;
 }
 
-static void dbs_timer(struct work_struct *work)
+static void dbs_work_handler(struct work_struct *work)
 {
-	struct cpu_dbs_info *cdbs = container_of(work, struct cpu_dbs_info,
-						 dwork.work);
-	struct cpu_common_dbs_info *shared = cdbs->shared;
+	struct cpu_common_dbs_info *shared = container_of(work, struct
+					cpu_common_dbs_info, work);
 	struct cpufreq_policy *policy;
 	struct dbs_data *dbs_data;
 	unsigned int sampling_rate, delay;
-	bool modify_all = true;
-
-	mutex_lock(&shared->timer_mutex);
+	bool eval_load;
 
 	policy = shared->policy;
-
-	/*
-	 * Governor might already be disabled and there is no point continuing
-	 * with the work-handler.
-	 */
-	if (!policy)
-		goto unlock;
-
 	dbs_data = policy->governor_data;
 
+	/* Kill all timers */
+	gov_cancel_timers(policy);
+
 	if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
 		struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
 
@@ -250,14 +247,44 @@ static void dbs_timer(struct work_struct *work)
 		sampling_rate = od_tuners->sampling_rate;
 	}
 
-	if (!need_load_eval(cdbs->shared, sampling_rate))
-		modify_all = false;
+	eval_load = need_load_eval(shared, sampling_rate);
 
-	delay = dbs_data->cdata->gov_dbs_timer(policy, modify_all);
-	gov_queue_work(dbs_data, policy, delay, modify_all);
-
-unlock:
+	/*
+	 * Make sure cpufreq_governor_limits() isn't evaluating load in
+	 * parallel.
+	 */
+	mutex_lock(&shared->timer_mutex);
+	delay = dbs_data->cdata->gov_dbs_timer(policy, eval_load);
 	mutex_unlock(&shared->timer_mutex);
+
+	shared->skip_work--;
+	gov_add_timers(policy, delay);
+}
+
+static void dbs_timer_handler(unsigned long data)
+{
+	struct cpu_dbs_info *cdbs = (struct cpu_dbs_info *)data;
+	struct cpu_common_dbs_info *shared = cdbs->shared;
+	struct cpufreq_policy *policy;
+	unsigned long flags;
+
+	spin_lock_irqsave(&shared->timer_lock, flags);
+	policy = shared->policy;
+
+	/*
+	 * Timer handler isn't allowed to queue work at the moment, because:
+	 * - Another timer handler has done that
+	 * - We are stopping the governor
+	 * - Or we are updating the sampling rate of ondemand governor
+	 */
+	if (shared->skip_work)
+		goto unlock;
+
+	shared->skip_work++;
+	queue_work(system_wq, &shared->work);
+
+unlock:
+	spin_unlock_irqrestore(&shared->timer_lock, flags);
 }
 
 static void set_sampling_rate(struct dbs_data *dbs_data,
@@ -288,6 +315,8 @@ static int alloc_common_dbs_info(struct cpufreq_policy *policy,
 		cdata->get_cpu_cdbs(j)->shared = shared;
 
 	mutex_init(&shared->timer_mutex);
+	spin_lock_init(&shared->timer_lock);
+	INIT_WORK(&shared->work, dbs_work_handler);
 	return 0;
 }
 
@@ -452,7 +481,9 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy,
 		if (ignore_nice)
 			j_cdbs->prev_cpu_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];
 
-		INIT_DEFERRABLE_WORK(&j_cdbs->dwork, dbs_timer);
+		__setup_timer(&j_cdbs->timer, dbs_timer_handler,
+			      (unsigned long)j_cdbs,
+			      TIMER_DEFERRABLE | TIMER_IRQSAFE);
 	}
 
 	if (cdata->governor == GOV_CONSERVATIVE) {
@@ -470,8 +501,7 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy,
 		od_ops->powersave_bias_init_cpu(cpu);
 	}
 
-	gov_queue_work(dbs_data, policy, delay_for_sampling_rate(sampling_rate),
-		       true);
+	gov_add_timers(policy, delay_for_sampling_rate(sampling_rate));
 
 	return 0;
 }
 
@@ -485,16 +515,9 @@ static int cpufreq_governor_stop(struct cpufreq_policy *policy,
 	if (!shared || !shared->policy)
 		return -EBUSY;
 
-	/*
-	 * Work-handler must see this updated, as it should not proceed any
-	 * further after governor is disabled. And so timer_mutex is taken while
-	 * updating this value.
-	 */
-	mutex_lock(&shared->timer_mutex);
+	gov_cancel_work(shared);
+
 	shared->policy = NULL;
-	mutex_unlock(&shared->timer_mutex);
 
-	gov_cancel_work(dbs_data, policy);
 	return 0;
 }
 
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 0c7589016b6c..76742902491e 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -132,12 +132,20 @@ static void *get_cpu_dbs_info_s(int cpu)			\
 struct cpu_common_dbs_info {
 	struct cpufreq_policy *policy;
 	/*
-	 * percpu mutex that serializes governor limit change with dbs_timer
-	 * invocation. We do not want dbs_timer to run when user is changing
-	 * the governor or limits.
+	 * Per policy mutex that serializes load evaluation from limit-change
+	 * and work-handler.
 	 */
 	struct mutex timer_mutex;
+
+	/*
+	 * Per policy lock that serializes access to queuing work from timer
+	 * handlers.
+	 */
+	spinlock_t timer_lock;
+
 	ktime_t time_stamp;
+	unsigned int skip_work;
+	struct work_struct work;
 };
 
 /* Per cpu structures */
@@ -152,7 +160,7 @@ struct cpu_dbs_info {
 	 * wake-up from idle.
 	 */
 	unsigned int prev_load;
-	struct delayed_work dwork;
+	struct timer_list timer;
 	struct cpu_common_dbs_info *shared;
 };
 
@@ -268,11 +276,11 @@ static ssize_t show_sampling_rate_min_gov_pol			\
 
 extern struct mutex cpufreq_governor_lock;
 
+void gov_add_timers(struct cpufreq_policy *policy, unsigned int delay);
+void gov_cancel_work(struct cpu_common_dbs_info *shared);
 void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
 int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 		struct common_dbs_data *cdata, unsigned int event);
-void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
-		unsigned int delay, bool all_cpus);
 void od_register_powersave_bias_handler(unsigned int (*f)
 		(struct cpufreq_policy *, unsigned int, unsigned int),
 		unsigned int powersave_bias);
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 0848c8ac6847..9e0293c23285 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -285,13 +285,11 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 			continue;
 
 		next_sampling = jiffies + usecs_to_jiffies(new_rate);
-		appointed_at = dbs_info->cdbs.dwork.timer.expires;
+		appointed_at = dbs_info->cdbs.timer.expires;
 
 		if (time_before(next_sampling, appointed_at)) {
-			cancel_delayed_work_sync(&dbs_info->cdbs.dwork);
-
-			gov_queue_work(dbs_data, policy,
-				       usecs_to_jiffies(new_rate), true);
+			gov_cancel_work(shared);
+			gov_add_timers(policy, usecs_to_jiffies(new_rate));
 		}
 	}
On 30-11-15, 13:05, Lucas Stach wrote:
I don't want to block this patch on that, but maybe as a thought for further consideration: Wouldn't it make sense to use a single unbound deferrable work item for this? There was some work to make this possible already: "timer: make deferrable cpu unbound timers really not bound to a cpu"
Yes, it would be sensible but that work has gone nowhere since April. Once that is merged, we can think about it.
Currently update_sampling_rate() iterates over every online CPU and cancels/queues timers on all policy->cpus each time, so each policy gets processed once per CPU it contains. This needs to be done only once per policy.
Create a cpumask and keep on clearing it as and when we process policies, so that we don't have to traverse through all CPUs of the same policy.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_ondemand.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 9e0293c23285..6eac39124894 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -251,6 +251,7 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 	struct cpufreq_policy *policy;
 	struct cpu_common_dbs_info *shared;
 	unsigned long next_sampling, appointed_at;
+	struct cpumask cpumask;
 	int cpu;
 
 	od_tuners->sampling_rate = new_rate = max(new_rate,
@@ -261,7 +262,9 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 	 */
 	mutex_lock(&od_dbs_cdata.mutex);
 
-	for_each_online_cpu(cpu) {
+	cpumask_copy(&cpumask, cpu_online_mask);
+
+	for_each_cpu(cpu, &cpumask) {
 		dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
 		cdbs = &dbs_info->cdbs;
 		shared = cdbs->shared;
@@ -275,6 +278,9 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 
 		policy = shared->policy;
 
+		/* clear all CPUs of this policy */
+		cpumask_andnot(&cpumask, &cpumask, policy->cpus);
+
 		/*
 		 * Update sampling rate for CPUs whose policy is governed by
 		 * dbs_data. In case of governor_per_policy, only a single
@@ -284,6 +290,10 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 		if (dbs_data != policy->governor_data)
 			continue;
 
+		/*
+		 * Checking this for any CPU should be fine, timers for all of
+		 * them are scheduled together.
+		 */
 		next_sampling = jiffies + usecs_to_jiffies(new_rate);
 		appointed_at = dbs_info->cdbs.timer.expires;
On 29-10-15, 17:57, Viresh Kumar wrote:
Hi Rafael,
Ping !!
On Thursday, October 29, 2015 05:57:19 PM Viresh Kumar wrote:
I like this set overall, except for the useless code shuffling in the first patch. Can you please update this one so it doesn't move code it has no reason to move and resend?
Thanks, Rafael