Hi Rafael,
The aim of this series is to stop managing cpufreq sysfs directories on CPU hotplug.
Currently, on removal of a CPU where 'cpu != policy->cpu', we remove its sysfs directory by removing the soft-link. On removal of policy->cpu, we migrate the sysfs directories to the next CPU in the policy. But if policy->cpu was the last CPU, we remove the policy completely and allocate it again once the CPUs come back.
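Roughly, the removal path today boils down to something like this (simplified sketch only; real function names differ, and last_cpu_in_policy()/migrate_sysfs_to_new_cpu() are hypothetical placeholders):

	/* simplified sketch of the current behaviour on CPU removal */
	if (cpu != policy->cpu) {
		/* this CPU only had a soft-link to the policy directory */
		sysfs_remove_link(&dev->kobj, "cpufreq");
	} else if (!last_cpu_in_policy(policy)) {
		/* policy->cpu owned policy->kobj: move the real directory */
		migrate_sysfs_to_new_cpu(policy);	/* hypothetical helper */
	} else {
		/* last CPU of the policy: free it, reallocate on next online */
		cpufreq_policy_free(policy);
	}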
This has shortcomings:
- Code Complexity
- Slower hotplug
- sysfs file permissions are reset after all policy->cpus are offlined
- CPUFreq stats history lost after all policy->cpus are offlined
- Special management of sysfs stuff during suspend/resume
To make things simple, we can stop playing with sysfs files unless the driver itself is being removed. Also, the policy can be kept intact to be reused later.
The first few patches provide a clean base for the other, *more important*, patches.
Rebased-over: your bleeding edge branch as there were dependencies on my earlier patches.
Pushed here: git://git.linaro.org/people/viresh.kumar/linux.git cpufreq/core/sysfs-v3
V2->V3:
- The first five patches were already applied by you, and the 7th one was sent separately as a fix earlier and got applied. So V2 had 20 patches and V3 has 14.
- Dropped while(1) and used do/while.
- policy->governor is marked NULL only while removing the governor and not when the policy becomes inactive.
- Commit logs/comments updated.
- Applied Acks from Saravana.
V1->V2:
- Dropped the idea of using policy-lists for getting policy for any cpu.
- Also dropped fallback list and its per-cpu variable.
- Stopped cleaning cpufreq_cpu_data and doing list_del(policy) on logical hotplug.
- Added support for physical hotplug of CPUs (untested).
Saravana Kannan (1):
  cpufreq: Track cpu managing sysfs kobjects separately

Viresh Kumar (13):
  cpufreq: Create for_each_{in}active_policy()
  cpufreq: Don't clear cpufreq_cpu_data and policy list for inactive policies
  cpufreq: Get rid of cpufreq_cpu_data_fallback
  cpufreq: Don't traverse all active policies to find policy for a cpu
  cpufreq: Manage governor usage history with 'policy->last_governor'
  cpufreq: Mark policy->governor = NULL for inactive policies
  cpufreq: Don't allow updating inactive-policies from sysfs
  cpufreq: Stop migrating sysfs files on hotplug
  cpufreq: Remove cpufreq_update_policy()
  cpufreq: Initialize policy->kobj while allocating policy
  cpufreq: Call cpufreq_policy_put_kobj() from cpufreq_policy_free()
  cpufreq: Restart governor as soon as possible
  cpufreq: Add support for physical hotplug of CPUs

 drivers/cpufreq/cpufreq.c | 509 ++++++++++++++++++++++++++--------------------
 include/linux/cpufreq.h   |   5 +-
 2 files changed, 290 insertions(+), 224 deletions(-)
policy->cpus is now cleared unconditionally on hotplug-out of a CPU, and it can be checked to know whether a policy is active or not. Create helper routines to iterate over all active/inactive policies, based on the policy->cpus field.

Replace all instances of for_each_policy() with for_each_active_policy() to make them iterate only over active policies. (We haven't yet made the changes needed to keep inactive policies in the same list; that will follow in a later patch.)
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq.c | 82 +++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 73 insertions(+), 9 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 8cf0c0e7aea8..9229e7f81673 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -32,11 +32,75 @@
 #include <trace/events/power.h>
/* Macros to iterate over lists */ -/* Iterate over online CPUs policies */ static LIST_HEAD(cpufreq_policy_list); + +static inline bool policy_is_inactive(struct cpufreq_policy *policy) +{ + return cpumask_empty(policy->cpus); +} + +/* + * Finds Next Acive/Inactive policy + * + * policy: Previous policy. + * active: Looking for active policy (true) or Inactive policy (false). + */ +static struct cpufreq_policy *next_policy(struct cpufreq_policy *policy, + bool active) +{ + do { + if (likely(policy)) + policy = list_next_entry(policy, policy_list); + else + policy = list_first_entry(&cpufreq_policy_list, + typeof(*policy), policy_list); + + /* No more policies */ + if (&policy->policy_list == &cpufreq_policy_list) + return policy; + + /* + * Table to explains logic behind following expression: + * + * Active policy_is_inactive Result + * (policy or next) + * + * 0 0 next + * 0 1 policy + * 1 0 policy + * 1 1 next + */ + } while (!(active ^ policy_is_inactive(policy))); + + return policy; +} + +/* + * Iterate over online CPUs policies + * + * Explanation: + * - expr1: marks __temp NULL and gets the first __active policy. + * - expr2: checks if list is finished, if yes then it sets __policy to the last + * __active policy and returns 0 to end the loop. + * - expr3: preserves last __active policy and gets the next one. + */ +#define __for_each_active_policy(__policy, __temp, __active) \ + for (__temp = NULL, __policy = next_policy(NULL, __active); \ + &__policy->policy_list != &cpufreq_policy_list || \ + ((__policy = __temp) && 0); \ + __temp = __policy, __policy = next_policy(__policy, __active)) + #define for_each_policy(__policy) \ list_for_each_entry(__policy, &cpufreq_policy_list, policy_list)
+/* + * Routines to iterate over active or inactive policies + * __policy: next active/inactive policy will be returned in this. + * __temp: for internal purpose, not to be used by caller. + */ +#define for_each_active_policy(__policy, __temp) __for_each_active_policy(__policy, __temp, true) +#define for_each_inactive_policy(__policy, __temp) __for_each_active_policy(__policy, __temp, false) + /* Iterate over governors */ static LIST_HEAD(cpufreq_governor_list); #define for_each_governor(__governor) \ @@ -1142,7 +1206,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) { unsigned int j, cpu = dev->id; int ret = -ENOMEM; - struct cpufreq_policy *policy; + struct cpufreq_policy *policy, *tpolicy; unsigned long flags; bool recover_policy = cpufreq_suspended;
@@ -1156,7 +1220,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
/* Check if this CPU already has a policy to manage it */ read_lock_irqsave(&cpufreq_driver_lock, flags); - for_each_policy(policy) { + for_each_active_policy(policy, tpolicy) { if (cpumask_test_cpu(cpu, policy->related_cpus)) { read_unlock_irqrestore(&cpufreq_driver_lock, flags); ret = cpufreq_add_policy_cpu(policy, cpu, dev); @@ -1664,7 +1728,7 @@ EXPORT_SYMBOL(cpufreq_generic_suspend); */ void cpufreq_suspend(void) { - struct cpufreq_policy *policy; + struct cpufreq_policy *policy, *tpolicy;
if (!cpufreq_driver) return; @@ -1674,7 +1738,7 @@ void cpufreq_suspend(void)
pr_debug("%s: Suspending Governors\n", __func__);
- for_each_policy(policy) { + for_each_active_policy(policy, tpolicy) { if (__cpufreq_governor(policy, CPUFREQ_GOV_STOP)) pr_err("%s: Failed to stop governor for policy: %p\n", __func__, policy); @@ -1696,7 +1760,7 @@ void cpufreq_suspend(void) */ void cpufreq_resume(void) { - struct cpufreq_policy *policy; + struct cpufreq_policy *policy, *tpolicy;
if (!cpufreq_driver) return; @@ -1708,7 +1772,7 @@ void cpufreq_resume(void)
pr_debug("%s: Resuming Governors\n", __func__);
- for_each_policy(policy) { + for_each_active_policy(policy, tpolicy) { if (cpufreq_driver->resume && cpufreq_driver->resume(policy)) pr_err("%s: Failed to resume driver: %p\n", __func__, policy); @@ -2351,10 +2415,10 @@ static struct notifier_block __refdata cpufreq_cpu_notifier = { static int cpufreq_boost_set_sw(int state) { struct cpufreq_frequency_table *freq_table; - struct cpufreq_policy *policy; + struct cpufreq_policy *policy, *tpolicy; int ret = -EINVAL;
- for_each_policy(policy) { + for_each_active_policy(policy, tpolicy) { freq_table = cpufreq_frequency_get_table(policy->cpu); if (freq_table) { ret = cpufreq_frequency_table_cpuinfo(policy,
On Friday, May 08, 2015 11:53:44 AM Viresh Kumar wrote:
policy->cpus is cleared unconditionally now on hotplug-out of a CPU and it can be checked to know if a policy is active or not. Create helper routines to iterate over all active/inactive policies, based on policy->cpus field.
Replace all instances of for_each_policy() with for_each_active_policy() to make them iterate only for active policies. (We haven't made changes yet to keep inactive policies in the same list, but that will be followed in a later patch).
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org
drivers/cpufreq/cpufreq.c | 82 +++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 73 insertions(+), 9 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 8cf0c0e7aea8..9229e7f81673 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -32,11 +32,75 @@ #include <trace/events/power.h> /* Macros to iterate over lists */ -/* Iterate over online CPUs policies */ static LIST_HEAD(cpufreq_policy_list);
+static inline bool policy_is_inactive(struct cpufreq_policy *policy) +{
- return cpumask_empty(policy->cpus);
+}
+/*
- Finds Next Acive/Inactive policy
- policy: Previous policy.
- active: Looking for active policy (true) or Inactive policy (false).
- */
A proper kerneldoc, please.
+static struct cpufreq_policy *next_policy(struct cpufreq_policy *policy,
bool active)
+{
- do {
if (likely(policy))
The likely() thing looks like an attempted overoptimization. I wouldn't use it.
policy = list_next_entry(policy, policy_list);
else
policy = list_first_entry(&cpufreq_policy_list,
typeof(*policy), policy_list);
/* No more policies */
if (&policy->policy_list == &cpufreq_policy_list)
Why does this ignore the 'active' arg?
return policy;
/*
* Table to explains logic behind following expression:
*
* Active policy_is_inactive Result
* (policy or next)
*
* 0 0 next
* 0 1 policy
* 1 0 policy
* 1 1 next
*/
- } while (!(active ^ policy_is_inactive(policy)));
- return policy;
+}
+/*
- Iterate over online CPUs policies
"CPU policies" would look better IMO.
- Explanation:
- expr1: marks __temp NULL and gets the first __active policy.
- expr2: checks if list is finished, if yes then it sets __policy to the last
__active policy and returns 0 to end the loop.
- expr3: preserves last __active policy and gets the next one.
I'd expect this to describe the arguments rather. Also the code is rather difficult to follow.
The XOR thing is rather an unnecessary trick (it acts on bools but quite directly assumes them to be integers with the lowest bit representing the value) and you can do "active == !policy_is_inactive(policy)" instead.
So if you separate the last check as
static bool suitable_policy(struct cpufreq_policy *policy, bool active)
{
	return active == !policy_is_inactive(policy);
}
then you can introduce
static struct cpufreq_policy *first_policy(bool active)
{
	struct cpufreq_policy *policy;

	policy = list_first_entry(&cpufreq_policy_list, typeof(*policy),
				  policy_list);
	if (!suitable_policy(policy, active))
		policy = next_policy(policy, active);

	return policy;
}
and then next_policy() becomes much simpler and, moreover, this:
- */
+#define __for_each_active_policy(__policy, __temp, __active) \
- for (__temp = NULL, __policy = next_policy(NULL, __active); \
&__policy->policy_list != &cpufreq_policy_list || \
((__policy = __temp) && 0); \
__temp = __policy, __policy = next_policy(__policy, __active))
can be rewritten as:
#define __for_each_active_policy(__policy, __active)		\
	for (__policy = first_policy(__active); __policy;	\
	     __policy = next_policy(__policy, __active))
and you don't need to explain what it does even. Of course, next_policy() has to return 'false' when it can't find anything suitable, but that shouldn't be too difficult to arrange I suppose?
Or if you need __temp because the object pointed to by __policy can go away, you can follow the design of list_for_each_entry_safe() here.
#define for_each_policy(__policy) \ list_for_each_entry(__policy, &cpufreq_policy_list, policy_list) +/*
- Routines to iterate over active or inactive policies
- __policy: next active/inactive policy will be returned in this.
- __temp: for internal purpose, not to be used by caller.
- */
+#define for_each_active_policy(__policy, __temp) __for_each_active_policy(__policy, __temp, true) +#define for_each_inactive_policy(__policy, __temp) __for_each_active_policy(__policy, __temp, false)
I must admit to having a problem with this definition.
#define for_each_inactive_something(X) __for_each_active_something(X)
looks really confusing.
Maybe rename __for_each_active_policy() to __for_each_suitable_policy()?
/* Iterate over governors */ static LIST_HEAD(cpufreq_governor_list); #define for_each_governor(__governor) \ @@ -1142,7 +1206,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) { unsigned int j, cpu = dev->id; int ret = -ENOMEM;
- struct cpufreq_policy *policy;
- struct cpufreq_policy *policy, *tpolicy;
I'd just use "tmp" or "aux" instead of "tpolicy".
unsigned long flags; bool recover_policy = cpufreq_suspended; @@ -1156,7 +1220,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) /* Check if this CPU already has a policy to manage it */ read_lock_irqsave(&cpufreq_driver_lock, flags);
- for_each_policy(policy) {
- for_each_active_policy(policy, tpolicy) { if (cpumask_test_cpu(cpu, policy->related_cpus)) { read_unlock_irqrestore(&cpufreq_driver_lock, flags); ret = cpufreq_add_policy_cpu(policy, cpu, dev);
@@ -1664,7 +1728,7 @@ EXPORT_SYMBOL(cpufreq_generic_suspend); */ void cpufreq_suspend(void) {
- struct cpufreq_policy *policy;
- struct cpufreq_policy *policy, *tpolicy;
if (!cpufreq_driver) return; @@ -1674,7 +1738,7 @@ void cpufreq_suspend(void) pr_debug("%s: Suspending Governors\n", __func__);
- for_each_policy(policy) {
- for_each_active_policy(policy, tpolicy) { if (__cpufreq_governor(policy, CPUFREQ_GOV_STOP)) pr_err("%s: Failed to stop governor for policy: %p\n", __func__, policy);
@@ -1696,7 +1760,7 @@ void cpufreq_suspend(void) */ void cpufreq_resume(void) {
- struct cpufreq_policy *policy;
- struct cpufreq_policy *policy, *tpolicy;
if (!cpufreq_driver) return; @@ -1708,7 +1772,7 @@ void cpufreq_resume(void) pr_debug("%s: Resuming Governors\n", __func__);
- for_each_policy(policy) {
- for_each_active_policy(policy, tpolicy) { if (cpufreq_driver->resume && cpufreq_driver->resume(policy)) pr_err("%s: Failed to resume driver: %p\n", __func__, policy);
@@ -2351,10 +2415,10 @@ static struct notifier_block __refdata cpufreq_cpu_notifier = { static int cpufreq_boost_set_sw(int state) { struct cpufreq_frequency_table *freq_table;
- struct cpufreq_policy *policy;
- struct cpufreq_policy *policy, *tpolicy; int ret = -EINVAL;
- for_each_policy(policy) {
- for_each_active_policy(policy, tpolicy) { freq_table = cpufreq_frequency_get_table(policy->cpu); if (freq_table) { ret = cpufreq_frequency_table_cpuinfo(policy,
Thanks for the really detailed review ..
On 9 May 2015 at 03:16, Rafael J. Wysocki rjw@rjwysocki.net wrote:
On Friday, May 08, 2015 11:53:44 AM Viresh Kumar wrote:
+/*
- Finds Next Acive/Inactive policy
- policy: Previous policy.
- active: Looking for active policy (true) or Inactive policy (false).
- */
A proper kerneldoc, please.
I thought it's an internal function which we may not want to put into kernel-doc, and so did it this way. Will take care in the future.
+static struct cpufreq_policy *next_policy(struct cpufreq_policy *policy,
bool active)
+{
do {
if (likely(policy))
The likely() thing looks like an attempted overoptimization. I wouldn't use it.
ok
policy = list_next_entry(policy, policy_list);
else
policy = list_first_entry(&cpufreq_policy_list,
typeof(*policy), policy_list);
/* No more policies */
if (&policy->policy_list == &cpufreq_policy_list)
Why does this ignore the 'active' arg?
Because what we are checking here is whether the list is finished or not. The 'policy' we are returning here is not a real policy, but a not-to-be-used structure over the list HEAD.
- Iterate over online CPUs policies
"CPU policies" would look better IMO.
Sure.
- Explanation:
- expr1: marks __temp NULL and gets the first __active policy.
- expr2: checks if list is finished, if yes then it sets __policy to the last
__active policy and returns 0 to end the loop.
- expr3: preserves last __active policy and gets the next one.
I'd expect this to describe the arguments rather. Also the code is rather difficult to follow.
The XOR thing is rather an unnecessary trick (it acts on bools but quite directly assumes them to be integers with the lowest bit representing the value) and you can do "active == !policy_is_inactive(policy)" instead.
So if you separate the last check as
static bool suitable_policy(struct cpufreq_policy *policy, bool active) { return active == !policy_is_inactive(policy); }
then you can introduce
static struct cpufreq_policy *first_policy(bool active) { struct cpufreq_policy *policy;
policy = list_first_entry(&cpufreq_policy_list, typeof(*policy), policy_list); if (!suitable_policy(policy, active)) policy = next_policy(policy, active); return policy;
}
and then next_policy() becomes much simpler and, moreover, this:
- */
+#define __for_each_active_policy(__policy, __temp, __active) \
for (__temp = NULL, __policy = next_policy(NULL, __active); \
&__policy->policy_list != &cpufreq_policy_list || \
((__policy = __temp) && 0); \
__temp = __policy, __policy = next_policy(__policy, __active))
can be rewritten as:
#define __for_each_active_policy(__policy, __active) \ for (__policy = first_policy(__active); __policy; \ __policy = next_policy(__policy, __active))
and you don't need to explain what it does even. Of course, next_policy() has to return 'false' when it can't find anything suitable, but that shouldn't be too difficult to arrange I suppose?
Or if you need __temp because the object pointed to by __policy can go away, you can follow the design of list_for_each_entry_safe() here.
#define for_each_policy(__policy) \ list_for_each_entry(__policy, &cpufreq_policy_list, policy_list)
+/*
- Routines to iterate over active or inactive policies
- __policy: next active/inactive policy will be returned in this.
- __temp: for internal purpose, not to be used by caller.
- */
+#define for_each_active_policy(__policy, __temp) __for_each_active_policy(__policy, __temp, true) +#define for_each_inactive_policy(__policy, __temp) __for_each_active_policy(__policy, __temp, false)
I must admit to having a problem with this definition.
#define for_each_inactive_something(X) __for_each_active_something(X)
looks really confusing.
Maybe rename __for_each_active_policy() to __for_each_suitable_policy()?
/* Iterate over governors */ static LIST_HEAD(cpufreq_governor_list); #define for_each_governor(__governor) \ @@ -1142,7 +1206,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) { unsigned int j, cpu = dev->id; int ret = -ENOMEM;
struct cpufreq_policy *policy;
struct cpufreq_policy *policy, *tpolicy;
I'd just use "tmp" or "aux" instead of "tpolicy".
All accepted..
On 08-05-15, 23:46, Rafael J. Wysocki wrote:
On Friday, May 08, 2015 11:53:44 AM Viresh Kumar wrote:
+#define __for_each_active_policy(__policy, __temp, __active) \
- for (__temp = NULL, __policy = next_policy(NULL, __active); \
&__policy->policy_list != &cpufreq_policy_list || \
((__policy = __temp) && 0); \
__temp = __policy, __policy = next_policy(__policy, __active))
can be rewritten as:
#define __for_each_active_policy(__policy, __active) \ for (__policy = first_policy(__active); __policy; \ __policy = next_policy(__policy, __active))
and you don't need to explain what it does even. Of course, next_policy() has to return 'false' when it can't find anything suitable, but that shouldn't be too difficult to arrange I suppose?
Or if you need __temp because the object pointed to by __policy can go away, you can follow the design of list_for_each_entry_safe() here.
I wasn't using __temp to be safe, but to make sure that the loop points to an 'active' policy at the end. But that isn't required anymore, as the only call site that required this is fixed in a better way:
c75de0ac0756 ("cpufreq: Schedule work for the first-online CPU on resume")
policy->cpus is now cleared unconditionally on hotplug-out of a CPU, and it can be checked to know whether a policy is active or not. Create helper routines to iterate over all active/inactive policies, based on the policy->cpus field.

Replace all instances of for_each_policy() with for_each_active_policy() to make them iterate only over active policies. (We haven't yet made the changes needed to keep inactive policies in the same list; that will follow in a later patch.)
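For illustration, a caller-side use of the new helpers would look roughly like this (usage sketch only; the helpers themselves are added in the diff below):

	struct cpufreq_policy *policy;

	/* visits only policies whose policy->cpus mask is non-empty */
	for_each_active_policy(policy)
		pr_debug("CPU%u heads an active policy\n", policy->cpu);

	/* and the complementary walk over policies with all CPUs offline */
	for_each_inactive_policy(policy)
		pr_debug("policy for CPU%u is inactive\n", policy->cpu);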
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
Hi Rafael,
I have changed this patch based on your suggestions. Because of this, 3 more patches had rebase conflicts, and I am sending them again as well (in reply to the original messages).

Because only 4 patches were affected, I am not sending the whole series again; please let me know if you want the whole series to be sent again.
V3->V4:
- Remove __temp from the arguments of for_each_[in]active_policies.
- Simplified macros/next_policy, etc.
- Other patches sent with this one, rebased on top of this one:
  - [PATCH V4 04/14] cpufreq: Don't traverse all active policies to find
  - [PATCH V4 05/14] cpufreq: Manage governor usage history with
  - [PATCH V4 06/14] cpufreq: Mark policy->governor = NULL for inactive
 drivers/cpufreq/cpufreq.c | 66 ++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 59 insertions(+), 7 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 8cf0c0e7aea8..74d9fcbbe4f9 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -31,10 +31,62 @@
 #include <linux/tick.h>
 #include <trace/events/power.h>
-/* Macros to iterate over lists */
-/* Iterate over online CPUs policies */
 static LIST_HEAD(cpufreq_policy_list);
-#define for_each_policy(__policy)			\
+
+static inline bool policy_is_inactive(struct cpufreq_policy *policy)
+{
+	return cpumask_empty(policy->cpus);
+}
+
+static bool suitable_policy(struct cpufreq_policy *policy, bool active)
+{
+	return active == !policy_is_inactive(policy);
+}
+
+/* Finds Next Active/Inactive policy */
+static struct cpufreq_policy *next_policy(struct cpufreq_policy *policy,
+					  bool active)
+{
+	do {
+		policy = list_next_entry(policy, policy_list);
+
+		/* No more policies in the list */
+		if (&policy->policy_list == &cpufreq_policy_list)
+			return NULL;
+	} while (!suitable_policy(policy, active));
+
+	return policy;
+}
+
+static struct cpufreq_policy *first_policy(bool active)
+{
+	struct cpufreq_policy *policy;
+
+	/* No policies in the list */
+	if (list_empty(&cpufreq_policy_list))
+		return NULL;
+
+	policy = list_first_entry(&cpufreq_policy_list, typeof(*policy),
+				  policy_list);
+
+	if (!suitable_policy(policy, active))
+		policy = next_policy(policy, active);
+
+	return policy;
+}
+
+/* Macros to iterate over CPU policies */
+#define for_each_suitable_policy(__policy, __active)	\
+	for (__policy = first_policy(__active);		\
+	     __policy;					\
+	     __policy = next_policy(__policy, __active))
+
+#define for_each_active_policy(__policy)		\
+	for_each_suitable_policy(__policy, true)
+#define for_each_inactive_policy(__policy)		\
+	for_each_suitable_policy(__policy, false)
+
+#define for_each_policy(__policy)			\
 	list_for_each_entry(__policy, &cpufreq_policy_list, policy_list)
/* Iterate over governors */ @@ -1156,7 +1208,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
/* Check if this CPU already has a policy to manage it */ read_lock_irqsave(&cpufreq_driver_lock, flags); - for_each_policy(policy) { + for_each_active_policy(policy) { if (cpumask_test_cpu(cpu, policy->related_cpus)) { read_unlock_irqrestore(&cpufreq_driver_lock, flags); ret = cpufreq_add_policy_cpu(policy, cpu, dev); @@ -1674,7 +1726,7 @@ void cpufreq_suspend(void)
pr_debug("%s: Suspending Governors\n", __func__);
- for_each_policy(policy) { + for_each_active_policy(policy) { if (__cpufreq_governor(policy, CPUFREQ_GOV_STOP)) pr_err("%s: Failed to stop governor for policy: %p\n", __func__, policy); @@ -1708,7 +1760,7 @@ void cpufreq_resume(void)
pr_debug("%s: Resuming Governors\n", __func__);
- for_each_policy(policy) { + for_each_active_policy(policy) { if (cpufreq_driver->resume && cpufreq_driver->resume(policy)) pr_err("%s: Failed to resume driver: %p\n", __func__, policy); @@ -2354,7 +2406,7 @@ static int cpufreq_boost_set_sw(int state) struct cpufreq_policy *policy; int ret = -EINVAL;
- for_each_policy(policy) { + for_each_active_policy(policy) { freq_table = cpufreq_frequency_get_table(policy->cpu); if (freq_table) { ret = cpufreq_frequency_table_cpuinfo(policy,
Now that we can check policy->cpus to find whether a policy is active or not, we don't need to clean cpufreq_cpu_data and delete the policy from the list on light-weight tear-down of policies (as in suspend).

To make it consistent and clean, set cpufreq_cpu_data for all related CPUs when the policy is first created, and clean it only when the policy is freed.

Also update cpufreq_cpu_get_raw() to check whether the cpu is part of the policy->cpus mask, so that we don't end up getting policies for offline CPUs.
In order to make sure that no users of 'policy' are using an inactive policy, use cpufreq_cpu_get_raw() instead of directly accessing cpufreq_cpu_data.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq.c | 79 ++++++++++++++++++++---------------------------
 1 file changed, 34 insertions(+), 45 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 9229e7f81673..c802ded03595 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -251,10 +251,18 @@ int cpufreq_generic_init(struct cpufreq_policy *policy, } EXPORT_SYMBOL_GPL(cpufreq_generic_init);
-unsigned int cpufreq_generic_get(unsigned int cpu) +/* Only for cpufreq core internal use */ +struct cpufreq_policy *cpufreq_cpu_get_raw(unsigned int cpu) { struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
+ return policy && cpumask_test_cpu(cpu, policy->cpus) ? policy : NULL; +} + +unsigned int cpufreq_generic_get(unsigned int cpu) +{ + struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); + if (!policy || IS_ERR(policy->clk)) { pr_err("%s: No %s associated to cpu: %d\n", __func__, policy ? "clk" : "policy", cpu); @@ -265,12 +273,6 @@ unsigned int cpufreq_generic_get(unsigned int cpu) } EXPORT_SYMBOL_GPL(cpufreq_generic_get);
-/* Only for cpufreq core internal use */ -struct cpufreq_policy *cpufreq_cpu_get_raw(unsigned int cpu) -{ - return per_cpu(cpufreq_cpu_data, cpu); -} - /** * cpufreq_cpu_get: returns policy for a cpu and marks it busy. * @@ -304,7 +306,7 @@ struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
if (cpufreq_driver) { /* get the CPU */ - policy = per_cpu(cpufreq_cpu_data, cpu); + policy = cpufreq_cpu_get_raw(cpu); if (policy) kobject_get(&policy->kobj); } @@ -1054,7 +1056,6 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu, struct device *dev) { int ret = 0; - unsigned long flags;
/* Has this CPU been taken care of already? */ if (cpumask_test_cpu(cpu, policy->cpus)) @@ -1069,13 +1070,7 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, }
down_write(&policy->rwsem); - - write_lock_irqsave(&cpufreq_driver_lock, flags); - cpumask_set_cpu(cpu, policy->cpus); - per_cpu(cpufreq_cpu_data, cpu) = policy; - write_unlock_irqrestore(&cpufreq_driver_lock, flags); - up_write(&policy->rwsem);
if (has_target()) { @@ -1166,6 +1161,17 @@ static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy)
static void cpufreq_policy_free(struct cpufreq_policy *policy) { + unsigned long flags; + int cpu; + + /* Remove policy from list */ + write_lock_irqsave(&cpufreq_driver_lock, flags); + list_del(&policy->policy_list); + + for_each_cpu(cpu, policy->related_cpus) + per_cpu(cpufreq_cpu_data, cpu) = NULL; + write_unlock_irqrestore(&cpufreq_driver_lock, flags); + free_cpumask_var(policy->related_cpus); free_cpumask_var(policy->cpus); kfree(policy); @@ -1287,12 +1293,12 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) __func__, ret); goto err_init_policy_kobj; } - }
- write_lock_irqsave(&cpufreq_driver_lock, flags); - for_each_cpu(j, policy->cpus) - per_cpu(cpufreq_cpu_data, j) = policy; - write_unlock_irqrestore(&cpufreq_driver_lock, flags); + write_lock_irqsave(&cpufreq_driver_lock, flags); + for_each_cpu(j, policy->related_cpus) + per_cpu(cpufreq_cpu_data, j) = policy; + write_unlock_irqrestore(&cpufreq_driver_lock, flags); + }
if (cpufreq_driver->get && !cpufreq_driver->setpolicy) { policy->cur = cpufreq_driver->get(policy->cpu); @@ -1351,11 +1357,11 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) goto err_out_unregister; blocking_notifier_call_chain(&cpufreq_policy_notifier_list, CPUFREQ_CREATE_POLICY, policy); - }
- write_lock_irqsave(&cpufreq_driver_lock, flags); - list_add(&policy->policy_list, &cpufreq_policy_list); - write_unlock_irqrestore(&cpufreq_driver_lock, flags); + write_lock_irqsave(&cpufreq_driver_lock, flags); + list_add(&policy->policy_list, &cpufreq_policy_list); + write_unlock_irqrestore(&cpufreq_driver_lock, flags); + }
cpufreq_init_policy(policy);
@@ -1379,11 +1385,6 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
err_out_unregister: err_get_freq: - write_lock_irqsave(&cpufreq_driver_lock, flags); - for_each_cpu(j, policy->cpus) - per_cpu(cpufreq_cpu_data, j) = NULL; - write_unlock_irqrestore(&cpufreq_driver_lock, flags); - if (!recover_policy) { kobject_put(&policy->kobj); wait_for_completion(&policy->kobj_unregister); @@ -1419,7 +1420,7 @@ static int __cpufreq_remove_dev_prepare(struct device *dev,
write_lock_irqsave(&cpufreq_driver_lock, flags);
- policy = per_cpu(cpufreq_cpu_data, cpu); + policy = cpufreq_cpu_get_raw(cpu);
/* Save the policy somewhere when doing a light-weight tear-down */ if (cpufreq_suspended) @@ -1477,15 +1478,9 @@ static int __cpufreq_remove_dev_prepare(struct device *dev, static int __cpufreq_remove_dev_finish(struct device *dev, struct subsys_interface *sif) { - unsigned int cpu = dev->id, cpus; + unsigned int cpu = dev->id; int ret; - unsigned long flags; - struct cpufreq_policy *policy; - - write_lock_irqsave(&cpufreq_driver_lock, flags); - policy = per_cpu(cpufreq_cpu_data, cpu); - per_cpu(cpufreq_cpu_data, cpu) = NULL; - write_unlock_irqrestore(&cpufreq_driver_lock, flags); + struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
if (!policy) { pr_debug("%s: No cpu_data found\n", __func__); @@ -1493,12 +1488,11 @@ static int __cpufreq_remove_dev_finish(struct device *dev, }
down_write(&policy->rwsem); - cpus = cpumask_weight(policy->cpus); cpumask_clear_cpu(cpu, policy->cpus); up_write(&policy->rwsem);
/* If cpu is last user of policy, free policy */ - if (cpus == 1) { + if (policy_is_inactive(policy)) { if (has_target()) { ret = __cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT); @@ -1520,11 +1514,6 @@ static int __cpufreq_remove_dev_finish(struct device *dev, if (cpufreq_driver->exit) cpufreq_driver->exit(policy);
- /* Remove policy from list of active policies */ - write_lock_irqsave(&cpufreq_driver_lock, flags); - list_del(&policy->policy_list); - write_unlock_irqrestore(&cpufreq_driver_lock, flags); - if (!cpufreq_suspended) cpufreq_policy_free(policy); } else if (has_target()) {
We can extract the same information from cpufreq_cpu_data, as it is also available for inactive policies now, and so we don't need cpufreq_cpu_data_fallback anymore.
Also add a WARN_ON() for the case where we try to restore from an active policy.
Acked-by: Saravana Kannan <skannan@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq.c | 25 ++++++-------------------
 1 file changed, 6 insertions(+), 19 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index c802ded03595..b60311fe207f 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -113,7 +113,6 @@ static LIST_HEAD(cpufreq_governor_list); */ static struct cpufreq_driver *cpufreq_driver; static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data); -static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data_fallback); static DEFINE_RWLOCK(cpufreq_driver_lock); DEFINE_MUTEX(cpufreq_governor_lock);
@@ -1093,13 +1092,14 @@ static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu) unsigned long flags;
read_lock_irqsave(&cpufreq_driver_lock, flags); - - policy = per_cpu(cpufreq_cpu_data_fallback, cpu); - + policy = per_cpu(cpufreq_cpu_data, cpu); read_unlock_irqrestore(&cpufreq_driver_lock, flags);
- if (policy) + if (likely(policy)) { + /* Policy should be inactive here */ + WARN_ON(!policy_is_inactive(policy)); policy->governor = NULL; + }
return policy; } @@ -1395,11 +1395,8 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) if (cpufreq_driver->exit) cpufreq_driver->exit(policy); err_set_policy_cpu: - if (recover_policy) { - /* Do not leave stale fallback data behind. */ - per_cpu(cpufreq_cpu_data_fallback, cpu) = NULL; + if (recover_policy) cpufreq_policy_put_kobj(policy); - } cpufreq_policy_free(policy);
nomem_out: @@ -1413,21 +1410,11 @@ static int __cpufreq_remove_dev_prepare(struct device *dev, { unsigned int cpu = dev->id, cpus; int ret; - unsigned long flags; struct cpufreq_policy *policy;
pr_debug("%s: unregistering CPU %u\n", __func__, cpu);
- write_lock_irqsave(&cpufreq_driver_lock, flags); - policy = cpufreq_cpu_get_raw(cpu); - - /* Save the policy somewhere when doing a light-weight tear-down */ - if (cpufreq_suspended) - per_cpu(cpufreq_cpu_data_fallback, cpu) = policy; - - write_unlock_irqrestore(&cpufreq_driver_lock, flags); - if (!policy) { pr_debug("%s: No cpu_data found\n", __func__); return -EINVAL;
We reach here while adding policy for a CPU and enter into the 'if' block only if a policy already exists for the CPU.
As cpufreq_cpu_data is set for all policy->related_cpus now, when the policy is first added, we can use that to find the CPU's policy instead of traversing the list of all active policies.
Acked-by: Saravana Kannan <skannan@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index b60311fe207f..a3eb76969969 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1212,7 +1212,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) { unsigned int j, cpu = dev->id; int ret = -ENOMEM; - struct cpufreq_policy *policy, *tpolicy; + struct cpufreq_policy *policy; unsigned long flags; bool recover_policy = cpufreq_suspended;
@@ -1225,16 +1225,13 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) return 0;
/* Check if this CPU already has a policy to manage it */ - read_lock_irqsave(&cpufreq_driver_lock, flags); - for_each_active_policy(policy, tpolicy) { - if (cpumask_test_cpu(cpu, policy->related_cpus)) { - read_unlock_irqrestore(&cpufreq_driver_lock, flags); - ret = cpufreq_add_policy_cpu(policy, cpu, dev); - up_read(&cpufreq_rwsem); - return ret; - } + policy = per_cpu(cpufreq_cpu_data, cpu); + if (policy && !policy_is_inactive(policy)) { + WARN_ON(!cpumask_test_cpu(cpu, policy->related_cpus)); + ret = cpufreq_add_policy_cpu(policy, cpu, dev); + up_read(&cpufreq_rwsem); + return ret; } - read_unlock_irqrestore(&cpufreq_driver_lock, flags);
/* * Restore the saved policy when doing light-weight init and fall back
We reach here while adding policy for a CPU and enter into the 'if' block only if a policy already exists for the CPU.
As cpufreq_cpu_data is set for all policy->related_cpus now, when the policy is first added, we can use that to find the CPU's policy instead of traversing the list of all active policies.
Acked-by: Saravana Kannan <skannan@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
V4->V5:
- Rebased on top of [V4 1/14]
 drivers/cpufreq/cpufreq.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index eb0c3a802b14..e6a63d6ba6f1 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1213,16 +1213,13 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) return 0;
/* Check if this CPU already has a policy to manage it */ - read_lock_irqsave(&cpufreq_driver_lock, flags); - for_each_active_policy(policy) { - if (cpumask_test_cpu(cpu, policy->related_cpus)) { - read_unlock_irqrestore(&cpufreq_driver_lock, flags); - ret = cpufreq_add_policy_cpu(policy, cpu, dev); - up_read(&cpufreq_rwsem); - return ret; - } + policy = per_cpu(cpufreq_cpu_data, cpu); + if (policy && !policy_is_inactive(policy)) { + WARN_ON(!cpumask_test_cpu(cpu, policy->related_cpus)); + ret = cpufreq_add_policy_cpu(policy, cpu, dev); + up_read(&cpufreq_rwsem); + return ret; } - read_unlock_irqrestore(&cpufreq_driver_lock, flags);
/* * Restore the saved policy when doing light-weight init and fall back
The history of which governor was used last is common to all CPUs within a policy, and maintaining it per-cpu isn't the best approach for sure.
Apart from wasting memory, this also increases the complexity of managing this data structure as it has to be updated for all CPUs.
To make that somewhat simpler, let's store this information in a new field 'last_governor' in struct cpufreq_policy and update it on removal of the last CPU of a policy.
As a side effect, it also solves an old problem. Consider a system with two clusters, 0 and 1, and one policy per cluster.

Cluster 0: CPU0 and CPU1.
Cluster 1: CPU2 and CPU3.

- CPU2 is first brought online, and the governor is set to performance (the default, as cpufreq_cpu_governor wasn't set).
- Governor is changed to ondemand.
- CPU2 is taken offline and cpufreq_cpu_governor is updated for CPU2.
- CPU3 is brought online.
- Because cpufreq_cpu_governor wasn't set for CPU3, the default governor performance is picked for CPU3.
This patch fixes the bug, as we now have a single variable to update per policy.
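In short, the idea is (sketch only, mirroring the hunks in the diff below):

	/* on removal of the last CPU of a policy */
	if (has_target() && cpumask_weight(policy->cpus) == 1)
		strncpy(policy->last_governor, policy->governor->name,
			CPUFREQ_NAME_LEN);

	/* ... and on re-initialization, restore the governor used before hotplug */
	gov = find_governor(policy->last_governor);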
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq.c | 30 +++++++++++++++---------------
 include/linux/cpufreq.h   |  1 +
 2 files changed, 16 insertions(+), 15 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index a3eb76969969..5e87679ec6d1 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -116,9 +116,6 @@ static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data); static DEFINE_RWLOCK(cpufreq_driver_lock); DEFINE_MUTEX(cpufreq_governor_lock);
-/* This one keeps track of the previously set governor of a removed CPU */ -static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor); - /* Flag to suspend/resume CPUFreq governors */ static bool cpufreq_suspended;
@@ -1029,7 +1026,7 @@ static void cpufreq_init_policy(struct cpufreq_policy *policy) memcpy(&new_policy, policy, sizeof(*policy));
/* Update governor of new_policy to the governor used before hotplug */ - gov = find_governor(per_cpu(cpufreq_cpu_governor, policy->cpu)); + gov = find_governor(policy->last_governor); if (gov) pr_debug("Restoring governor %s for cpu %d\n", policy->governor->name, policy->cpu); @@ -1423,14 +1420,15 @@ static int __cpufreq_remove_dev_prepare(struct device *dev, pr_err("%s: Failed to stop governor\n", __func__); return ret; } - - strncpy(per_cpu(cpufreq_cpu_governor, cpu), - policy->governor->name, CPUFREQ_NAME_LEN); }
- down_read(&policy->rwsem); + down_write(&policy->rwsem); cpus = cpumask_weight(policy->cpus); - up_read(&policy->rwsem); + + if (has_target() && cpus == 1) + strncpy(policy->last_governor, policy->governor->name, + CPUFREQ_NAME_LEN); + up_write(&policy->rwsem);
if (cpu != policy->cpu) { sysfs_remove_link(&dev->kobj, "cpufreq"); @@ -2147,7 +2145,8 @@ EXPORT_SYMBOL_GPL(cpufreq_register_governor);
void cpufreq_unregister_governor(struct cpufreq_governor *governor) { - int cpu; + struct cpufreq_policy *policy, *tpolicy; + unsigned long flags;
if (!governor) return; @@ -2155,12 +2154,13 @@ void cpufreq_unregister_governor(struct cpufreq_governor *governor) if (cpufreq_disabled()) return;
- for_each_present_cpu(cpu) { - if (cpu_online(cpu)) - continue; - if (!strcmp(per_cpu(cpufreq_cpu_governor, cpu), governor->name)) - strcpy(per_cpu(cpufreq_cpu_governor, cpu), "\0"); + /* clear last_governor for all inactive policies */ + read_lock_irqsave(&cpufreq_driver_lock, flags); + for_each_inactive_policy(policy, tpolicy) { + if (!strcmp(policy->last_governor, governor->name)) + strcpy(policy->last_governor, "\0"); } + read_unlock_irqrestore(&cpufreq_driver_lock, flags);
mutex_lock(&cpufreq_governor_mutex); list_del(&governor->governor_list); diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index 2ee4888c1f47..48e37c07eb84 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -80,6 +80,7 @@ struct cpufreq_policy { struct cpufreq_governor *governor; /* see below */ void *governor_data; bool governor_enabled; /* governor start/stop flag */ + char last_governor[CPUFREQ_NAME_LEN]; /* last governor used */
struct work_struct update; /* if update_policy() needs to be * called, but you're in IRQ context */
The history of which governor was used last is common to all CPUs within a policy, and maintaining it per-cpu isn't the best approach for sure.
Apart from wasting memory, this also increases the complexity of managing this data structure as it has to be updated for all CPUs.
To make that somewhat simpler, let's store this information in a new field 'last_governor' in struct cpufreq_policy and update it on removal of the last CPU of a policy.
As a side effect, it also solves an old problem. Consider a system with two clusters, 0 and 1, and one policy per cluster.

Cluster 0: CPU0 and CPU1.
Cluster 1: CPU2 and CPU3.

- CPU2 is first brought online, and the governor is set to performance (the default, as cpufreq_cpu_governor wasn't set).
- Governor is changed to ondemand.
- CPU2 is taken offline and cpufreq_cpu_governor is updated for CPU2.
- CPU3 is brought online.
- Because cpufreq_cpu_governor wasn't set for CPU3, the default governor performance is picked for CPU3.
This patch fixes the bug, as we now have a single variable to update per policy.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
V4->V5:
- Rebased on top of [V4 1/14]
 drivers/cpufreq/cpufreq.c | 30 +++++++++++++++---------------
 include/linux/cpufreq.h   |  1 +
 2 files changed, 16 insertions(+), 15 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index e6a63d6ba6f1..16275ba6428e 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -104,9 +104,6 @@ static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data); static DEFINE_RWLOCK(cpufreq_driver_lock); DEFINE_MUTEX(cpufreq_governor_lock);
-/* This one keeps track of the previously set governor of a removed CPU */ -static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor); - /* Flag to suspend/resume CPUFreq governors */ static bool cpufreq_suspended;
@@ -1017,7 +1014,7 @@ static void cpufreq_init_policy(struct cpufreq_policy *policy) memcpy(&new_policy, policy, sizeof(*policy));
/* Update governor of new_policy to the governor used before hotplug */ - gov = find_governor(per_cpu(cpufreq_cpu_governor, policy->cpu)); + gov = find_governor(policy->last_governor); if (gov) pr_debug("Restoring governor %s for cpu %d\n", policy->governor->name, policy->cpu); @@ -1411,14 +1408,15 @@ static int __cpufreq_remove_dev_prepare(struct device *dev, pr_err("%s: Failed to stop governor\n", __func__); return ret; } - - strncpy(per_cpu(cpufreq_cpu_governor, cpu), - policy->governor->name, CPUFREQ_NAME_LEN); }
- down_read(&policy->rwsem); + down_write(&policy->rwsem); cpus = cpumask_weight(policy->cpus); - up_read(&policy->rwsem); + + if (has_target() && cpus == 1) + strncpy(policy->last_governor, policy->governor->name, + CPUFREQ_NAME_LEN); + up_write(&policy->rwsem);
if (cpu != policy->cpu) { sysfs_remove_link(&dev->kobj, "cpufreq"); @@ -2135,7 +2133,8 @@ EXPORT_SYMBOL_GPL(cpufreq_register_governor);
void cpufreq_unregister_governor(struct cpufreq_governor *governor) { - int cpu; + struct cpufreq_policy *policy; + unsigned long flags;
if (!governor) return; @@ -2143,12 +2142,13 @@ void cpufreq_unregister_governor(struct cpufreq_governor *governor) if (cpufreq_disabled()) return;
- for_each_present_cpu(cpu) { - if (cpu_online(cpu)) - continue; - if (!strcmp(per_cpu(cpufreq_cpu_governor, cpu), governor->name)) - strcpy(per_cpu(cpufreq_cpu_governor, cpu), "\0"); + /* clear last_governor for all inactive policies */ + read_lock_irqsave(&cpufreq_driver_lock, flags); + for_each_inactive_policy(policy) { + if (!strcmp(policy->last_governor, governor->name)) + strcpy(policy->last_governor, "\0"); } + read_unlock_irqrestore(&cpufreq_driver_lock, flags);
mutex_lock(&cpufreq_governor_mutex); list_del(&governor->governor_list); diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index 2ee4888c1f47..48e37c07eb84 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -80,6 +80,7 @@ struct cpufreq_policy { struct cpufreq_governor *governor; /* see below */ void *governor_data; bool governor_enabled; /* governor start/stop flag */ + char last_governor[CPUFREQ_NAME_LEN]; /* last governor used */
struct work_struct update; /* if update_policy() needs to be * called, but you're in IRQ context */
Later commits would change the way policies are managed today. Policies wouldn't be freed on cpu hotplug (currently they aren't freed on suspend), and while the CPU is offline, the sysfs cpufreq files would still be present.
Because we don't mark policy->governor as NULL, it still contains a pointer to the last used governor. If the governor is removed while all the CPUs of a policy are hotplugged out, this pointer won't be valid anymore, and trying to read 'scaling_governor', etc. from sysfs will then result in a kernel OOPS.

To prevent this, mark policy->governor as NULL for all inactive policies when the governor is removed from the kernel.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 5e87679ec6d1..bd8a47b5054e 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1095,7 +1095,6 @@ static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu) if (likely(policy)) { /* Policy should be inactive here */ WARN_ON(!policy_is_inactive(policy)); - policy->governor = NULL; }
return policy; @@ -2157,8 +2156,10 @@ void cpufreq_unregister_governor(struct cpufreq_governor *governor) /* clear last_governor for all inactive policies */ read_lock_irqsave(&cpufreq_driver_lock, flags); for_each_inactive_policy(policy, tpolicy) { - if (!strcmp(policy->last_governor, governor->name)) + if (!strcmp(policy->last_governor, governor->name)) { + policy->governor = NULL; strcpy(policy->last_governor, "\0"); + } } read_unlock_irqrestore(&cpufreq_driver_lock, flags);
Later commits would change the way policies are managed today. Policies wouldn't be freed on cpu hotplug (currently they aren't freed on suspend), and while the CPU is offline, the sysfs cpufreq files would still be present.
Because we don't mark policy->governor as NULL, it still contains a pointer to the last used governor. If the governor is removed while all the CPUs of a policy are hotplugged out, this pointer won't be valid anymore, and trying to read 'scaling_governor', etc. from sysfs will then result in a kernel OOPS.

To prevent this, mark policy->governor as NULL for all inactive policies when the governor is removed from the kernel.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
V4->V5:
- Rebased on top of [V4 1/14]
 drivers/cpufreq/cpufreq.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 16275ba6428e..c08de5e985c7 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1083,7 +1083,6 @@ static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu) if (likely(policy)) { /* Policy should be inactive here */ WARN_ON(!policy_is_inactive(policy)); - policy->governor = NULL; }
return policy; @@ -2145,8 +2144,10 @@ void cpufreq_unregister_governor(struct cpufreq_governor *governor) /* clear last_governor for all inactive policies */ read_lock_irqsave(&cpufreq_driver_lock, flags); for_each_inactive_policy(policy) { - if (!strcmp(policy->last_governor, governor->name)) + if (!strcmp(policy->last_governor, governor->name)) { + policy->governor = NULL; strcpy(policy->last_governor, "\0"); + } } read_unlock_irqrestore(&cpufreq_driver_lock, flags);
Later commits would change the way policies are managed today. Policies wouldn't be freed on CPU hotplug (currently they are only preserved across suspend), and while the CPU is offline, the sysfs cpufreq files would still be present.

A user may accidentally try to update the sysfs files in the following directory: '/sys/devices/system/cpu/cpuX/cpufreq/', and that would result in undefined behavior, as the policy wouldn't be active then.

Apart from updating the store() routine, we also update __cpufreq_get(), which can call cpufreq_out_of_sync(). The latter routine tries to update policy->cur and starts notifying the kernel about it.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index bd8a47b5054e..652a843a553b 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -887,11 +887,22 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr,
down_write(&policy->rwsem);
+ /* + * Policy might not be active currently, and so we shouldn't try + * updating any values here. policy->cpus is cleared for inactive policy + * and so cpufreq_cpu_get_raw() should fail. + */ + if (unlikely(policy_is_inactive(policy))) { + ret = -EPERM; + goto unlock_policy_rwsem; + } + if (fattr->store) ret = fattr->store(policy, buf, count); else ret = -EIO;
+unlock_policy_rwsem: up_write(&policy->rwsem);
up_read(&cpufreq_rwsem); @@ -1619,6 +1630,14 @@ static unsigned int __cpufreq_get(struct cpufreq_policy *policy)
ret_freq = cpufreq_driver->get(policy->cpu);
+ /* + * Policy might not be active currently, and so we shouldn't try + * updating any values here. policy->cpus is cleared for inactive policy + * and so cpufreq_cpu_get_raw() should fail. + */ + if (unlikely(policy_is_inactive(policy))) + return ret_freq; + if (ret_freq && policy->cur && !(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) { /* verify no discrepancy between actual and
The dash after "inactive" in the subject is not necessary IMO.
On Friday, May 08, 2015 11:53:50 AM Viresh Kumar wrote:
Later commits would change the way policies are managed today. Policies wouldn't be freed on cpu hotplug (currently they aren't freed only for suspend), and while the CPU is offline, the sysfs cpufreq files would still be present.
User may accidentally try to update the sysfs files in following directory: '/sys/devices/system/cpu/cpuX/cpufreq/'. And that would result in undefined behavior as policy wouldn't be active then.
Apart from updating the store() routine, we also update __cpufreq_get() which can call cpufreq_out_of_sync(). The later routine tries to update policy->cur and starts notifying kernel about it.
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org
drivers/cpufreq/cpufreq.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index bd8a47b5054e..652a843a553b 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -887,11 +887,22 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr, down_write(&policy->rwsem);
- /*
* Policy might not be active currently, and so we shouldn't try
* updating any values here. policy->cpus is cleared for inactive policy
* and so cpufreq_cpu_get_raw() should fail.
These comments don't really clarify things. It'd be better to say something like "Updating inactive policies is invalid, so avoid doing that."
*/
- if (unlikely(policy_is_inactive(policy))) {
ret = -EPERM;
This doesn't seem to be the appropriate error code to return here.
-EBUSY or -EAGAIN would be better IMO.
goto unlock_policy_rwsem;
- }
- if (fattr->store) ret = fattr->store(policy, buf, count); else ret = -EIO;
+unlock_policy_rwsem: up_write(&policy->rwsem); up_read(&cpufreq_rwsem); @@ -1619,6 +1630,14 @@ static unsigned int __cpufreq_get(struct cpufreq_policy *policy) ret_freq = cpufreq_driver->get(policy->cpu);
- /*
* Policy might not be active currently, and so we shouldn't try
* updating any values here. policy->cpus is cleared for inactive policy
* and so cpufreq_cpu_get_raw() should fail.
*/
- if (unlikely(policy_is_inactive(policy)))
return ret_freq;
- if (ret_freq && policy->cur && !(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) { /* verify no discrepancy between actual and
On 16-05-15, 03:10, Rafael J. Wysocki wrote:
The dash after "inactive" in the subject is not necessary IMO.
Okay
On Friday, May 08, 2015 11:53:50 AM Viresh Kumar wrote:
- /*
* Policy might not be active currently, and so we shouldn't try
* updating any values here. policy->cpus is cleared for inactive policy
* and so cpufreq_cpu_get_raw() should fail.
These comments don't really clarify things. It'd be better to say something like "Updating inactive policies is invalid, so avoid doing that."
Okay..
*/
- if (unlikely(policy_is_inactive(policy))) {
ret = -EPERM;
This doesn't seem to be the appropriate error code to return here.
-EBUSY or -EAGAIN would be better IMO.
Okay
From: Saravana Kannan <skannan@codeaurora.org>

In order to prepare for the next few commits that will stop migrating sysfs files on CPU hotplug, this patch starts tracking the CPU that manages the sysfs kobjects separately.

The behavior is still the same, as we are still migrating sysfs files on hotplug; later commits would change that.
Signed-off-by: Saravana Kannan <skannan@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq.c | 11 +++++++----
 include/linux/cpufreq.h   |  4 +++-
 2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 652a843a553b..75647c13a12b 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -982,7 +982,7 @@ static int cpufreq_add_dev_symlink(struct cpufreq_policy *policy) for_each_cpu(j, policy->cpus) { struct device *cpu_dev;
- if (j == policy->cpu) + if (j == policy->kobj_cpu) continue;
pr_debug("Adding link for CPU: %u\n", j); @@ -1201,6 +1201,7 @@ static int update_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu,
down_write(&policy->rwsem); policy->cpu = cpu; + policy->kobj_cpu = cpu; up_write(&policy->rwsem);
return 0; @@ -1258,10 +1259,12 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) * the creation of a brand new one. So we need to perform this update * by invoking update_policy_cpu(). */ - if (recover_policy && cpu != policy->cpu) + if (recover_policy && cpu != policy->cpu) { WARN_ON(update_policy_cpu(policy, cpu, dev)); - else + } else { policy->cpu = cpu; + policy->kobj_cpu = cpu; + }
cpumask_copy(policy->cpus, cpumask_of(cpu));
@@ -1440,7 +1443,7 @@ static int __cpufreq_remove_dev_prepare(struct device *dev, CPUFREQ_NAME_LEN); up_write(&policy->rwsem);
- if (cpu != policy->cpu) { + if (cpu != policy->kobj_cpu) { sysfs_remove_link(&dev->kobj, "cpufreq"); } else if (cpus > 1) { /* Nominate new CPU */ diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index 48e37c07eb84..29ad97c34fd5 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -65,7 +65,9 @@ struct cpufreq_policy {
unsigned int shared_type; /* ACPI: ANY or ALL affected CPUs should set cpufreq */ - unsigned int cpu; /* cpu nr of CPU managing this policy */ + unsigned int cpu; /* cpu managing this policy, must be online */ + unsigned int kobj_cpu; /* cpu managing sysfs files, can be offline */ + struct clk *clk; struct cpufreq_cpuinfo cpuinfo;/* see above */
When we hot-unplug a CPU, we remove its sysfs cpufreq directory, and if the outgoing CPU was the owner of policy->kobj, we migrate the sysfs directory to another online CPU.
There are a few disadvantages this brings:
- Code complexity
- Slower hotplug/suspend/resume
- sysfs file permissions are reset after all policy->cpus are offlined
- CPUFreq stats history lost after all policy->cpus are offlined
- Special management of sysfs stuff during suspend/resume
To overcome these, this patch modifies the way sysfs directories are managed: - Select sysfs kobjects owner while initializing policy and don't change it during hotplugs. Track it with kobj_cpu created earlier.
- Create symlinks for all related CPUs (can be offline) instead of affected CPUs on policy initialization and remove them only when the policy is freed.
- Free policy structure only on the removal of cpufreq-driver and not during hotplug/suspend/resume, detected by checking 'struct subsys_interface *' (Valid only when called from subsys_interface_unregister() while unregistering driver).
[ Something similar attempted by Saravana earlier ]
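Put differently, after this patch the tear-down path decides what to free purely from whether it was invoked for driver removal. A condensed sketch of that decision, distilled from the hunks below (not the literal patch text):

	/* end of __cpufreq_remove_dev_finish() */

	/* Free the policy kobject (and symlinks) only on driver removal */
	if (sif)
		cpufreq_policy_put_kobj(policy);

	/*
	 * Perform the ->exit() even during light-weight tear-down, so a
	 * subsequent light-weight ->init() can succeed.
	 */
	if (cpufreq_driver->exit)
		cpufreq_driver->exit(policy);

	/* The policy itself also survives hotplug/suspend/resume */
	if (sif)
		cpufreq_policy_free(policy);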
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org --- drivers/cpufreq/cpufreq.c | 154 ++++++++++++++++++++++------------------------ 1 file changed, 72 insertions(+), 82 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 75647c13a12b..67f0ef711c6d 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -973,25 +973,36 @@ void cpufreq_sysfs_remove_file(const struct attribute *attr) } EXPORT_SYMBOL(cpufreq_sysfs_remove_file);
-/* symlink affected CPUs */ -static int cpufreq_add_dev_symlink(struct cpufreq_policy *policy) +/* Add/remove symlinks for all related CPUs */ +static int cpufreq_add_remove_dev_symlink(struct cpufreq_policy *policy, + bool add) { unsigned int j; int ret = 0;
- for_each_cpu(j, policy->cpus) { + for_each_cpu(j, policy->related_cpus) { struct device *cpu_dev;
if (j == policy->kobj_cpu) continue;
- pr_debug("Adding link for CPU: %u\n", j); + pr_debug("%s: %s symlink for CPU: %u\n", __func__, + add ? "Adding" : "Removing", j); + cpu_dev = get_cpu_device(j); - ret = sysfs_create_link(&cpu_dev->kobj, &policy->kobj, - "cpufreq"); - if (ret) - break; + if (WARN_ON(!cpu_dev)) + continue; + + if (add) { + ret = sysfs_create_link(&cpu_dev->kobj, &policy->kobj, + "cpufreq"); + if (ret) + break; + } else { + sysfs_remove_link(&cpu_dev->kobj, "cpufreq"); + } } + return ret; }
@@ -1025,7 +1036,7 @@ static int cpufreq_add_dev_interface(struct cpufreq_policy *policy, return ret; }
- return cpufreq_add_dev_symlink(policy); + return cpufreq_add_remove_dev_symlink(policy, true); }
static void cpufreq_init_policy(struct cpufreq_policy *policy) @@ -1091,7 +1102,7 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, } }
- return sysfs_create_link(&dev->kobj, &policy->kobj, "cpufreq"); + return 0; }
static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu) @@ -1111,7 +1122,7 @@ static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu) return policy; }
-static struct cpufreq_policy *cpufreq_policy_alloc(void) +static struct cpufreq_policy *cpufreq_policy_alloc(int cpu) { struct cpufreq_policy *policy;
@@ -1132,6 +1143,11 @@ static struct cpufreq_policy *cpufreq_policy_alloc(void) init_completion(&policy->kobj_unregister); INIT_WORK(&policy->update, handle_update);
+ policy->cpu = cpu; + + /* Set this once on allocation */ + policy->kobj_cpu = cpu; + return policy;
err_free_cpumask: @@ -1150,10 +1166,11 @@ static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy) blocking_notifier_call_chain(&cpufreq_policy_notifier_list, CPUFREQ_REMOVE_POLICY, policy);
- down_read(&policy->rwsem); + down_write(&policy->rwsem); + cpufreq_add_remove_dev_symlink(policy, false); kobj = &policy->kobj; cmp = &policy->kobj_unregister; - up_read(&policy->rwsem); + up_write(&policy->rwsem); kobject_put(kobj);
/* @@ -1184,27 +1201,14 @@ static void cpufreq_policy_free(struct cpufreq_policy *policy) kfree(policy); }
-static int update_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu, - struct device *cpu_dev) +static void update_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu) { - int ret; - if (WARN_ON(cpu == policy->cpu)) - return 0; - - /* Move kobject to the new policy->cpu */ - ret = kobject_move(&policy->kobj, &cpu_dev->kobj); - if (ret) { - pr_err("%s: Failed to move kobj: %d\n", __func__, ret); - return ret; - } + return;
down_write(&policy->rwsem); policy->cpu = cpu; - policy->kobj_cpu = cpu; up_write(&policy->rwsem); - - return 0; }
/** @@ -1222,7 +1226,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) int ret = -ENOMEM; struct cpufreq_policy *policy; unsigned long flags; - bool recover_policy = cpufreq_suspended; + bool recover_policy = !sif;
if (cpu_is_offline(cpu)) return 0; @@ -1248,7 +1252,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) policy = recover_policy ? cpufreq_policy_restore(cpu) : NULL; if (!policy) { recover_policy = false; - policy = cpufreq_policy_alloc(); + policy = cpufreq_policy_alloc(cpu); if (!policy) goto nomem_out; } @@ -1259,12 +1263,8 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) * the creation of a brand new one. So we need to perform this update * by invoking update_policy_cpu(). */ - if (recover_policy && cpu != policy->cpu) { - WARN_ON(update_policy_cpu(policy, cpu, dev)); - } else { - policy->cpu = cpu; - policy->kobj_cpu = cpu; - } + if (recover_policy && cpu != policy->cpu) + update_policy_cpu(policy, cpu);
cpumask_copy(policy->cpus, cpumask_of(cpu));
@@ -1443,29 +1443,14 @@ static int __cpufreq_remove_dev_prepare(struct device *dev, CPUFREQ_NAME_LEN); up_write(&policy->rwsem);
- if (cpu != policy->kobj_cpu) { - sysfs_remove_link(&dev->kobj, "cpufreq"); - } else if (cpus > 1) { - /* Nominate new CPU */ - int new_cpu = cpumask_any_but(policy->cpus, cpu); - struct device *cpu_dev = get_cpu_device(new_cpu); - - sysfs_remove_link(&cpu_dev->kobj, "cpufreq"); - ret = update_policy_cpu(policy, new_cpu, cpu_dev); - if (ret) { - if (sysfs_create_link(&cpu_dev->kobj, &policy->kobj, - "cpufreq")) - pr_err("%s: Failed to restore kobj link to cpu:%d\n", - __func__, cpu_dev->id); - return ret; - } + if (cpu != policy->cpu) + return 0;
- if (!cpufreq_suspended) - pr_debug("%s: policy Kobject moved to cpu: %d from: %d\n", - __func__, new_cpu, cpu); - } else if (cpufreq_driver->stop_cpu) { + if (cpus > 1) + /* Nominate new CPU */ + update_policy_cpu(policy, cpumask_any_but(policy->cpus, cpu)); + else if (cpufreq_driver->stop_cpu) cpufreq_driver->stop_cpu(policy); - }
return 0; } @@ -1486,32 +1471,11 @@ static int __cpufreq_remove_dev_finish(struct device *dev, cpumask_clear_cpu(cpu, policy->cpus); up_write(&policy->rwsem);
- /* If cpu is last user of policy, free policy */ - if (policy_is_inactive(policy)) { - if (has_target()) { - ret = __cpufreq_governor(policy, - CPUFREQ_GOV_POLICY_EXIT); - if (ret) { - pr_err("%s: Failed to exit governor\n", - __func__); - return ret; - } - } - - if (!cpufreq_suspended) - cpufreq_policy_put_kobj(policy); - - /* - * Perform the ->exit() even during light-weight tear-down, - * since this is a core component, and is essential for the - * subsequent light-weight ->init() to succeed. - */ - if (cpufreq_driver->exit) - cpufreq_driver->exit(policy); + /* Not the last cpu of policy, start governor again ? */ + if (!policy_is_inactive(policy)) { + if (!has_target()) + return 0;
- if (!cpufreq_suspended) - cpufreq_policy_free(policy); - } else if (has_target()) { ret = __cpufreq_governor(policy, CPUFREQ_GOV_START); if (!ret) ret = __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS); @@ -1520,8 +1484,34 @@ static int __cpufreq_remove_dev_finish(struct device *dev, pr_err("%s: Failed to start governor\n", __func__); return ret; } + + return 0; + } + + /* If cpu is last user of policy, free policy */ + if (has_target()) { + ret = __cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT); + if (ret) { + pr_err("%s: Failed to exit governor\n", __func__); + return ret; + } }
+ /* Free the policy kobjects only if the driver is getting removed. */ + if (sif) + cpufreq_policy_put_kobj(policy); + + /* + * Perform the ->exit() even during light-weight tear-down, + * since this is a core component, and is essential for the + * subsequent light-weight ->init() to succeed. + */ + if (cpufreq_driver->exit) + cpufreq_driver->exit(policy); + + if (sif) + cpufreq_policy_free(policy); + return 0; }
update_policy_cpu() was kept as a separate routine earlier because it also handled the migration of sysfs directories, which isn't done anymore. It only updates policy->cpu now and can be removed.
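With the sysfs migration gone, nominating a new policy->cpu on removal reduces to a plain assignment under the policy rwsem, so the helper no longer pays for itself (condensed from the hunk below):

	if (cpus > 1) {
		/* Nominate new CPU */
		down_write(&policy->rwsem);
		policy->cpu = cpumask_any_but(policy->cpus, cpu);
		up_write(&policy->rwsem);
	} else if (cpufreq_driver->stop_cpu) {
		cpufreq_driver->stop_cpu(policy);
	}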
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org --- drivers/cpufreq/cpufreq.c | 32 ++++++++++---------------------- 1 file changed, 10 insertions(+), 22 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 67f0ef711c6d..9dfdd79c9a60 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1117,6 +1117,10 @@ static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu) if (likely(policy)) { /* Policy should be inactive here */ WARN_ON(!policy_is_inactive(policy)); + + down_write(&policy->rwsem); + policy->cpu = cpu; + up_write(&policy->rwsem); }
return policy; @@ -1201,16 +1205,6 @@ static void cpufreq_policy_free(struct cpufreq_policy *policy) kfree(policy); }
-static void update_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu) -{ - if (WARN_ON(cpu == policy->cpu)) - return; - - down_write(&policy->rwsem); - policy->cpu = cpu; - up_write(&policy->rwsem); -} - /** * cpufreq_add_dev - add a CPU device * @@ -1257,15 +1251,6 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) goto nomem_out; }
- /* - * In the resume path, since we restore a saved policy, the assignment - * to policy->cpu is like an update of the existing policy, rather than - * the creation of a brand new one. So we need to perform this update - * by invoking update_policy_cpu(). - */ - if (recover_policy && cpu != policy->cpu) - update_policy_cpu(policy, cpu); - cpumask_copy(policy->cpus, cpumask_of(cpu));
/* call driver. From then on the cpufreq must be able @@ -1446,11 +1431,14 @@ static int __cpufreq_remove_dev_prepare(struct device *dev, if (cpu != policy->cpu) return 0;
- if (cpus > 1) + if (cpus > 1) { /* Nominate new CPU */ - update_policy_cpu(policy, cpumask_any_but(policy->cpus, cpu)); - else if (cpufreq_driver->stop_cpu) + down_write(&policy->rwsem); + policy->cpu = cpumask_any_but(policy->cpus, cpu); + up_write(&policy->rwsem); + } else if (cpufreq_driver->stop_cpu) { cpufreq_driver->stop_cpu(policy); + }
return 0; }
policy->kobj needs to be initialized only once in the lifetime of a policy. Currently we initialize it from __cpufreq_add_dev(), which doesn't look like the best place for it, as we have to handle special cases there (like !recover_policy).
We can instead initialize it from a more obvious place, cpufreq_policy_alloc(), which will make the code look cleaner, especially the error handling part.
The error handling part of __cpufreq_add_dev() was doing almost the same thing whether recover_policy was true or false. Fix that as well by always calling cpufreq_policy_put_kobj(), with an additional parameter to skip the notification part of it.
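Condensed from the diff below, the allocation path then owns the kobject for the whole lifetime of the policy (error labels and unrelated initialization elided):

	static struct cpufreq_policy *cpufreq_policy_alloc(struct device *dev)
	{
		struct cpufreq_policy *policy;
		int ret;

		policy = kzalloc(sizeof(*policy), GFP_KERNEL);
		...
		ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq,
					   &dev->kobj, "cpufreq");
		if (ret)
			goto err_free_rcpumask;
		...
		policy->cpu = dev->id;

		/* Set this once on allocation */
		policy->kobj_cpu = dev->id;

		return policy;
		...
	}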
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org --- drivers/cpufreq/cpufreq.c | 44 ++++++++++++++++++++------------------------ 1 file changed, 20 insertions(+), 24 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 9dfdd79c9a60..e7f16c4bd041 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1126,9 +1126,10 @@ static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu) return policy; }
-static struct cpufreq_policy *cpufreq_policy_alloc(int cpu) +static struct cpufreq_policy *cpufreq_policy_alloc(struct device *dev) { struct cpufreq_policy *policy; + int ret;
policy = kzalloc(sizeof(*policy), GFP_KERNEL); if (!policy) @@ -1140,6 +1141,13 @@ static struct cpufreq_policy *cpufreq_policy_alloc(int cpu) if (!zalloc_cpumask_var(&policy->related_cpus, GFP_KERNEL)) goto err_free_cpumask;
+ ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq, &dev->kobj, + "cpufreq"); + if (ret) { + pr_err("%s: failed to init policy->kobj: %d\n", __func__, ret); + goto err_free_rcpumask; + } + INIT_LIST_HEAD(&policy->policy_list); init_rwsem(&policy->rwsem); spin_lock_init(&policy->transition_lock); @@ -1147,13 +1155,15 @@ static struct cpufreq_policy *cpufreq_policy_alloc(int cpu) init_completion(&policy->kobj_unregister); INIT_WORK(&policy->update, handle_update);
- policy->cpu = cpu; + policy->cpu = dev->id;
/* Set this once on allocation */ - policy->kobj_cpu = cpu; + policy->kobj_cpu = dev->id;
return policy;
+err_free_rcpumask: + free_cpumask_var(policy->related_cpus); err_free_cpumask: free_cpumask_var(policy->cpus); err_free_policy: @@ -1162,13 +1172,14 @@ static struct cpufreq_policy *cpufreq_policy_alloc(int cpu) return NULL; }
-static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy) +static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy, bool notify) { struct kobject *kobj; struct completion *cmp;
- blocking_notifier_call_chain(&cpufreq_policy_notifier_list, - CPUFREQ_REMOVE_POLICY, policy); + if (notify) + blocking_notifier_call_chain(&cpufreq_policy_notifier_list, + CPUFREQ_REMOVE_POLICY, policy);
down_write(&policy->rwsem); cpufreq_add_remove_dev_symlink(policy, false); @@ -1246,7 +1257,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) policy = recover_policy ? cpufreq_policy_restore(cpu) : NULL; if (!policy) { recover_policy = false; - policy = cpufreq_policy_alloc(cpu); + policy = cpufreq_policy_alloc(dev); if (!policy) goto nomem_out; } @@ -1277,15 +1288,6 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) policy->user_policy.min = policy->min; policy->user_policy.max = policy->max;
- /* prepare interface data */ - ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq, - &dev->kobj, "cpufreq"); - if (ret) { - pr_err("%s: failed to init policy->kobj: %d\n", - __func__, ret); - goto err_init_policy_kobj; - } - write_lock_irqsave(&cpufreq_driver_lock, flags); for_each_cpu(j, policy->related_cpus) per_cpu(cpufreq_cpu_data, j) = policy; @@ -1377,18 +1379,12 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
err_out_unregister: err_get_freq: - if (!recover_policy) { - kobject_put(&policy->kobj); - wait_for_completion(&policy->kobj_unregister); - } -err_init_policy_kobj: up_write(&policy->rwsem);
if (cpufreq_driver->exit) cpufreq_driver->exit(policy); err_set_policy_cpu: - if (recover_policy) - cpufreq_policy_put_kobj(policy); + cpufreq_policy_put_kobj(policy, recover_policy); cpufreq_policy_free(policy);
nomem_out: @@ -1487,7 +1483,7 @@ static int __cpufreq_remove_dev_finish(struct device *dev,
/* Free the policy kobjects only if the driver is getting removed. */ if (sif) - cpufreq_policy_put_kobj(policy); + cpufreq_policy_put_kobj(policy, true);
/* * Perform the ->exit() even during light-weight tear-down,
cpufreq_policy_put_kobj() is actually part of freeing the policy and can be called directly from cpufreq_policy_free() instead of requiring a separate call.
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org --- drivers/cpufreq/cpufreq.c | 14 +++++--------- 1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index e7f16c4bd041..6f08b70de21f 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1198,7 +1198,7 @@ static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy, bool notify) pr_debug("wait complete\n"); }
-static void cpufreq_policy_free(struct cpufreq_policy *policy) +static void cpufreq_policy_free(struct cpufreq_policy *policy, bool notify) { unsigned long flags; int cpu; @@ -1211,6 +1211,7 @@ static void cpufreq_policy_free(struct cpufreq_policy *policy) per_cpu(cpufreq_cpu_data, cpu) = NULL; write_unlock_irqrestore(&cpufreq_driver_lock, flags);
+ cpufreq_policy_put_kobj(policy, notify); free_cpumask_var(policy->related_cpus); free_cpumask_var(policy->cpus); kfree(policy); @@ -1384,9 +1385,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) if (cpufreq_driver->exit) cpufreq_driver->exit(policy); err_set_policy_cpu: - cpufreq_policy_put_kobj(policy, recover_policy); - cpufreq_policy_free(policy); - + cpufreq_policy_free(policy, recover_policy); nomem_out: up_read(&cpufreq_rwsem);
@@ -1481,10 +1480,6 @@ static int __cpufreq_remove_dev_finish(struct device *dev, } }
- /* Free the policy kobjects only if the driver is getting removed. */ - if (sif) - cpufreq_policy_put_kobj(policy, true); - /* * Perform the ->exit() even during light-weight tear-down, * since this is a core component, and is essential for the @@ -1493,8 +1488,9 @@ static int __cpufreq_remove_dev_finish(struct device *dev, if (cpufreq_driver->exit) cpufreq_driver->exit(policy);
+ /* Free the policy only if the driver is getting removed. */ if (sif) - cpufreq_policy_free(policy); + cpufreq_policy_free(policy, true);
return 0; }
On CPU hot-unplug, we don't need to wait for the POST_DEAD notification to restart the governor if the policy still has at least one online CPU. We can restart the governor right from the DOWN_PREPARE notification instead.
[ Something similar attempted by Saravana earlier ]
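Distilled from the diff below, __cpufreq_remove_dev_prepare() now does all of this in one place (sketch, not the literal patch text):

	down_write(&policy->rwsem);
	cpumask_clear_cpu(cpu, policy->cpus);

	if (policy_is_inactive(policy)) {
		/* Last CPU going away: remember the governor for later */
		if (has_target())
			strncpy(policy->last_governor, policy->governor->name,
				CPUFREQ_NAME_LEN);
	} else if (cpu == policy->cpu) {
		/* Nominate a new CPU to own the policy */
		policy->cpu = cpumask_any(policy->cpus);
	}
	up_write(&policy->rwsem);

	/* Restart the governor right away if the policy is still active */
	if (!policy_is_inactive(policy)) {
		if (has_target()) {
			ret = __cpufreq_governor(policy, CPUFREQ_GOV_START);
			if (!ret)
				ret = __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
		}
	} else if (cpufreq_driver->stop_cpu) {
		cpufreq_driver->stop_cpu(policy);
	}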
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org --- drivers/cpufreq/cpufreq.c | 58 ++++++++++++++++++++--------------------------- 1 file changed, 24 insertions(+), 34 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 6f08b70de21f..6bbc7b112e7a 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1395,8 +1395,8 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) static int __cpufreq_remove_dev_prepare(struct device *dev, struct subsys_interface *sif) { - unsigned int cpu = dev->id, cpus; - int ret; + unsigned int cpu = dev->id; + int ret = 0; struct cpufreq_policy *policy;
pr_debug("%s: unregistering CPU %u\n", __func__, cpu); @@ -1416,26 +1416,33 @@ static int __cpufreq_remove_dev_prepare(struct device *dev, }
down_write(&policy->rwsem); - cpus = cpumask_weight(policy->cpus); + cpumask_clear_cpu(cpu, policy->cpus);
- if (has_target() && cpus == 1) - strncpy(policy->last_governor, policy->governor->name, - CPUFREQ_NAME_LEN); + if (policy_is_inactive(policy)) { + if (has_target()) + strncpy(policy->last_governor, policy->governor->name, + CPUFREQ_NAME_LEN); + } else if (cpu == policy->cpu) { + /* Nominate new CPU */ + policy->cpu = cpumask_any(policy->cpus); + } up_write(&policy->rwsem);
- if (cpu != policy->cpu) - return 0; + /* Start governor again for active policy */ + if (!policy_is_inactive(policy)) { + if (has_target()) { + ret = __cpufreq_governor(policy, CPUFREQ_GOV_START); + if (!ret) + ret = __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
- if (cpus > 1) { - /* Nominate new CPU */ - down_write(&policy->rwsem); - policy->cpu = cpumask_any_but(policy->cpus, cpu); - up_write(&policy->rwsem); + if (ret) + pr_err("%s: Failed to start governor\n", __func__); + } } else if (cpufreq_driver->stop_cpu) { cpufreq_driver->stop_cpu(policy); }
- return 0; + return ret; }
static int __cpufreq_remove_dev_finish(struct device *dev, @@ -1443,33 +1450,16 @@ static int __cpufreq_remove_dev_finish(struct device *dev, { unsigned int cpu = dev->id; int ret; - struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); + struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
if (!policy) { pr_debug("%s: No cpu_data found\n", __func__); return -EINVAL; }
- down_write(&policy->rwsem); - cpumask_clear_cpu(cpu, policy->cpus); - up_write(&policy->rwsem); - - /* Not the last cpu of policy, start governor again ? */ - if (!policy_is_inactive(policy)) { - if (!has_target()) - return 0; - - ret = __cpufreq_governor(policy, CPUFREQ_GOV_START); - if (!ret) - ret = __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS); - - if (ret) { - pr_err("%s: Failed to start governor\n", __func__); - return ret; - } - + /* Only proceed for inactive policies */ + if (!policy_is_inactive(policy)) return 0; - }
/* If cpu is last user of policy, free policy */ if (has_target()) {
It is possible to physically hotplug the CPUs and it happens in this sequence.
Hot removal:
- CPU is offlined first, ~ 'echo 0 > /sys/devices/system/cpu/cpuX/online'
- Then its device is removed along with all sysfs files, cpufreq core notified with cpufreq_remove_dev() callback from subsys-interface..
Hot addition:
- First the device along with its sysfs files is added, cpufreq core notified with cpufreq_add_dev() callback from subsys-interface..
- CPU is onlined, ~ 'echo 1 > /sys/devices/system/cpu/cpuX/online'
This needs to be handled specially as current code isn't taking care of this case. Because we will hit the same routines with both hotplug and subsys callbacks, we can handle most of the stuff with regular hotplug callback paths. We only need to take care of adding/removing cpufreq sysfs links or freeing policy from subsys callbacks.
And that can be sensed easily as cpu would be offline in those cases. This patch adds special code in those paths to take care of policy and its links.
cpufreq_add_remove_dev_symlink() is also broken into another routine add_remove_cpu_dev_symlink() to reuse code at several places.
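The resulting decision logic in the two subsys callbacks can be summarized as follows (condensed sketch of the diff below, not the literal patch text):

	/* cpufreq_add_dev(): CPU still offline => physical hot-add */
	if (cpu_is_offline(cpu)) {
		policy = per_cpu(cpufreq_cpu_data, cpu);
		if (!policy)	/* first CPU of a policy: nothing to link yet */
			return 0;
		return add_remove_cpu_dev_symlink(policy, cpu, true);
	}

	/* cpufreq_remove_dev(): CPU already offlined => physical hot-removal */
	if (cpu_is_offline(cpu)) {
		struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
		struct cpumask mask;

		if (!policy)
			return 0;

		cpumask_copy(&mask, policy->related_cpus);
		cpumask_clear_cpu(cpu, &mask);

		/* Other related CPUs still present: just drop the symlink */
		if (cpumask_intersects(&mask, cpu_present_mask))
			return add_remove_cpu_dev_symlink(policy, cpu, false);

		/* Last one physically gone: free the policy */
		cpufreq_policy_free(policy, true);
		return 0;
	}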
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org --- drivers/cpufreq/cpufreq.c | 87 +++++++++++++++++++++++++++++++++++------------ 1 file changed, 65 insertions(+), 22 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 6bbc7b112e7a..234ced430057 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -973,6 +973,26 @@ void cpufreq_sysfs_remove_file(const struct attribute *attr) } EXPORT_SYMBOL(cpufreq_sysfs_remove_file);
+static inline int add_remove_cpu_dev_symlink(struct cpufreq_policy *policy, + int cpu, bool add) +{ + struct device *cpu_dev; + + pr_debug("%s: %s symlink for CPU: %u\n", __func__, + add ? "Adding" : "Removing", cpu); + + cpu_dev = get_cpu_device(cpu); + if (WARN_ON(!cpu_dev)) + return 0; + + if (add) + return sysfs_create_link(&cpu_dev->kobj, &policy->kobj, + "cpufreq"); + + sysfs_remove_link(&cpu_dev->kobj, "cpufreq"); + return 0; +} + /* Add/remove symlinks for all related CPUs */ static int cpufreq_add_remove_dev_symlink(struct cpufreq_policy *policy, bool add) @@ -980,27 +1000,14 @@ static int cpufreq_add_remove_dev_symlink(struct cpufreq_policy *policy, unsigned int j; int ret = 0;
- for_each_cpu(j, policy->related_cpus) { - struct device *cpu_dev; - + /* Some related CPUs might not be present (physically hotplugged) */ + for_each_cpu_and(j, policy->related_cpus, cpu_present_mask) { if (j == policy->kobj_cpu) continue;
- pr_debug("%s: %s symlink for CPU: %u\n", __func__, - add ? "Adding" : "Removing", j); - - cpu_dev = get_cpu_device(j); - if (WARN_ON(!cpu_dev)) - continue; - - if (add) { - ret = sysfs_create_link(&cpu_dev->kobj, &policy->kobj, - "cpufreq"); - if (ret) - break; - } else { - sysfs_remove_link(&cpu_dev->kobj, "cpufreq"); - } + ret = add_remove_cpu_dev_symlink(policy, j, add); + if (ret) + break; }
return ret; @@ -1234,11 +1241,23 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) unsigned long flags; bool recover_policy = !sif;
- if (cpu_is_offline(cpu)) - return 0; - pr_debug("adding CPU %u\n", cpu);
+ /* + * Only possible if 'cpu' wasn't physically present earlier and we are + * here from subsys_interface add callback. A hotplug notifier will + * follow and we will handle it like logical CPU hotplug then. For now, + * just create the sysfs link. + */ + if (cpu_is_offline(cpu)) { + policy = per_cpu(cpufreq_cpu_data, cpu); + /* No need to create link of the first cpu of a policy */ + if (!policy) + return 0; + + return add_remove_cpu_dev_symlink(policy, cpu, true); + } + if (!down_read_trylock(&cpufreq_rwsem)) return 0;
@@ -1495,8 +1514,32 @@ static int cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif) unsigned int cpu = dev->id; int ret;
- if (cpu_is_offline(cpu)) + /* + * Only possible if 'cpu' is getting physically removed now. A hotplug + * notifier should have already been called and we just need to remove + * link or free policy here. + */ + if (cpu_is_offline(cpu)) { + struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu); + struct cpumask mask; + + if (!policy) + return 0; + + /* Prepare mask similar to related-cpus without this 'cpu' */ + cpumask_copy(&mask, policy->related_cpus); + cpumask_clear_cpu(cpu, &mask); + + /* + * Remove link if few CPUs are still present physically, else + * free policy when all are gone. + */ + if (cpumask_intersects(&mask, cpu_present_mask)) + return add_remove_cpu_dev_symlink(policy, cpu, false); + + cpufreq_policy_free(policy, true); return 0; + }
ret = __cpufreq_remove_dev_prepare(dev, sif);
On Friday, May 08, 2015 11:53:57 AM Viresh Kumar wrote:
It is possible to physically hotplug the CPUs and it happens in this sequence.
Hot removal:
- CPU is offlined first, ~ 'echo 0 > /sys/devices/system/cpu/cpuX/online'
- Then its device is removed along with all sysfs files, cpufreq core notified with cpufreq_remove_dev() callback from subsys-interface..
Hot addition:
- First the device along with its sysfs files is added, cpufreq core notified with cpufreq_add_dev() callback from subsys-interface..
- CPU is onlined, ~ 'echo 1 > /sys/devices/system/cpu/cpuX/online'
This needs to be handled specially as current code isn't taking care of this case. Because we will hit the same routines with both hotplug and subsys callbacks, we can handle most of the stuff with regular hotplug callback paths. We only need to take care of adding/removing cpufreq sysfs links or freeing policy from subsys callbacks.
And that can be sensed easily as cpu would be offline in those cases. This patch adds special code in those paths to take care of policy and its links.
cpufreq_add_remove_dev_symlink() is also broken into another routine add_remove_cpu_dev_symlink() to reuse code at several places.
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org
You seem to be breaking things first and then fixing them up with this patch.
Would it be possible to avoid breaking them in the first place?
On 16-05-15, 03:18, Rafael J. Wysocki wrote:
You seem to be breaking things first and then fixing them up with this patch.
Would it be possible to avoid breaking them in the first place?
I thought it was already broken in mainline, and so I only handled it towards the end of the series. Will check that again.
On Saturday, May 16, 2015 07:43:07 AM Viresh Kumar wrote:
On 16-05-15, 03:18, Rafael J. Wysocki wrote:
You seem to be breaking things first and then fixing them up with this patch.
Would it be possible to avoid breaking them in the first place?
I thought it was already broken in mainline, and so I only handled it towards the end of the series. Will check that again.
Well, if the sysfs directories are removed on every offline, then in particular they will be removed when the device is physically going away, so it should actually work (unless we've broken it already and nobody noticed).
If you want to keep them around after offline, you need to become careful about the "physical hot-remove" case at the same time (and not several patches later).
On 18-05-15, 02:30, Rafael J. Wysocki wrote:
Well, if the sysfs directories are removed on every offline, then in particular they will be removed when the device is physically going away, so it should actually work (unless we've broken it already and nobody noticed).
The problem is that the remove routine is called twice, and I thought it might not be safe to call it twice. Of course, I missed the cpu_offline() checks in the sysfs removal path.
If you want to keep them around after offline, you need to become careful about the "physical hot-remove" case at the same time (and not several patches later).
Sure, will do that today.