Add notifier calls in clk_prepare and clk_unprepare so that drivers which are interested in knowing when clk_prepare/clk_unprepare is called can act accordingly.

The existing "clk_set_rate" notifier is not enough for a typical DVFS implementation, since a clock might be enabled or disabled at runtime. These notifiers are useful to a DVFS core, which can take clk_prepare as a hint that the notified clock might be enabled later, so it can raise the voltage to a safe level before the clock is enabled, and take clk_unprepare as a hint that the clock has been disabled and it is now safe to lower the voltage.
The added notifier events are:
PRE_CLK_PREPARE
POST_CLK_PREPARE
ABORT_CLK_PREPARE
PRE_CLK_UNPREPARE
POST_CLK_UNPREPARE
Signed-off-by: Bill Huang bilhuang@nvidia.com
---
 drivers/clk/clk.c   |   88 ++++++++++++++++++++++++++++++---------------------
 include/linux/clk.h |    5 +++
 2 files changed, 57 insertions(+), 36 deletions(-)
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index ed87b24..ac07c6e 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -516,6 +516,42 @@ struct clk *__clk_lookup(const char *name)
 
 /*** clk api ***/
 
+/**
+ * __clk_notify - call clk notifier chain
+ * @clk: struct clk * that is changing rate
+ * @msg: clk notifier type (see include/linux/clk.h)
+ * @old_rate: old clk rate
+ * @new_rate: new clk rate
+ *
+ * Triggers a notifier call chain on the clk rate-change notification
+ * for 'clk'.  Passes a pointer to the struct clk and the previous
+ * and current rates to the notifier callback.  Intended to be called by
+ * internal clock code only.  Returns NOTIFY_DONE from the last driver
+ * called if all went well, or NOTIFY_STOP or NOTIFY_BAD immediately if
+ * a driver returns that.
+ */
+static int __clk_notify(struct clk *clk, unsigned long msg,
+		unsigned long old_rate, unsigned long new_rate)
+{
+	struct clk_notifier *cn;
+	struct clk_notifier_data cnd;
+	int ret = NOTIFY_DONE;
+
+	cnd.clk = clk;
+	cnd.old_rate = old_rate;
+	cnd.new_rate = new_rate;
+
+	list_for_each_entry(cn, &clk_notifier_list, node) {
+		if (cn->clk == clk) {
+			ret = srcu_notifier_call_chain(&cn->notifier_head, msg,
+					&cnd);
+			break;
+		}
+	}
+
+	return ret;
+}
+
 void __clk_unprepare(struct clk *clk)
 {
 	if (!clk)
@@ -549,7 +585,14 @@ void __clk_unprepare(struct clk *clk)
 void clk_unprepare(struct clk *clk)
 {
 	mutex_lock(&prepare_lock);
+
+	if (clk->notifier_count)
+		__clk_notify(clk, PRE_CLK_UNPREPARE, clk->rate, clk->rate);
+
 	__clk_unprepare(clk);
+
+	if (clk->notifier_count)
+		__clk_notify(clk, POST_CLK_UNPREPARE, clk->rate, clk->rate);
 	mutex_unlock(&prepare_lock);
 }
 EXPORT_SYMBOL_GPL(clk_unprepare);
@@ -597,7 +640,16 @@ int clk_prepare(struct clk *clk)
 	int ret;
 
 	mutex_lock(&prepare_lock);
+
+	if (clk->notifier_count)
+		__clk_notify(clk, PRE_CLK_PREPARE, clk->rate, clk->rate);
+
 	ret = __clk_prepare(clk);
+
+	if (!ret && clk->notifier_count)
+		__clk_notify(clk, POST_CLK_PREPARE, clk->rate, clk->rate);
+	else if (clk->notifier_count)
+		__clk_notify(clk, ABORT_CLK_PREPARE, clk->rate, clk->rate);
 	mutex_unlock(&prepare_lock);
 
 	return ret;
@@ -749,42 +801,6 @@ long clk_round_rate(struct clk *clk, unsigned long rate)
 EXPORT_SYMBOL_GPL(clk_round_rate);
 
 /**
- * __clk_notify - call clk notifier chain
- * @clk: struct clk * that is changing rate
- * @msg: clk notifier type (see include/linux/clk.h)
- * @old_rate: old clk rate
- * @new_rate: new clk rate
- *
- * Triggers a notifier call chain on the clk rate-change notification
- * for 'clk'.  Passes a pointer to the struct clk and the previous
- * and current rates to the notifier callback.  Intended to be called by
- * internal clock code only.  Returns NOTIFY_DONE from the last driver
- * called if all went well, or NOTIFY_STOP or NOTIFY_BAD immediately if
- * a driver returns that.
- */
-static int __clk_notify(struct clk *clk, unsigned long msg,
-		unsigned long old_rate, unsigned long new_rate)
-{
-	struct clk_notifier *cn;
-	struct clk_notifier_data cnd;
-	int ret = NOTIFY_DONE;
-
-	cnd.clk = clk;
-	cnd.old_rate = old_rate;
-	cnd.new_rate = new_rate;
-
-	list_for_each_entry(cn, &clk_notifier_list, node) {
-		if (cn->clk == clk) {
-			ret = srcu_notifier_call_chain(&cn->notifier_head, msg,
-					&cnd);
-			break;
-		}
-	}
-
-	return ret;
-}
-
-/**
  * __clk_recalc_rates
  * @clk: first clk in the subtree
  * @msg: notification type (see include/linux/clk.h)
diff --git a/include/linux/clk.h b/include/linux/clk.h
index b3ac22d..41d567d 100644
--- a/include/linux/clk.h
+++ b/include/linux/clk.h
@@ -43,6 +43,11 @@ struct clk;
 #define PRE_RATE_CHANGE		BIT(0)
 #define POST_RATE_CHANGE	BIT(1)
 #define ABORT_RATE_CHANGE	BIT(2)
+#define PRE_CLK_PREPARE		BIT(3)
+#define POST_CLK_PREPARE	BIT(4)
+#define ABORT_CLK_PREPARE	BIT(5)
+#define PRE_CLK_UNPREPARE	BIT(6)
+#define POST_CLK_UNPREPARE	BIT(7)
 
 /**
  * struct clk_notifier - associate a clk with a notifier
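For illustration only, and not part of the patch itself: a minimal sketch of how a DVFS core might consume the new events. The regulator handle, the helper functions and the voltage value are hypothetical; the callback shape and clk_notifier_register() are the same ones already used for the rate-change events.

#include <linux/clk.h>
#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/regulator/consumer.h>

/* Hypothetical rail handle owned by the DVFS core. */
static struct regulator *vdd_core;

/* Hypothetical helper: worst-case voltage needed once 'clk' may run. */
static int safe_uv_for_clk(struct clk *clk)
{
        return 1200000;         /* placeholder value */
}

/* Hypothetical helper: recompute the rail from the remaining prepared clocks. */
static void dvfs_reevaluate_rail(void)
{
}

static int dvfs_clk_notifier_cb(struct notifier_block *nb,
                                unsigned long event, void *data)
{
        struct clk_notifier_data *cnd = data;

        switch (event) {
        case PRE_CLK_PREPARE:
                /* The clock may be enabled soon: raise the rail first. */
                regulator_set_voltage(vdd_core, safe_uv_for_clk(cnd->clk),
                                      INT_MAX);
                break;
        case ABORT_CLK_PREPARE:
        case POST_CLK_UNPREPARE:
                /* The clock is off (or never came up): the rail may drop. */
                dvfs_reevaluate_rail();
                break;
        default:
                break;
        }

        return NOTIFY_OK;
}

static struct notifier_block dvfs_clk_nb = {
        .notifier_call = dvfs_clk_notifier_cb,
};

static int dvfs_watch_clk(struct clk *clk)
{
        /* same registration call already used for rate-change notifiers */
        return clk_notifier_register(clk, &dvfs_clk_nb);
}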
Quoting Bill Huang (2013-03-19 06:28:32)
Add notifier calls in clk_prepare and clk_unprepare so that drivers which are interested in knowing when clk_prepare/clk_unprepare is called can act accordingly.

The existing "clk_set_rate" notifier is not enough for a typical DVFS implementation, since a clock might be enabled or disabled at runtime. These notifiers are useful to a DVFS core, which can take clk_prepare as a hint that the notified clock might be enabled later, so it can raise the voltage to a safe level before the clock is enabled, and take clk_unprepare as a hint that the clock has been disabled and it is now safe to lower the voltage.
The added notifier events are:
PRE_CLK_PREPARE POST_CLK_PREPARE ABORT_CLK_PREPARE PRE_CLK_UNPREPARE POST_CLK_UNPREPARE
Signed-off-by: Bill Huang bilhuang@nvidia.com
Is this a resend or a new version? It would be nice to put RESEND in the patch $SUBJECT or put a changelog from previous patches under the "---" below.
I'm still not sure about this approach. Based on feedback I got from Linaro Connect I am not convinced that scaling voltage through clk rate-change notifiers is the right way to go. As I understand it this patch only exists for that single purpose, so if the voltage-notifier idea gets dropped then I will not take this patch in.
Regards, Mike
On Wed, 2013-03-20 at 01:01 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 06:28:32)
Add notifier calls in clk_prepare and clk_unprepare so that drivers which are interested in knowing when clk_prepare/clk_unprepare is called can act accordingly.

The existing "clk_set_rate" notifier is not enough for a typical DVFS implementation, since a clock might be enabled or disabled at runtime. These notifiers are useful to a DVFS core, which can take clk_prepare as a hint that the notified clock might be enabled later, so it can raise the voltage to a safe level before the clock is enabled, and take clk_unprepare as a hint that the clock has been disabled and it is now safe to lower the voltage.
The added notifier events are:
PRE_CLK_PREPARE POST_CLK_PREPARE ABORT_CLK_PREPARE PRE_CLK_UNPREPARE POST_CLK_UNPREPARE
Signed-off-by: Bill Huang bilhuang@nvidia.com
I'm still not sure about this approach. Based on feedback I got from Linaro Connect I am not convinced that scaling voltage through clk rate-change notifiers is the right way to go. As I understand it this patch only exists for that single purpose, so if the voltage-notifier idea gets dropped then I will not take this patch in.
Thanks Mike. Actually we won't use your "clk: notifier handler for dynamic voltage scaling" patch; instead we are trying to port our DVFS onto the non-CPU DVFS framework "devfreq", which will need to hook these notifiers. Without the clock notifiers being extended, the framework is useless for us, since we cannot rely on polling; polling does not happen in real time. If extending the notifiers turns out not to be possible, then I think our only choice would be to give up on "devfreq" and implement DVFS in Tegra's "clk_hw".
Quoting Bill Huang (2013-03-19 19:55:49)
On Wed, 2013-03-20 at 01:01 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 06:28:32)
Add notifier calls in clk_prepare and clk_unprepare so that drivers which are interested in knowing when clk_prepare/clk_unprepare is called can act accordingly.

The existing "clk_set_rate" notifier is not enough for a typical DVFS implementation, since a clock might be enabled or disabled at runtime. These notifiers are useful to a DVFS core, which can take clk_prepare as a hint that the notified clock might be enabled later, so it can raise the voltage to a safe level before the clock is enabled, and take clk_unprepare as a hint that the clock has been disabled and it is now safe to lower the voltage.
The added notifier events are:
PRE_CLK_PREPARE POST_CLK_PREPARE ABORT_CLK_PREPARE PRE_CLK_UNPREPARE POST_CLK_UNPREPARE
Signed-off-by: Bill Huang bilhuang@nvidia.com
I'm still not sure about this approach. Based on feedback I got from Linaro Connect I am not convinced that scaling voltage through clk rate-change notifiers is the right way to go. As I understand it this patch only exists for that single purpose, so if the voltage-notifier idea gets dropped then I will not take this patch in.
Thanks Mike. Actually we won't use your "clk: notifier handler for dynamic voltage scaling" patch; instead we are trying to port our DVFS onto the non-CPU DVFS framework "devfreq", which will need to hook these notifiers. Without the clock notifiers being extended, the framework is useless for us, since we cannot rely on polling; polling does not happen in real time. If extending the notifiers turns out not to be possible, then I think our only choice would be to give up on "devfreq" and implement DVFS in Tegra's "clk_hw".
I'm familiar with the devfreq framework. Can you explain further how you plan to use devfreq with the clock notifiers? What does the call graph look like?
Thanks, Mike
On Wed, 2013-03-20 at 11:31 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 19:55:49)
On Wed, 2013-03-20 at 01:01 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 06:28:32)
Add notifier calls in clk_prepare and clk_unprepare so that drivers which are interested in knowing when clk_prepare/clk_unprepare is called can act accordingly.

The existing "clk_set_rate" notifier is not enough for a typical DVFS implementation, since a clock might be enabled or disabled at runtime. These notifiers are useful to a DVFS core, which can take clk_prepare as a hint that the notified clock might be enabled later, so it can raise the voltage to a safe level before the clock is enabled, and take clk_unprepare as a hint that the clock has been disabled and it is now safe to lower the voltage.
The added notifier events are:
PRE_CLK_PREPARE POST_CLK_PREPARE ABORT_CLK_PREPARE PRE_CLK_UNPREPARE POST_CLK_UNPREPARE
Signed-off-by: Bill Huang bilhuang@nvidia.com
I'm still not sure about this approach. Based on feedback I got from Linaro Connect I am not convinced that scaling voltage through clk rate-change notifiers is the right way to go. As I understand it this patch only exists for that single purpose, so if the voltage-notifier idea gets dropped then I will not take this patch in.
Thanks Mike. Actually we won't use your "clk: notifier handler for dynamic voltage scaling" patch; instead we are trying to port our DVFS onto the non-CPU DVFS framework "devfreq", which will need to hook these notifiers. Without the clock notifiers being extended, the framework is useless for us, since we cannot rely on polling; polling does not happen in real time. If extending the notifiers turns out not to be possible, then I think our only choice would be to give up on "devfreq" and implement DVFS in Tegra's "clk_hw".
I'm familiar with the devfreq framework. Can you explain further how you plan to use devfreq with the clock notifiers? What does the call graph look like?
The call graph will look like this: when a rate change (including enable and disable) happens on any clock DVFS is interested in -> the Tegra devfreq clock notifier is called -> it calls into update_devfreq if needed -> update_devfreq calls .get_target_freq of the Tegra "devfreq_governor" -> which then calls .target of the Tegra devfreq_dev_profile to set the voltage, and we are done. More details are below.

We'll create a devfreq driver for the Tegra VDD_CORE rail. The safe voltage level of the rail is determined by tens of clocks (2d, 3d, mpe, ...); the frequency ladders of those clocks are defined in DT, and the operating points for VDD_CORE are also declared in DT, where the frequency is more of a virtual clock, or index:

operating-points = <
        /* virtual-kHz  uV */
        0  950000
        1 1000000
        2 1050000
        3 1100000
        4 1150000
        5 1200000
        6 1250000
        7 1300000
        8 1350000
>;

We'll register a Tegra governor whose .get_target_freq callback is the function that determines the overall frequency the rail can go to, and the .target callback in "devfreq_dev_profile" will be the function that really does the voltage scaling.

The Tegra devfreq driver will register clock notifiers on all the clocks it is interested in, and hence when any of those clocks changes rate, is enabled, or is disabled, we'll call update_devfreq from the notifier.
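Roughly, the notifier end of that call graph could look like the sketch below. 'tegra_core_devfreq' is a made-up handle to the VDD_CORE devfreq device, and update_devfreq() is used here the way described above (today it is only exposed to devfreq governors, so this assumes the driver can reach it):

#include <linux/clk.h>
#include <linux/devfreq.h>
#include <linux/mutex.h>
#include <linux/notifier.h>

/* Hypothetical devfreq device created for the VDD_CORE rail at probe time. */
static struct devfreq *tegra_core_devfreq;

static int tegra_core_clk_notify(struct notifier_block *nb,
                                 unsigned long event, void *data)
{
        switch (event) {
        case PRE_RATE_CHANGE:           /* rate may go up: raise the rail first */
        case PRE_CLK_PREPARE:           /* clock may be enabled soon */
        case POST_RATE_CHANGE:          /* rate went down: rail may drop */
        case POST_CLK_UNPREPARE:        /* clock has been turned off */
                mutex_lock(&tegra_core_devfreq->lock);
                /* runs the governor's .get_target_freq, then the profile's .target */
                update_devfreq(tegra_core_devfreq);
                mutex_unlock(&tegra_core_devfreq->lock);
                break;
        default:
                break;
        }

        return NOTIFY_OK;
}

One such notifier block would be registered with clk_notifier_register() for every clock VDD_CORE depends on.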
Quoting Bill Huang (2013-03-19 21:39:44)
On Wed, 2013-03-20 at 11:31 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 19:55:49)
On Wed, 2013-03-20 at 01:01 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 06:28:32)
Add notifier calls in clk_prepare and clk_unprepare so that drivers which are interested in knowing when clk_prepare/clk_unprepare is called can act accordingly.

The existing "clk_set_rate" notifier is not enough for a typical DVFS implementation, since a clock might be enabled or disabled at runtime. These notifiers are useful to a DVFS core, which can take clk_prepare as a hint that the notified clock might be enabled later, so it can raise the voltage to a safe level before the clock is enabled, and take clk_unprepare as a hint that the clock has been disabled and it is now safe to lower the voltage.
The added notifier events are:
PRE_CLK_PREPARE POST_CLK_PREPARE ABORT_CLK_PREPARE PRE_CLK_UNPREPARE POST_CLK_UNPREPARE
Signed-off-by: Bill Huang bilhuang@nvidia.com
I'm still not sure about this approach. Based on feedback I got from Linaro Connect I am not convinced that scaling voltage through clk rate-change notifiers is the right way to go. As I understand it this patch only exists for that single purpose, so if the voltage-notifier idea gets dropped then I will not take this patch in.
Thanks Mike. Actually we won't use your "clk: notifier handler for dynamic voltage scaling" patch; instead we are trying to port our DVFS onto the non-CPU DVFS framework "devfreq", which will need to hook these notifiers. Without the clock notifiers being extended, the framework is useless for us, since we cannot rely on polling; polling does not happen in real time. If extending the notifiers turns out not to be possible, then I think our only choice would be to give up on "devfreq" and implement DVFS in Tegra's "clk_hw".
I'm familiar with the devfreq framework. Can you explain further how you plan to use devfreq with the clock notifiers? What does the call graph look like?
The call graph will look like this: when a rate change (including enable and disable) happens on any clock DVFS is interested in -> the Tegra devfreq clock notifier is called -> it calls into update_devfreq if needed -> update_devfreq calls .get_target_freq of the Tegra "devfreq_governor" -> which then calls .target of the Tegra devfreq_dev_profile to set the voltage, and we are done. More details are below.

We'll create a devfreq driver for the Tegra VDD_CORE rail. The safe voltage level of the rail is determined by tens of clocks (2d, 3d, mpe, ...); the frequency ladders of those clocks are defined in DT, and the operating points for VDD_CORE are also declared in DT, where the frequency is more of a virtual clock, or index:

operating-points = <
        /* virtual-kHz  uV */
        0  950000
        1 1000000
        2 1050000
        3 1100000
        4 1150000
        5 1200000
        6 1250000
        7 1300000
        8 1350000
>;

We'll register a Tegra governor whose .get_target_freq callback is the function that determines the overall frequency the rail can go to, and the .target callback in "devfreq_dev_profile" will be the function that really does the voltage scaling.

The Tegra devfreq driver will register clock notifiers on all the clocks it is interested in, and hence when any of those clocks changes rate, is enabled, or is disabled, we'll call update_devfreq from the notifier.
Thank you for the explanation. Do you plan to use actual devfreq governors (like simple-ondemand, or something custom) for changing OPPs, or do you just plan to use the clock framework as a trigger for DVFS?
Regards, Mike
On 20 March 2013 15:47, Mike Turquette mturquette@linaro.org wrote:
Quoting Bill Huang (2013-03-19 21:39:44)
On Wed, 2013-03-20 at 11:31 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 19:55:49)
On Wed, 2013-03-20 at 01:01 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 06:28:32)
Add notifier calls in clk_prepare and clk_unprepare so that drivers which are interested in knowing when clk_prepare/clk_unprepare is called can act accordingly.

The existing "clk_set_rate" notifier is not enough for a typical DVFS implementation, since a clock might be enabled or disabled at runtime. These notifiers are useful to a DVFS core, which can take clk_prepare as a hint that the notified clock might be enabled later, so it can raise the voltage to a safe level before the clock is enabled, and take clk_unprepare as a hint that the clock has been disabled and it is now safe to lower the voltage.
The added notifier events are:
PRE_CLK_PREPARE POST_CLK_PREPARE ABORT_CLK_PREPARE PRE_CLK_UNPREPARE POST_CLK_UNPREPARE
Signed-off-by: Bill Huang bilhuang@nvidia.com
I'm still not sure about this approach. Based on feedback I got from Linaro Connect I am not convinced that scaling voltage through clk rate-change notifiers is the right way to go. As I understand it this patch only exists for that single purpose, so if the voltage-notifier idea gets dropped then I will not take this patch in.
Thanks Mike. Actually we won't use your "clk: notifier handler for dynamic voltage scaling" patch; instead we are trying to port our DVFS onto the non-CPU DVFS framework "devfreq", which will need to hook these notifiers. Without the clock notifiers being extended, the framework is useless for us, since we cannot rely on polling; polling does not happen in real time. If extending the notifiers turns out not to be possible, then I think our only choice would be to give up on "devfreq" and implement DVFS in Tegra's "clk_hw".
I'm familiar with the devfreq framework. Can you explain further how you plan to use devfreq with the clock notifiers? What does the call graph look like?
The call graph will look like this: when a rate change (including enable and disable) happens on any clock DVFS is interested in -> the Tegra devfreq clock notifier is called -> it calls into update_devfreq if needed -> update_devfreq calls .get_target_freq of the Tegra "devfreq_governor" -> which then calls .target of the Tegra devfreq_dev_profile to set the voltage, and we are done. More details are below.

We'll create a devfreq driver for the Tegra VDD_CORE rail. The safe voltage level of the rail is determined by tens of clocks (2d, 3d, mpe, ...); the frequency ladders of those clocks are defined in DT, and the operating points for VDD_CORE are also declared in DT, where the frequency is more of a virtual clock, or index:

operating-points = <
        /* virtual-kHz  uV */
        0  950000
        1 1000000
        2 1050000
        3 1100000
        4 1150000
        5 1200000
        6 1250000
        7 1300000
        8 1350000
>;

We'll register a Tegra governor whose .get_target_freq callback is the function that determines the overall frequency the rail can go to, and the .target callback in "devfreq_dev_profile" will be the function that really does the voltage scaling.

The Tegra devfreq driver will register clock notifiers on all the clocks it is interested in, and hence when any of those clocks changes rate, is enabled, or is disabled, we'll call update_devfreq from the notifier.
Thank you for the explanation. Do you plan to use actual devfreq governors (like simple-ondemand, or something custom) for changing OPPs, or do you just plan to use the clock framework as a trigger for DVFS?
Regards, Mike
At a recent discussion regarding a previous version of this patch "[RFC 1/1] clk: Add notifier support in clk_prepare_enable/clk_disable_unprepare", we also discussed whether to use clk notifiers or to use a clk hw to implement DVFS.
Stephen Warren and myself kind of pointed out that there could be benefits of not using notifiers. I would just like to add that to this discussion as well.

The idea, in principle, as an alternative to Bill's idea of using devfreq with notifiers, would be to implement a clk hw which possibly makes use of the OPP library and implements the DVFS functionality that is needed for each SoC.
Kind regards Ulf Hansson
Quoting Ulf Hansson (2013-03-20 14:06:14)
On 20 March 2013 15:47, Mike Turquette mturquette@linaro.org wrote:
Quoting Bill Huang (2013-03-19 21:39:44)
On Wed, 2013-03-20 at 11:31 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 19:55:49)
On Wed, 2013-03-20 at 01:01 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 06:28:32)
> Add notifier calls in clk_prepare and clk_unprepare so that drivers which are interested in knowing when clk_prepare/clk_unprepare is called can act accordingly.
>
> The existing "clk_set_rate" notifier is not enough for a typical DVFS implementation, since a clock might be enabled or disabled at runtime. These notifiers are useful to a DVFS core, which can take clk_prepare as a hint that the notified clock might be enabled later, so it can raise the voltage to a safe level before the clock is enabled, and take clk_unprepare as a hint that the clock has been disabled and it is now safe to lower the voltage.
>
> The added notifier events are:
>
> PRE_CLK_PREPARE
> POST_CLK_PREPARE
> ABORT_CLK_PREPARE
> PRE_CLK_UNPREPARE
> POST_CLK_UNPREPARE
>
> Signed-off-by: Bill Huang bilhuang@nvidia.com
I'm still not sure about this approach. Based on feedback I got from Linaro Connect I am not convinced that scaling voltage through clk rate-change notifiers is the right way to go. As I understand it this patch only exists for that single purpose, so if the voltage-notifier idea gets dropped then I will not take this patch in.
Thanks Mike. Actually we won't use your "clk: notifier handler for dynamic voltage scaling" patch; instead we are trying to port our DVFS onto the non-CPU DVFS framework "devfreq", which will need to hook these notifiers. Without the clock notifiers being extended, the framework is useless for us, since we cannot rely on polling; polling does not happen in real time. If extending the notifiers turns out not to be possible, then I think our only choice would be to give up on "devfreq" and implement DVFS in Tegra's "clk_hw".
I'm familiar with the devfreq framework. Can you explain further how you plan to use devfreq with the clock notifiers? What does the call graph look like?
The call graph will look like this: when a rate change (including enable and disable) happens on any clock DVFS is interested in -> the Tegra devfreq clock notifier is called -> it calls into update_devfreq if needed -> update_devfreq calls .get_target_freq of the Tegra "devfreq_governor" -> which then calls .target of the Tegra devfreq_dev_profile to set the voltage, and we are done. More details are below.

We'll create a devfreq driver for the Tegra VDD_CORE rail. The safe voltage level of the rail is determined by tens of clocks (2d, 3d, mpe, ...); the frequency ladders of those clocks are defined in DT, and the operating points for VDD_CORE are also declared in DT, where the frequency is more of a virtual clock, or index:

operating-points = <
        /* virtual-kHz  uV */
        0  950000
        1 1000000
        2 1050000
        3 1100000
        4 1150000
        5 1200000
        6 1250000
        7 1300000
        8 1350000
>;

We'll register a Tegra governor whose .get_target_freq callback is the function that determines the overall frequency the rail can go to, and the .target callback in "devfreq_dev_profile" will be the function that really does the voltage scaling.

The Tegra devfreq driver will register clock notifiers on all the clocks it is interested in, and hence when any of those clocks changes rate, is enabled, or is disabled, we'll call update_devfreq from the notifier.
Thank you for the explanation. Do you plan to use actual devfreq governors (like simple-ondemand, or something custom) for changing OPPs, or do you just plan to use the clock framework as a trigger for DVFS?
Regards, Mike
At a recent discussion regarding a previous version of this patch "[RFC 1/1] clk: Add notifier support in clk_prepare_enable/clk_disable_unprepare", we also discussed whether to use clk notifiers or to use a clk hw to implement DVFS.
Stephen Warren and myself kind of pointed out that there could be benefits of not using notifiers. I would just like to add that to this discussion as well.

The idea, in principle, as an alternative to Bill's idea of using devfreq with notifiers, would be to implement a clk hw which possibly makes use of the OPP library and implements the DVFS functionality that is needed for each SoC.
To my knowledge, devfreq performs one task: implements an algorithm (typically one that loops/polls) and applies this heuristic towards a dvfs transition.
It is a policy layer, a high level layer. It should not be used as a lower-level mechanism. Please correct me if my understanding is wrong.
I think the very idea of the clk framework calling into devfreq is backwards. Ideally a devfreq driver would call clk_set_rate as part of its target callback. This is analogous to a cpufreq .target callback which calls clk_set_rate and regulator_set_voltage. Can you imagine the clock framework cross-calling into cpufreq when clk_set_rate is called? I think that would be strange.
I think that all of this discussion highlights the fact that there is a missing piece of infrastructure. It isn't devfreq or clock rate-change notifiers. It is that there is not a dvfs mechanism which neatly builds on top of these lower-level frameworks (clocks & regulators). Clearly some higher-level abstraction layer is needed.
As I mentioned in the v1 thread I'm hacking on something now and I'll try to get it on the list soon.
Regards, Mike
Kind regards Ulf Hansson
On Thu, Mar 21, 2013 at 3:36 PM, Mike Turquette mturquette@linaro.org wrote:
To my knowledge, devfreq performs one task: implements an algorithm (typically one that loops/polls) and applies this heuristic towards a dvfs transition.
It is a policy layer, a high level layer. It should not be used as a lower-level mechanism. Please correct me if my understanding is wrong.
I think the very idea of the clk framework calling into devfreq is backwards. Ideally a devfreq driver would call clk_set_rate as part of it's target callback. This is analogous to a cpufreq .target callback which calls clk_set_rate and regulator_set_voltage. Can you imagine the clock framework cross-calling into cpufreq when clk_set_rate is called? I think that would be strange.
I think that all of this discussion highlights the fact that there is a missing piece of infrastructure. It isn't devfreq or clock rate-change notifiers. It is that there is not a dvfs mechanism which neatly builds on top of these lower-level frameworks (clocks & regulators). Clearly some higher-level abstraction layer is needed.
I went through all of this on Tegra2. For a while I had a dvfs_set_rate api for drivers that needed to modify the voltage when they updated a clock, but I ended up dropping it. Drivers rarely care about the voltage, all they want to do is set their clock rate. The voltage necessary to support that clock is an implementation detail of the silicon that is irrelevant to the driver (I know TI liked to specify voltage/frequency combos for the blocks, but their chips still had to support running at a lower clock speed for the voltage than specified in the OPP because that case always occurs during a dvfs change).
For Tegra2, before clk_prepare/clk_unprepare existed, I hacked dvfs into the clk framework by using a mixture of mutex-locked clocks and spinlock-locked clocks. The main issue is accidentally locking the main clock locks recursively when the call path is clk -> dvfs -> regulator set -> i2c -> clk. I think if you could guarantee that clocks required for dvfs were always in the "prepared" state (maybe a flag on the clock, kind of like WQ_MEM_RECLAIM marks "special" workqueues, or just have the machine call clk_prepare), and that clk_prepare on an already-prepared clock avoided taking the mutex (an atomic op fast path plus a mutex slow path?), then the existing notifiers would be perfect for dvfs.
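Roughly what that fast path could look like (purely illustrative, not a patch): 'prepare_refs' is an imagined atomic_t in struct clk that mirrors the prepare count, and the parent refcounting done inside __clk_prepare() is glossed over. clk_unprepare() would mirror this, e.g. with atomic_dec_and_mutex_lock(), so only the final put takes the mutex.

int clk_prepare(struct clk *clk)
{
        int ret = 0;

        /* Fast path: the clock is already prepared, just take a reference. */
        if (atomic_inc_not_zero(&clk->prepare_refs))
                return 0;

        /* Slow path: possibly the first user, do the real work locked. */
        mutex_lock(&prepare_lock);
        if (atomic_read(&clk->prepare_refs) == 0) {
                ret = __clk_prepare(clk);
                if (!ret)
                        atomic_set(&clk->prepare_refs, 1);
        } else {
                /* someone else prepared it while we waited for the mutex */
                atomic_inc(&clk->prepare_refs);
        }
        mutex_unlock(&prepare_lock);

        return ret;
}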
Quoting Colin Cross (2013-03-21 17:06:25)
On Thu, Mar 21, 2013 at 3:36 PM, Mike Turquette mturquette@linaro.org wrote:
To my knowledge, devfreq performs one task: implements an algorithm (typically one that loops/polls) and applies this heuristic towards a dvfs transition.
It is a policy layer, a high level layer. It should not be used as a lower-level mechanism. Please correct me if my understanding is wrong.
I think the very idea of the clk framework calling into devfreq is backwards. Ideally a devfreq driver would call clk_set_rate as part of it's target callback. This is analogous to a cpufreq .target callback which calls clk_set_rate and regulator_set_voltage. Can you imagine the clock framework cross-calling into cpufreq when clk_set_rate is called? I think that would be strange.
I think that all of this discussion highlights the fact that there is a missing piece of infrastructure. It isn't devfreq or clock rate-change notifiers. It is that there is not a dvfs mechanism which neatly builds on top of these lower-level frameworks (clocks & regulators). Clearly some higher-level abstraction layer is needed.
I went through all of this on Tegra2. For a while I had a dvfs_set_rate api for drivers that needed to modify the voltage when they updated a clock, but I ended up dropping it. Drivers rarely care about the voltage, all they want to do is set their clock rate. The voltage necessary to support that clock is an implementation detail of the silicon that is irrelevant to the driver
Hi Colin,
I agree about voltage scaling being an implementation detail, but I think that drivers similarly do not care about enabling clocks, clock domains, power domains, voltage domains, etc. They just want to say "give me what I need to turn on and run", and "I'm done with that stuff now, lazily turn off if you want to". Runtime PM gives drivers that abstraction layer today.
There is a need for a similar abstraction layer for dvfs or, more generically, an abstraction layer for performance. It is true that a driver doesn't care about scaling its voltage, but it also might not care that its functional clock is changing rate, or that memory needs to run faster, or that an async bridge or interface clock needs to change its rate.
These are also implementation details that are common in dvfs transitions but that the driver surely doesn't care about. (Note that obviously some drivers care specifically about clocks, such as multimedia codecs.)
(I know TI liked to specify voltage/frequency combos for the blocks, but their chips still had to support running at a lower clock speed for the voltage than specified in the OPP because that case always occurs during a dvfs change).
I don't see the relevance to this discussion.
For Tegra2, before clk_prepare/clk_unprepare existed, I hacked dvfs into the clk framework by using a mixture of mutex locked clocks and spinlock locked clocks. The main issue is accidentally recursive locking the main clock locks when the call path is clk->dvfs->regulator set->i2c->clk. I think if you could guarantee that clocks required for dvfs were always in the "prepared" state (maybe a flag on the clock, kind of like WQ_MEM_RECLAIM marks "special" workqueues, or just have the machine call clk_prepare), and that clk_prepare on an already-prepared clock avoided taking the mutex (atomic op fastpath plus mutex slow path?), then the existing notifiers would be perfect for dvfs.
The clk reentrancy patchset[1] solves the particular locking problem you're referring to.
The bigger issue that worries me about using clock rate-change notifiers to implement a dvfs transition is that the mechanism may not be powerful enough, or may be very messy.
For instance consider OMAP's voltage domain dependencies. A straightforward example is running the MPU fast, which requires DDR to run fast. So a call to clk_set_rate(cpu_clk) will shoot off PRE_RATE_CHANGE notifiers that call clk_set_rate(ddr_clk). Both of those calls to clk_set_rate will also result in notifiers that each call regulator_set_voltage on their respective regulators.
Since there is no user tracking going on in the clock framework, all it takes is any other actor in the system to call clk_set_rate(ddr_clk) and overwrite what the mpu_clk did. For instance a bluetooth file transfer needs CORE to run fast for some 3Mbps transfer, and then ramps clock rates back down (including the ddr_clk rate) after it completes, even while the MPU is still running fast. So now user requests have to be tracked and compared to prevent that sort of thing from happening. Should all of that user-tracking stuff end up in the clock framework? I'm not so sure.
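For concreteness, the kind of rate-change notifier being discussed might look like the sketch below (OMAP-flavoured but entirely hypothetical: 'vdd_mpu', 'ddr_clk' and the two lookup helpers are invented). Nothing in it records who asked for the ddr_clk rate, which is exactly the missing user tracking, and the nested clk_set_rate() call also depends on the reentrancy work mentioned above.

/* Hypothetical handles and lookup helpers for the example. */
static struct regulator *vdd_mpu;
static struct clk *ddr_clk;

static int mpu_uv_for(unsigned long mpu_rate)
{
        return 1100000;                 /* placeholder */
}

static unsigned long ddr_rate_for(unsigned long mpu_rate)
{
        return mpu_rate / 2;            /* placeholder */
}

static int mpu_dvfs_nb_cb(struct notifier_block *nb, unsigned long event,
                          void *data)
{
        struct clk_notifier_data *cnd = data;

        if (event == PRE_RATE_CHANGE && cnd->new_rate > cnd->old_rate) {
                /* scaling up: raise the voltage and DDR before the MPU rate */
                regulator_set_voltage(vdd_mpu, mpu_uv_for(cnd->new_rate),
                                      INT_MAX);
                clk_set_rate(ddr_clk, ddr_rate_for(cnd->new_rate));
        } else if (event == POST_RATE_CHANGE &&
                   cnd->new_rate < cnd->old_rate) {
                /* scaling down: relax DDR and the voltage afterwards */
                clk_set_rate(ddr_clk, ddr_rate_for(cnd->new_rate));
                regulator_set_voltage(vdd_mpu, mpu_uv_for(cnd->new_rate),
                                      INT_MAX);
        }

        return NOTIFY_OK;
}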
Anyways I'm still looking at the voltage scaling via notifiers thing and trying to understand the limits of that design choice before everyone converts over to it and there is no turning back.
Regards, Mike
On 03/28/2013 04:01 PM, Mike Turquette wrote:
Quoting Colin Cross (2013-03-21 17:06:25)
On Thu, Mar 21, 2013 at 3:36 PM, Mike Turquette mturquette@linaro.org wrote:
To my knowledge, devfreq performs one task: implements an algorithm (typically one that loops/polls) and applies this heuristic towards a dvfs transition.
It is a policy layer, a high level layer. It should not be used as a lower-level mechanism. Please correct me if my understanding is wrong.
I think the very idea of the clk framework calling into devfreq is backwards. Ideally a devfreq driver would call clk_set_rate as part of it's target callback. This is analogous to a cpufreq .target callback which calls clk_set_rate and regulator_set_voltage. Can you imagine the clock framework cross-calling into cpufreq when clk_set_rate is called? I think that would be strange.
I think that all of this discussion highlights the fact that there is a missing piece of infrastructure. It isn't devfreq or clock rate-change notifiers. It is that there is not a dvfs mechanism which neatly builds on top of these lower-level frameworks (clocks & regulators). Clearly some higher-level abstraction layer is needed.
I went through all of this on Tegra2. For a while I had a dvfs_set_rate api for drivers that needed to modify the voltage when they updated a clock, but I ended up dropping it. Drivers rarely care about the voltage, all they want to do is set their clock rate. The voltage necessary to support that clock is an implementation detail of the silicon that is irrelevant to the driver
Hi Colin,
I agree about voltage scaling being an implementation detail, but I think that drivers similarly do not care about enabling clocks, clock domains, power domains, voltage domains, etc. They just want to say "give me what I need to turn on and run", and "I'm done with that stuff now, lazily turn off if you want to". Runtime PM gives drivers that abstraction layer today.
I don't understand how runtime PM gives this abstraction today. All the implementations of runtime PM that I've seen involve the driver itself implementing its own runtime PM callbacks, and explicitly managing the clocks itself. I don't see how that hides those details from the driver. Have I been looking at runtime PM implementations that aren't implemented philosophically correctly?
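For reference, the pattern being described typically looks something like this (a generic, hypothetical 'foo' device): the driver's own runtime PM callbacks still name the clock explicitly, which is why the clock handling does not feel hidden from the driver.

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/pm_runtime.h>

/* Hypothetical driver-private data. */
struct foo_priv {
        struct clk *clk;
};

static int foo_runtime_suspend(struct device *dev)
{
        struct foo_priv *priv = dev_get_drvdata(dev);

        /* the driver itself turns its clock off... */
        clk_disable_unprepare(priv->clk);
        return 0;
}

static int foo_runtime_resume(struct device *dev)
{
        struct foo_priv *priv = dev_get_drvdata(dev);

        /* ...and back on again on next use */
        return clk_prepare_enable(priv->clk);
}

static const struct dev_pm_ops foo_pm_ops = {
        SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};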
On Thu, Mar 28, 2013 at 11:01:09PM +0100, Mike Turquette wrote:
Quoting Colin Cross (2013-03-21 17:06:25)
On Thu, Mar 21, 2013 at 3:36 PM, Mike Turquette mturquette@linaro.org wrote:
To my knowledge, devfreq performs one task: implements an algorithm (typically one that loops/polls) and applies this heuristic towards a dvfs transition.
It is a policy layer, a high level layer. It should not be used as a lower-level mechanism. Please correct me if my understanding is wrong.
I think the very idea of the clk framework calling into devfreq is backwards. Ideally a devfreq driver would call clk_set_rate as part of it's target callback. This is analogous to a cpufreq .target callback which calls clk_set_rate and regulator_set_voltage. Can you imagine the clock framework cross-calling into cpufreq when clk_set_rate is called? I think that would be strange.
I think that all of this discussion highlights the fact that there is a missing piece of infrastructure. It isn't devfreq or clock rate-change notifiers. It is that there is not a dvfs mechanism which neatly builds on top of these lower-level frameworks (clocks & regulators). Clearly some higher-level abstraction layer is needed.
I went through all of this on Tegra2. For a while I had a dvfs_set_rate api for drivers that needed to modify the voltage when they updated a clock, but I ended up dropping it. Drivers rarely care about the voltage, all they want to do is set their clock rate. The voltage necessary to support that clock is an implementation detail of the silicon that is irrelevant to the driver
Hi Colin,
I agree about voltage scaling being an implementation detail, but I think that drivers similarly do not care about enabling clocks, clock domains, power domains, voltage domains, etc. They just want to say "give me what I need to turn on and run", and "I'm done with that stuff now, lazily turn off if you want to". Runtime PM gives drivers that abstraction layer today.
There is a need for a similar abstraction layer for dvfs or, more generically, an abstraction layer for performance. It is true that a driver doesn't care about scaling it's voltage, but it also might not care that its functional clock is changing rate, or that memory needs to run faster, or that an async bridge or interface clock needs to change it's rate.
Drivers are the ones which need to indicate their performance constraints. A single voltage domain can contain many blocks, and the clock can also be shared (e.g. all graphics blocks on Tegra20 and Tegra30 run from the same PLL; even though there is a per-block post-divider, we still want to vary the PLL depending on load). The only realistic way of stating this kind of constraint is using the clock rate, I think. I don't see which other 'metric' would be useful across different IP blocks.
Cheers,
Peter.
On Wed, 2013-03-20 at 22:47 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 21:39:44)
On Wed, 2013-03-20 at 11:31 +0800, Mike Turquette wrote:
Quoting Bill Huang (2013-03-19 19:55:49)
On Wed, 2013-03-20 at 01:01 +0800, Mike Turquette wrote:
I'm still not sure about this approach. Based on feedback I got from Linaro Connect I am not convinced that scaling voltage through clk rate-change notifiers is the right way to go. As I understand it this patch only exists for that single purpose, so if the voltage-notifier idea gets dropped then I will not take this patch in.
Thanks Mike. Actually we won't use your "clk: notifier handler for dynamic voltage scaling" patch; instead we are trying to port our DVFS onto the non-CPU DVFS framework "devfreq", which will need to hook these notifiers. Without the clock notifiers being extended, the framework is useless for us, since we cannot rely on polling; polling does not happen in real time. If extending the notifiers turns out not to be possible, then I think our only choice would be to give up on "devfreq" and implement DVFS in Tegra's "clk_hw".
I'm familiar with the devfreq framework. Can you explain further how you plan to use devfreq with the clock notifiers? What does the call graph look like?
The call graph will look like this: when a rate change (including enable and disable) happens on any clock DVFS is interested in -> the Tegra devfreq clock notifier is called -> it calls into update_devfreq if needed -> update_devfreq calls .get_target_freq of the Tegra "devfreq_governor" -> which then calls .target of the Tegra devfreq_dev_profile to set the voltage, and we are done. More details are below.

We'll create a devfreq driver for the Tegra VDD_CORE rail. The safe voltage level of the rail is determined by tens of clocks (2d, 3d, mpe, ...); the frequency ladders of those clocks are defined in DT, and the operating points for VDD_CORE are also declared in DT, where the frequency is more of a virtual clock, or index:

operating-points = <
        /* virtual-kHz  uV */
        0  950000
        1 1000000
        2 1050000
        3 1100000
        4 1150000
        5 1200000
        6 1250000
        7 1300000
        8 1350000
>;

We'll register a Tegra governor whose .get_target_freq callback is the function that determines the overall frequency the rail can go to, and the .target callback in "devfreq_dev_profile" will be the function that really does the voltage scaling.

The Tegra devfreq driver will register clock notifiers on all the clocks it is interested in, and hence when any of those clocks changes rate, is enabled, or is disabled, we'll call update_devfreq from the notifier.
Thank you for the explanation. Do you plan to use actual devfreq governors (like simple-ondemand, or something custom) for changing OPPs, or do you just plan to use the clock framework as a trigger for DVFS?
I think the existing governors are all polling based, which does not work for us. We have only a single voltage rail and tens of clocks which are used by different device drivers; the voltage has to be boosted before a frequency increases, and it can only be lowered once the rates of all the interested clocks allow it, or we might crash. Clock notifiers seem to be the only way to build a (real-time) association with all the interested device drivers, which might change clock rates either through runtime PM or a similar mechanism.

We do use OPP in our devfreq driver; we just need to do some extra work checking all the interested clocks to make sure we choose the correct operating point for the power rail.
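That extra work could be folded into the governor callback along these lines. This is a sketch only: 'core_clks', its fields and rate_to_core_level() are invented, and the result is the "virtual kHz" index matched against the operating-points table quoted earlier in the thread.

#include <linux/clk.h>
#include <linux/devfreq.h>
#include <linux/kernel.h>

/* Hypothetical table of the clocks that constrain VDD_CORE. */
struct core_clk_entry {
        struct clk *clk;
        bool prepared;          /* tracked from the prepare/unprepare events */
};

static struct core_clk_entry core_clks[16];

/* Hypothetical mapping from a clock rate to the rail level it needs. */
static unsigned long rate_to_core_level(struct core_clk_entry *e,
                                        unsigned long rate)
{
        return rate / 100000000;        /* placeholder */
}

static int tegra_core_get_target_freq(struct devfreq *df, unsigned long *freq)
{
        unsigned long level = 0;
        int i;

        for (i = 0; i < ARRAY_SIZE(core_clks); i++) {
                struct core_clk_entry *e = &core_clks[i];

                /* an unprepared clock places no constraint on the rail */
                if (!e->clk || !e->prepared)
                        continue;

                level = max(level,
                            rate_to_core_level(e, clk_get_rate(e->clk)));
        }

        *freq = level;  /* the "virtual kHz" index from the OPP table */
        return 0;
}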