Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
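To make the device-tree point concrete, here is a minimal sketch of how a driver could pick up a per-cpu exit latency from the cpu's DT node. The "exit-latency-us" property name and the helper are assumptions, no binding has been agreed yet:

#include <linux/of.h>
#include <linux/cpuidle.h>

/* Sketch only: override one state's exit latency from the device tree.
 * The "exit-latency-us" property is hypothetical, not an agreed binding. */
static int cpuidle_latency_from_dt(struct device_node *cpu_node,
                                   struct cpuidle_state *state)
{
        u32 latency_us;

        if (of_property_read_u32(cpu_node, "exit-latency-us", &latency_us))
                return -EINVAL; /* no property: keep the driver default */

        state->exit_latency = latency_us;
        return 0;
}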
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
These four steps impact all the architectures. I began the factoring-out / cleanup work [4], which has been accepted upstream, and I proposed some further modifications [5], but I got very few answers.
Patch reviews are very slow and happen at the last minute, right at the merge window, which makes upstreaming code very difficult. This is not a reproach, it is just how it is, and I would like to propose a solution.
I propose to host a cpuidle-next tree collecting all these modifications, which people can send patches against, preventing last-minute conflicts; perhaps Lenb will agree to pull from this tree. In the meantime, the tree will be part of linux-next, so the patches will be more widely tested and can be fixed earlier.
Thanks -- Daniel
[1] http://lwn.net/Articles/491257/
[2] http://lwn.net/Articles/464808/
[3] http://summit.linaro.org/
[4] http://www.mail-archive.com/linux-omap@vger.kernel.org/msg67033.html, http://www.spinics.net/lists/linux-pm/msg27330.html, http://comments.gmane.org/gmane.linux.ports.arm.omap/76311, http://www.digipedia.pl/usenet/thread/18885/11795/
[5] https://lkml.org/lkml/2012/6/8/375
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
On huge systems, especially servers, doing cpuidle registration on a per-cpu basis creates a big overhead, so global registration was introduced in the first place.
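To illustrate the trade-off with a simplified sketch (3.4-era structures; the driver name and enter callback below are made up): with global registration the state table lives once in the driver, while a device, carrying no state parameters, is still registered per cpu:

#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/cpuidle.h>

static int my_enter(struct cpuidle_device *dev,
                    struct cpuidle_driver *drv, int index)
{
        /* enter the hardware idle state here */
        return index;
}

/* one state table, shared by every cpu */
static struct cpuidle_driver my_driver = {
        .name        = "my_idle",       /* hypothetical driver */
        .owner       = THIS_MODULE,
        .state_count = 1,
        .states      = {
                { .name = "C1", .exit_latency = 1, .enter = my_enter },
        },
};

static DEFINE_PER_CPU(struct cpuidle_device, my_devices);

static int __init my_idle_init(void)
{
        int cpu, ret;

        ret = cpuidle_register_driver(&my_driver); /* once, globally */
        if (ret)
                return ret;

        /* devices are still per cpu, but carry no state parameters */
        for_each_possible_cpu(cpu) {
                struct cpuidle_device *dev = &per_cpu(my_devices, cpu);

                dev->cpu = cpu;
                cpuidle_register_device(dev);
        }
        return 0;
}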
Why not have it as a configurable option or similar? Architectures with uniform cpuidle state parameters can continue to use global registration; the others get an API to register latencies per cpu, as proposed. We can definitely work out the best way to implement it.
These four steps impact all the architectures. I began the factoring-out / cleanup work [4], which has been accepted upstream, and I proposed some further modifications [5], but I got very few answers.
Patch reviews are very slow and happen at the last minute, right at the merge window, which makes upstreaming code very difficult. This is not a reproach, it is just how it is, and I would like to propose a solution.
I propose to host a cpuidle-next tree collecting all these modifications, which people can send patches against, preventing last-minute conflicts; perhaps Lenb will agree to pull from this tree. In the meantime, the tree will be part of linux-next, so the patches will be more widely tested and can be fixed earlier.
Thanks -- Daniel
[1] http://lwn.net/Articles/491257/
[2] http://lwn.net/Articles/464808/
[3] http://summit.linaro.org/
[4] http://www.mail-archive.com/linux-omap@vger.kernel.org/msg67033.html, http://www.spinics.net/lists/linux-pm/msg27330.html, http://comments.gmane.org/gmane.linux.ports.arm.omap/76311, http://www.digipedia.pl/usenet/thread/18885/11795/
Cheers, Deepthi
On 06/18/2012 01:54 PM, Deepthi Dharwar wrote:
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
On huge systems, especially servers, doing cpuidle registration on a per-cpu basis creates a big overhead, so global registration was introduced in the first place.
Why not have it as a configurable option or similar? Architectures with uniform cpuidle state parameters can continue to use global registration; the others get an API to register latencies per cpu, as proposed. We can definitely work out the best way to implement it.
Absolutely, this is one reason I think adding a function:
cpuidle_register_latencies(int cpu, struct cpuidle_latencies);
makes sense if it is used only for cpus with different latencies. The other architectures will be left untouched.
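Purely as a sketch of that idea (struct cpuidle_latencies was not defined in the thread, so the layout below, the pointer argument, and the cpu_is_big() helper are all invented):

/* Invented layout: one latency set per cpu, one entry per state. */
struct cpuidle_latencies {
        unsigned int exit_latency[CPUIDLE_STATE_MAX];     /* in us */
        unsigned int target_residency[CPUIDLE_STATE_MAX]; /* in us */
};

int cpuidle_register_latencies(int cpu, struct cpuidle_latencies *lat);

/* e.g. a big.LITTLE driver would override only the big cluster,
 * while the little cpus keep the driver defaults: */
static void register_big_latencies(struct cpuidle_latencies *big_lat)
{
        int cpu;

        for_each_possible_cpu(cpu)
                if (cpu_is_big(cpu)) /* hypothetical topology helper */
                        cpuidle_register_latencies(cpu, big_lat);
}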
IMHO, before adding more functionality to cpuidle, we should clean up and consolidate the code. For example, there is a dependency between acpi_idle and intel_idle which could be resolved with notifiers; there is Intel-specific code in cpuidle.c and cpuidle.h; and cpu_relax was introduced into cpuidle although it is related to x86, not the cpuidle core; etc.
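As a rough sketch of the notifier idea for the acpi_idle/intel_idle dependency (the chain, event code, and function names are invented for illustration):

#include <linux/notifier.h>

/* Invented: a chain on which an idle driver announces that it has
 * claimed the cpus, so acpi_idle can back off without a compile-time
 * dependency on intel_idle. */
#define CPUIDLE_DRIVER_CLAIMED 1 /* invented event code */

static BLOCKING_NOTIFIER_HEAD(cpuidle_driver_chain);

int cpuidle_driver_notifier_register(struct notifier_block *nb)
{
        return blocking_notifier_chain_register(&cpuidle_driver_chain, nb);
}

/* intel_idle would call this after a successful probe; acpi_idle,
 * having registered a notifier_block, then skips its own registration. */
static void cpuidle_driver_announce(void)
{
        blocking_notifier_call_chain(&cpuidle_driver_chain,
                                     CPUIDLE_DRIVER_CLAIMED, NULL);
}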
Cleaning up the code will help move the different bits from the arch-specific code to the core code and reduce the impact of modifications to the core. That should let a common pattern emerge and will facilitate future modifications (per-cpu latencies being one of them).
That will be a lot of changes, and this is why I proposed putting in place a cpuidle-next tree, in order to consolidate all the cpuidle modifications people are willing to see upstream and to provide better testing.
These four steps impact all the architectures. I began the factoring-out / cleanup work [4], which has been accepted upstream, and I proposed some further modifications [5], but I got very few answers.
Patch reviews are very slow and happen at the last minute, right at the merge window, which makes upstreaming code very difficult. This is not a reproach, it is just how it is, and I would like to propose a solution.
I propose to host a cpuidle-next tree collecting all these modifications, which people can send patches against, preventing last-minute conflicts; perhaps Lenb will agree to pull from this tree. In the meantime, the tree will be part of linux-next, so the patches will be more widely tested and can be fixed earlier.
Thanks -- Daniel
[1] http://lwn.net/Articles/491257/
[2] http://lwn.net/Articles/464808/
[3] http://summit.linaro.org/
[4] http://www.mail-archive.com/linux-omap@vger.kernel.org/msg67033.html, http://www.spinics.net/lists/linux-pm/msg27330.html, http://comments.gmane.org/gmane.linux.ports.arm.omap/76311, http://www.digipedia.pl/usenet/thread/18885/11795/
Cheers, Deepthi
On Mon, Jun 18, 2012 at 02:35:42PM +0200, Daniel Lezcano wrote:
On 06/18/2012 01:54 PM, Deepthi Dharwar wrote:
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
On huge systems, especially servers, doing cpuidle registration on a per-cpu basis creates a big overhead, so global registration was introduced in the first place.
Why not have it as a configurable option or similar? Architectures with uniform cpuidle state parameters can continue to use global registration; the others get an API to register latencies per cpu, as proposed. We can definitely work out the best way to implement it.
Absolutely, this is one reason I think adding a function:
cpuidle_register_latencies(int cpu, struct cpuidle_latencies);
makes sense if it is used only for cpus with different latencies. The other architectures will be left untouched.
IMHO, before adding more functionality to cpuidle, we should clean up and consolidate the code. For example, there is a dependency between acpi_idle and intel_idle which could be resolved with notifiers; there is Intel-specific code in cpuidle.c and cpuidle.h; and cpu_relax was introduced into cpuidle although it is related to x86, not the cpuidle core; etc.
Cleaning up the code will help move the different bits from the arch-specific code to the core code and reduce the impact of modifications to the core. That should let a common pattern emerge and will facilitate future modifications (per-cpu latencies being one of them).
That will be a lot of changes, and this is why I proposed putting in place a cpuidle-next tree, in order to consolidate all the cpuidle modifications people are willing to see upstream and to provide better testing.
Sounds like a good idea. Do you have something like that already?
Thanks,
Peter.
On 06/18/2012 02:53 PM, Peter De Schrijver wrote:
On Mon, Jun 18, 2012 at 02:35:42PM +0200, Daniel Lezcano wrote:
On 06/18/2012 01:54 PM, Deepthi Dharwar wrote:
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
On huge systems, especially servers, doing cpuidle registration on a per-cpu basis creates a big overhead, so global registration was introduced in the first place.
Why not have it as a configurable option or similar? Architectures with uniform cpuidle state parameters can continue to use global registration; the others get an API to register latencies per cpu, as proposed. We can definitely work out the best way to implement it.
Absolutely, this is one reason I think adding a function:
cpuidle_register_latencies(int cpu, struct cpuidle_latencies);
makes sense if it is used only for cpus with different latencies. The other architectures will be left untouched.
IMHO, before adding more functionality to cpuidle, we should clean up and consolidate the code. For example, there is a dependency between acpi_idle and intel_idle which could be resolved with notifiers; there is Intel-specific code in cpuidle.c and cpuidle.h; and cpu_relax was introduced into cpuidle although it is related to x86, not the cpuidle core; etc.
Cleaning up the code will help move the different bits from the arch-specific code to the core code and reduce the impact of modifications to the core. That should let a common pattern emerge and will facilitate future modifications (per-cpu latencies being one of them).
That will be a lot of changes, and this is why I proposed putting in place a cpuidle-next tree, in order to consolidate all the cpuidle modifications people are willing to see upstream and to provide better testing.
Sounds like a good idea. Do you have something like that already?
Yes, but I need to clean up the tree first.
http://git.linaro.org/gitweb?p=people/dlezcano/linux-next.git;a=summary
Hi Daniel,
On Mon, Jun 18, 2012 at 2:55 PM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
On 06/18/2012 02:53 PM, Peter De Schrijver wrote:
On Mon, Jun 18, 2012 at 02:35:42PM +0200, Daniel Lezcano wrote:
On 06/18/2012 01:54 PM, Deepthi Dharwar wrote:
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
That makes sense, especially if you can refactor _and_ add new functionality at the same time.
On huge systems, especially servers, doing cpuidle registration on a per-cpu basis creates a big overhead, so global registration was introduced in the first place.
Why not have it as a configurable option or similar? Architectures with uniform cpuidle state parameters can continue to use global registration; the others get an API to register latencies per cpu, as proposed. We can definitely work out the best way to implement it.
Absolutely, this is one reason I think adding a function:
cpuidle_register_latencies(int cpu, struct cpuidle_latencies);
makes sense if it is used only for cpus with different latencies. The other architectures will be left untouched.
Do you mean by keeping the parameters in the cpuidle_driver struct and not calling the new API? That looks great.
IMHO, before adding more functionality to cpuidle, we should clean up and consolidate the code. For example, there is a dependency between acpi_idle and intel_idle which could be resolved with notifiers; there is Intel-specific code in cpuidle.c and cpuidle.h; and cpu_relax was introduced into cpuidle although it is related to x86, not the cpuidle core; etc.
Cleaning up the code will help move the different bits from the arch-specific code to the core code and reduce the impact of modifications to the core. That should let a common pattern emerge and will facilitate future modifications (per-cpu latencies being one of them).
That will be a lot of changes, and this is why I proposed putting in place a cpuidle-next tree, in order to consolidate all the cpuidle modifications people are willing to see upstream and to provide better testing.
Nice! The new tree needs to be as close as possible to mainline, though. Do you have plans for that? Do not hesitate to ask for help on OMAPs!
Regards, Jean
Sounds like a good idea. Do you have something like that already?
Yes, but I need to clean up the tree first.
http://git.linaro.org/gitweb?p=people/dlezcano/linux-next.git;a=summary
On 06/18/2012 03:06 PM, Jean Pihet wrote:
Hi Daniel,
On Mon, Jun 18, 2012 at 2:55 PM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
On 06/18/2012 02:53 PM, Peter De Schrijver wrote:
On Mon, Jun 18, 2012 at 02:35:42PM +0200, Daniel Lezcano wrote:
On 06/18/2012 01:54 PM, Deepthi Dharwar wrote:
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
That makes sense, especially if you can refactor _and_ add new functionality at the same time.
Yes :)
On huge systems, especially servers, doing cpuidle registration on a per-cpu basis creates a big overhead, so global registration was introduced in the first place.
Why not have it as a configurable option or similar? Architectures with uniform cpuidle state parameters can continue to use global registration; the others get an API to register latencies per cpu, as proposed. We can definitely work out the best way to implement it.
Absolutely, this is one reason I think adding a function:
cpuidle_register_latencies(int cpu, struct cpuidle_latencies);
makes sense if it is used only for cpus with different latencies. The other architectures will be left untouched.
Do you mean by keeping the parameters in the cpuidle_driver struct and not calling the new API?
Yes, right.
That looks great.
IMHO, before adding more functionality to cpuidle, we should clean up and consolidate the code. For example, there is a dependency between acpi_idle and intel_idle which could be resolved with notifiers; there is Intel-specific code in cpuidle.c and cpuidle.h; and cpu_relax was introduced into cpuidle although it is related to x86, not the cpuidle core; etc.
Cleaning up the code will help move the different bits from the arch-specific code to the core code and reduce the impact of modifications to the core. That should let a common pattern emerge and will facilitate future modifications (per-cpu latencies being one of them).
That will be a lot of changes, and this is why I proposed putting in place a cpuidle-next tree, in order to consolidate all the cpuidle modifications people are willing to see upstream and to provide better testing.
Nice! The new tree needs to be as close as possible to mainline, though. Do you have plans for that?
Yes. AFAIU, when I ask for cpuidle-next to be included in linux-next, I have to base the tree on top of Linus's tree, and it will be pulled every day.
That will allow conflicts and bogus commits to be detected early, especially for the numerous x86 architecture variants and cpuidle combinations.
For the moment I have local commits in my tree, and I am waiting for feedback from the lists about the RFC I sent for some cpuidle core changes.
I will create a clean new cpuidle-next tree.
Do not hesitate to ask for help on OMAPs!
Cool thanks, will do :)
-- Daniel
Regards, Jean
Sounds like a good idea. Do you have something like that already?
Yes, but I need to clean up the tree first.
http://git.linaro.org/gitweb?p=people/dlezcano/linux-next.git;a=summary
Daniel,
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
These four steps impact all the architectures. I began the factoring-out / cleanup work [4], which has been accepted upstream, and I proposed some further modifications [5], but I got very few answers.
Another thing we discussed is bringing the CPU cluster/package notion into the core idle code. Coupled idle did bring that idea to some extent, but it can be further extended and abstracted. ATM, most of the work is done in the back-end cpuidle drivers, which could easily be abstracted if the "cluster idle" notion were supported in the core layer.
Per-CPU __and__ per operating point (OPP) latency is something which can also be added to the list. From the discussion, I remember it matters for a few SoCs and could be beneficial.
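As a sketch of what per-OPP latencies could look like, given that wakeup typically costs more at a lower operating point (all names below are invented):

/* Invented: exit latency as a function of the current OPP. */
struct cpuidle_opp_latency {
        unsigned long freq_hz;         /* OPP, identified by cpu frequency */
        unsigned int  exit_latency_us; /* state exit latency at that OPP */
};

/* example table for a hypothetical deep state */
static const struct cpuidle_opp_latency c2_latencies[] = {
        { 1000000000, 100 },    /* at 1 GHz: fast wakeup */
        {  350000000, 180 },    /* at 350 MHz: slower wakeup */
};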
Regards Santosh
On Mon, Jun 18, 2012 at 7:00 PM, a0393909 santosh.shilimkar@ti.com wrote:
Daniel,
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
These four steps impact all the architectures. I began the factoring-out / cleanup work [4], which has been accepted upstream, and I proposed some further modifications [5], but I got very few answers.
Another thing we discussed is bringing the CPU cluster/package notion into the core idle code. Coupled idle did bring that idea to some extent, but it can be further extended and abstracted. ATM, most of the work is done in the back-end cpuidle drivers, which could easily be abstracted if the "cluster idle" notion were supported in the core layer.
Are you considering "cluster idle" as one of the topics?
Regards Santosh
On 06/25/2012 02:58 PM, Shilimkar, Santosh wrote:
On Mon, Jun 18, 2012 at 7:00 PM, a0393909 santosh.shilimkar@ti.com wrote:
Daniel,
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
These four steps impact all the architectures. I began the factoring-out / cleanup work [4], which has been accepted upstream, and I proposed some further modifications [5], but I got very few answers.
Another thing we discussed is bringing the CPU cluster/package notion into the core idle code. Coupled idle did bring that idea to some extent, but it can be further extended and abstracted. ATM, most of the work is done in the back-end cpuidle drivers, which could easily be abstracted if the "cluster idle" notion were supported in the core layer.
Are you considering "cluster idle" as one of the topics?
Yes, absolutely. ATM, I am looking at refactoring the cpuidle code and cleaning up wherever possible.
On Mon, Jun 25, 2012 at 6:40 PM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
On 06/25/2012 02:58 PM, Shilimkar, Santosh wrote:
On Mon, Jun 18, 2012 at 7:00 PM, a0393909 santosh.shilimkar@ti.com wrote:
Daniel,
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu latencies. We had a discussion about this patchset because it reverses the modifications Deepthi made some months ago [2], and we may want to provide a different implementation.
The Linaro Connect [3] event brought us the opportunity to meet people involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, per-cpu latencies for cpuidle are vital.
Also, the SoC vendors would like the ability to tune their cpu latencies through the device tree.
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits to core code
3. make the cpuidle_state structure contain only data
4. add an API to register latencies per cpu
These four steps impact all the architectures. I began the factoring-out / cleanup work [4], which has been accepted upstream, and I proposed some further modifications [5], but I got very few answers.
Another thing we discussed is bringing the CPU cluster/package notion into the core idle code. Coupled idle did bring that idea to some extent, but it can be further extended and abstracted. ATM, most of the work is done in the back-end cpuidle drivers, which could easily be abstracted if the "cluster idle" notion were supported in the core layer.
Are you considering "cluster idle" as one of the topics?
Yes, absolutely. ATM, I am looking at refactoring the cpuidle code and cleaning up wherever possible.
Cool !!
regards Santosh
Hi Stephen,
we discussed last week putting in place a tree grouping the cpuidle modifications [1]. Is it possible to add the tree?
git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
For the moment, it contains Colin Cross's coupled cpuidle states.
Thanks in advance -- Daniel
Hi Daniel,
On Mon, 25 Jun 2012 15:27:03 +0200 Daniel Lezcano daniel.lezcano@linaro.org wrote:
we discussed last week putting in place a tree grouping the cpuidle modifications [1]. Is it possible to add the tree?
git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
For the moment, it contains Colin Cross's coupled cpuidle states.
Added from today.
Thanks for adding your subsystem tree as a participant of linux-next. As you may know, this is not a judgment of your code. The purpose of linux-next is for integration testing and to lower the impact of conflicts between subsystems in the next merge window.
You will need to ensure that the patches/commits in your tree/series have been:
* submitted under GPL v2 (or later) and include the Contributor's Signed-off-by,
* posted to the relevant mailing list,
* reviewed by you (or another maintainer of your subsystem tree),
* successfully unit tested, and
* destined for the current or next Linux merge window.
Basically, this should be just what you would send to Linus (or ask him to fetch). It is allowed to be rebased if you deem it necessary.
On Mon, Jun 25, 2012 at 3:27 PM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
we discussed last week putting in place a tree grouping the cpuidle modifications [1]. Is it possible to add the tree?
git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
Thanks for doing this.
Since MAINTAINERS is lacking a listed maintainer for cpuidle, are you also going to add yourself as maintainer and list this tree in that file, or is this a one-time exercise?
Yours, Linus Walleij
On 07/02/2012 11:09 AM, Linus Walleij wrote:
On Mon, Jun 25, 2012 at 3:27 PM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
we discussed last week putting in place a tree grouping the cpuidle modifications [1]. Is it possible to add the tree?
git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
Thanks for doing this.
Since MAINTAINERS is lacking a listed maintainer for cpuidle, are you also going to add yourself as maintainer and list this tree in that file, or is this a one-time exercise?
I will be glad to do that if Len and Rafael agree on that.
On Monday, July 02, 2012, Daniel Lezcano wrote:
On 07/02/2012 11:09 AM, Linus Walleij wrote:
On Mon, Jun 25, 2012 at 3:27 PM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
we discussed last week putting in place a tree grouping the cpuidle modifications [1]. Is it possible to add the tree?
git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
Thanks for doing this.
Since MAINTAINERS is lacking a listed maintainer for cpuidle, are you also going to add yourself as maintainer and list this tree in that file, or is this a one-time exercise?
I will be glad to do that if Len and Rafael agree on that.
Len Brown has been a cpuidle maintainer for some time now. Moreover, he's been taking patches, but Linus refused to pull his entire tree during the last merge window (as you probably know). I honestly don't think this is a good enough reason for replacing him as a cpuidle maintainer by force.
So, you should ask Len whether or not he's willing to pass the cpuidle maintenance to someone else.
I know that Len hasn't been responsive recently, but I also know that he _does_ respond to inquiries sent directly to him.
Thanks, Rafael
On 07/02/2012 09:49 PM, Rafael J. Wysocki wrote:
On Monday, July 02, 2012, Daniel Lezcano wrote:
On 07/02/2012 11:09 AM, Linus Walleij wrote:
On Mon, Jun 25, 2012 at 3:27 PM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
we discussed last week putting in place a tree grouping the cpuidle modifications [1]. Is it possible to add the tree?
git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
Thanks for doing this.
Since MAINTAINERS is lacking a listed maintainer for cpuidle, are you also going to add yourself as maintainer and list this tree in that file, or is this a one-time exercise?
I will be glad to do that if Len and Rafael agree on that.
Len Brown has been a cpuidle maintainer for some time now. Moreover, he's been taking patches, but Linus refused to pull his entire tree during the last merge window (as you probably know). I honestly don't think this is a good enough reason for replacing him as a cpuidle maintainer by force.
So, you should ask Len whether or not he's willing to pass the cpuidle maintenance to someone else.
No, no. You are misunderstanding what I am proposing. I don't want to replace Len; I just want to act as a "proxy". I understand a maintainer can be busy and may not have enough time to take care of the subsystem he maintains for a while. Trust me, I fully understand that :)
As there are a lot of cpuidle modifications, I am proposing to take the patches once they are acked, creating a consolidated tree, providing better integration for cpuidle and wider testing, preventing conflicts, and facilitating Len's work if he agrees to pull from this tree.
If it makes sense to add myself to the MAINTAINERS file as a co-maintainer (understand: also send the patches to me, so I can take care of them if Len does not respond), I am OK with that.
It is just about helping :)
I know that Len hasn't been responsive recently, but I also know that he _does_ respond to inquiries sent directly to him.
Do you mean to his Intel address?
Thanks -- Daniel
On Tuesday, July 03, 2012, Daniel Lezcano wrote:
On 07/02/2012 09:49 PM, Rafael J. Wysocki wrote:
On Monday, July 02, 2012, Daniel Lezcano wrote:
On 07/02/2012 11:09 AM, Linus Walleij wrote:
On Mon, Jun 25, 2012 at 3:27 PM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
we discussed last week putting in place a tree grouping the cpuidle modifications [1]. Is it possible to add the tree?
git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
Thanks for doing this.
Since MAINTAINERS is lacking a listed maintainer for cpuidle, are you also going to add yourself as maintainer and list this tree in that file, or is this a one-time exercise?
I will be glad to do that if Len and Rafael agree on that.
Len Brown has been a cpuidle maintainer for some time now. Moreover, he's been taking patches, but Linus refused to pull his entire tree during the last merge window (as you probably know). I honestly don't think this is a good enough reason for replacing him as a cpuidle maintainer by force.
So, you should ask Len whether or not he's willing to pass the cpuidle maintenance to someone else.
No, no. You are misunderstanding what I am proposing. I don't want to replace Len; I just want to act as a "proxy". I understand a maintainer can be busy and may not have enough time to take care of the subsystem he maintains for a while. Trust me, I fully understand that :)
As there are a lot of cpuidle modifications, I am proposing to take the patches once they are acked, creating a consolidated tree, providing better integration for cpuidle and wider testing, preventing conflicts, and facilitating Len's work if he agrees to pull from this tree.
If it makes sense to add myself to the MAINTAINERS file as a co-maintainer (understand: also send the patches to me, so I can take care of them if Len does not respond), I am OK with that.
It is just about helping :)
Cool. :-)
So do you have a branch in the cpuidle-next.git tree that isn't going to be rebased?
I know that Len hasn't been responsive recently, but I also know that he _does_ respond to inquiries sent directly to him.
Do you mean to his Intel address?
Yes, CCing Len's Intel address won't hurt, I think.
Thanks, Rafael
On 07/03/2012 10:59 AM, Rafael J. Wysocki wrote:
On Tuesday, July 03, 2012, Daniel Lezcano wrote:
On 07/02/2012 09:49 PM, Rafael J. Wysocki wrote:
On Monday, July 02, 2012, Daniel Lezcano wrote:
On 07/02/2012 11:09 AM, Linus Walleij wrote:
On Mon, Jun 25, 2012 at 3:27 PM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
we discussed last week putting in place a tree grouping the cpuidle modifications [1]. Is it possible to add the tree?
git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
Thanks for doing this.
Since MAINTAINERS is lacking a listed maintainer for cpuidle, are you also going to add yourself as maintainer and list this tree in that file, or is this a one-time exercise?
I will be glad to do that if Len and Rafael agree on that.
Len Brown has been a cpuidle maintainer for some time now. Moreover, he's been taking patches, but Linus refused to pull his entire tree during the last merge window (as you probably know). I honestly don't think this is a good enough reason for replacing him as a cpuidle maintainer by force.
So, you should ask Len whether or not he's willing to pass the cpuidle maintenance to someone else.
No, no. You are misunderstanding what I am proposing. I don't want to replace Len; I just want to act as a "proxy". I understand a maintainer can be busy and may not have enough time to take care of the subsystem he maintains for a while. Trust me, I fully understand that :)
As there are a lot of cpuidle modifications, I am proposing to take the patches once they are acked, creating a consolidated tree, providing better integration for cpuidle and wider testing, preventing conflicts, and facilitating Len's work if he agrees to pull from this tree.
If it makes sense to add myself to the MAINTAINERS file as a co-maintainer (understand: also send the patches to me, so I can take care of them if Len does not respond), I am OK with that.
It is just about helping :)
Cool. :-)
So do you have a branch in the cpuidle-next.git tree that isn't going to be rebased?
No. I am following Linus's tree and adding the patches on top of it.
I know that Len hasn't been responsive recently, but I also know that he _does_ respond to inquiries sent directly to him.
Do you mean to his Intel address?
Yes, CCing Len's Intel address won't hurt, I think.
Ok, I will ping him and give him the pointers to the discussion we had.
Thanks -- Daniel
Hi Daniel,
On Tue, 03 Jul 2012 14:56:58 +0200 Daniel Lezcano daniel.lezcano@linaro.org wrote:
So do you have a branch in the cpuidle-next.git tree that isn't going to be rebased?
No. I am following Linus's tree and adding the patches on top of it.
Please don't rebase your tree more than necessary - it just makes things hard for anyone using your tree as a base for further development and throws away any testing you may have done.
On 07/03/2012 03:19 PM, Stephen Rothwell wrote:
Hi Daniel,
On Tue, 03 Jul 2012 14:56:58 +0200 Daniel Lezcano daniel.lezcano@linaro.org wrote:
So do you have a branch in the cpuidle-next.git tree that isn't going to be rebased?
No. I am following Linus's tree and adding the patches on top of it.
Please don't rebase your tree more than necessary - it just makes things hard for anyone using your tree as a base for further development and throws away any testing you may have done.
Ok, let me sync with Len and Rafael about the best way to do that.
Thanks -- Daniel
On Tuesday, July 03, 2012, Daniel Lezcano wrote:
On 07/03/2012 03:19 PM, Stephen Rothwell wrote:
Hi Daniel,
On Tue, 03 Jul 2012 14:56:58 +0200 Daniel Lezcano daniel.lezcano@linaro.org wrote:
So do you have a branch in the cpuidle-next.git tree that isn't going to be rebased?
No. I am following Linus's tree and adding the patches on top of it.
Please don't rebase your tree more than necessary - it just makes things hard for anyone using your tree as a base for further development and throws away any testing you may have done.
Ok, let me sync with Len and Rafael about the best way to do that.
Please create a branch in your tree for me to pull from and let me know which one it is. Please note that this branch must not be rebased after I've pulled from it and it's going to be included into my linux-next branch automatically.
I'll include it into my v3.6 push, because I have a couple of cpuidle patches queued up already. We'll need to discuss the future of it after 3.6, though.
Thanks, Rafael
On 07/03/2012 06:54 PM, Rafael J. Wysocki wrote:
On Tuesday, July 03, 2012, Daniel Lezcano wrote:
On 07/03/2012 03:19 PM, Stephen Rothwell wrote:
Hi Daniel,
On Tue, 03 Jul 2012 14:56:58 +0200 Daniel Lezcano daniel.lezcano@linaro.org wrote:
So do you have a branch in the cpuidle-next.git tree that isn't going to be rebased?
No. I am following Linus's tree and adding the patches on top of it.
Please don't rebase your tree more than necessary - it just makes things hard for anyone using your tree as a base for further development and throws away any testing you may have done.
Ok, let me sync with Len and Rafael about the best way to do that.
Please create a branch in your tree for me to pull from and let me know which one it is. Please note that this branch must not be rebased after I've pulled from it and it's going to be included into my linux-next branch automatically.
Ok that sounds good.
Let me put the branch in place and rework my patches, because they conflict with the 'disable' flag having moved to the per-cpu structure. In the meantime, I will send you the other patches, which do not conflict.
I'll include it into my v3.6 push, because I have a couple of cpuidle patches queued up already. We'll need to discuss the future of it after 3.6, though.
Ok, cool.
Thanks -- Daniel
On Tue, Jul 3, 2012 at 12:14 AM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
If it makes sense to add myself to the MAINTAINERS file as a co-maintainer (understand: also send the patches to me, so I can take care of them if Len does not respond), I am OK with that.
What about a patch adding both you and Len as MAINTAINERS? Right now there is no one listed, just some informal consensus, and no one really knows whom to send patches to. Let's formalize it.
Yours, Linus Walleij
On Tuesday, July 03, 2012, Linus Walleij wrote:
On Tue, Jul 3, 2012 at 12:14 AM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
If it makes sense to add myself to the MAINTAINERS file as a co-maintainer (understand: also send the patches to me, so I can take care of them if Len does not respond), I am OK with that.
What about a patch adding both you and Len as MAINTAINERS? Right now there is no one listed, just some informal consensus, and no one really knows whom to send patches to. Let's formalize it.
Send them to me for now. We'll settle the issue when Len is back.
Thanks, Rafael
Hi,
On Monday, June 25, 2012, Daniel Lezcano wrote:
Hi Stephen,
we discussed last week putting in place a tree grouping the cpuidle modifications [1]. Is it possible to add the tree?
git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
For the moment, it contains Colin Cross's coupled cpuidle states.
Do you have a stable branch in that tree, i.e. such that it is guaranteed not to be rebased?
Rafael
On Mon, Jun 18, 2012 at 1:40 AM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
I propose to host a cpuidle-next tree collecting all these modifications, which people can send patches against, preventing last-minute conflicts; perhaps Lenb will agree to pull from this tree. In the meantime, the tree will be part of linux-next, so the patches will be more widely tested and can be fixed earlier.
My coupled cpuidle patches were acked and were temporarily in Len's next/Linus pull branch, but were dropped when the first pull request to Linus was rejected. I asked Len either to put the coupled cpuidle patches into his next branch, or to let me host them so people could base SoC branches on them and let Len pull them later, but got no response. If you do start a cpuidle for-next branch, can you pull my coupled-cpuidle branch:
The following changes since commit 76e10d158efb6d4516018846f60c2ab5501900bc:
Linux 3.4 (2012-05-20 15:29:13 -0700)
are available in the git repository at:
  https://android.googlesource.com/kernel/common.git coupled-cpuidle
Colin Cross (4):
      cpuidle: refactor out cpuidle_enter_state
      cpuidle: fix error handling in __cpuidle_register_device
      cpuidle: add support for states that affect multiple cpus
      cpuidle: coupled: add parallel barrier function
 drivers/cpuidle/Kconfig   |   3 +
 drivers/cpuidle/Makefile  |   1 +
 drivers/cpuidle/coupled.c | 715 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/cpuidle/cpuidle.c |  68 ++++-
 drivers/cpuidle/cpuidle.h |  32 ++
 include/linux/cpuidle.h   |  11 +
 6 files changed, 813 insertions(+), 17 deletions(-)
 create mode 100644 drivers/cpuidle/coupled.c
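For context, a coupled state is only safe when all cpus of a cluster enter it together. A SoC driver would opt in roughly as below; the CPUIDLE_FLAG_COUPLED flag and the coupled_cpus field come from the series above, while the driver itself is invented:

#include <linux/cpuidle.h>
#include <linux/cpumask.h>

static int soc_enter_coupled(struct cpuidle_device *dev,
                             struct cpuidle_driver *drv, int index)
{
        /* every coupled cpu arrives here before the cluster powers down */
        return index;
}

static struct cpuidle_driver soc_idle_driver = {
        .name        = "soc_idle",      /* invented driver */
        .state_count = 2,
        .states      = {
                [0] = { .name = "WFI", .exit_latency = 1 },
                        /* [0].enter omitted in this sketch */
                [1] = {
                        .name         = "C2",
                        .exit_latency = 500,
                        .flags        = CPUIDLE_FLAG_COUPLED,
                        .enter        = soc_enter_coupled,
                },
        },
};

static int soc_idle_register_cpu(struct cpuidle_device *dev)
{
        /* declare which cpus must enter the coupled state together */
        cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
        return cpuidle_register_device(dev);
}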
On 06/18/2012 08:15 PM, Colin Cross wrote:
On Mon, Jun 18, 2012 at 1:40 AM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
I propose to host a cpuidle-next tree collecting all these modifications, which people can send patches against, preventing last-minute conflicts; perhaps Lenb will agree to pull from this tree. In the meantime, the tree will be part of linux-next, so the patches will be more widely tested and can be fixed earlier.
My coupled cpuidle patches were acked and were temporarily in Len's next/Linus pull branch, but were dropped when the first pull request to Linus was rejected. I asked Len either to put the coupled cpuidle patches into his next branch, or to let me host them so people could base SoC branches on them and let Len pull them later, but got no response. If you do start a cpuidle for-next branch, can you pull my coupled-cpuidle branch:
No problem.
Thanks -- Daniel
The following changes since commit 76e10d158efb6d4516018846f60c2ab5501900bc:
Linux 3.4 (2012-05-20 15:29:13 -0700)
are available in the git repository at:
  https://android.googlesource.com/kernel/common.git coupled-cpuidle
Colin Cross (4):
      cpuidle: refactor out cpuidle_enter_state
      cpuidle: fix error handling in __cpuidle_register_device
      cpuidle: add support for states that affect multiple cpus
      cpuidle: coupled: add parallel barrier function
 drivers/cpuidle/Kconfig   |   3 +
 drivers/cpuidle/Makefile  |   1 +
 drivers/cpuidle/coupled.c | 715 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/cpuidle/cpuidle.c |  68 ++++-
 drivers/cpuidle/cpuidle.h |  32 ++
 include/linux/cpuidle.h   |  11 +
 6 files changed, 813 insertions(+), 17 deletions(-)
 create mode 100644 drivers/cpuidle/coupled.c
On 06/18/2012 08:15 PM, Colin Cross wrote:
On Mon, Jun 18, 2012 at 1:40 AM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
I propose to host a cpuidle-next tree collecting all these modifications, which people can send patches against, preventing last-minute conflicts; perhaps Lenb will agree to pull from this tree. In the meantime, the tree will be part of linux-next, so the patches will be more widely tested and can be fixed earlier.
My coupled cpuidle patches were acked and were temporarily in Len's next/Linus pull branch, but were dropped when the first pull request to Linus was rejected. I asked Len either to put the coupled cpuidle patches into his next branch, or to let me host them so people could base SoC branches on them and let Len pull them later, but got no response. If you do start a cpuidle for-next branch, can you pull my coupled-cpuidle branch:
The following changes since commit 76e10d158efb6d4516018846f60c2ab5501900bc:
Linux 3.4 (2012-05-20 15:29:13 -0700)
are available in the git repository at:
  https://android.googlesource.com/kernel/common.git coupled-cpuidle
Colin Cross (4):
      cpuidle: refactor out cpuidle_enter_state
      cpuidle: fix error handling in __cpuidle_register_device
      cpuidle: add support for states that affect multiple cpus
      cpuidle: coupled: add parallel barrier function
 drivers/cpuidle/Kconfig   |   3 +
 drivers/cpuidle/Makefile  |   1 +
 drivers/cpuidle/coupled.c | 715 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/cpuidle/cpuidle.c |  68 ++++-
 drivers/cpuidle/cpuidle.h |  32 ++
 include/linux/cpuidle.h   |  11 +
 6 files changed, 813 insertions(+), 17 deletions(-)
 create mode 100644 drivers/cpuidle/coupled.c
Done.
http://git.linaro.org/gitweb?p=people/dlezcano/cpuidle-next.git;a=shortlog...
Daniel Lezcano daniel.lezcano@linaro.org writes:
On 06/18/2012 08:15 PM, Colin Cross wrote:
On Mon, Jun 18, 2012 at 1:40 AM, Daniel Lezcano daniel.lezcano@linaro.org wrote:
I propose to host a cpuidle-next tree collecting all these modifications, which people can send patches against, preventing last-minute conflicts; perhaps Lenb will agree to pull from this tree. In the meantime, the tree will be part of linux-next, so the patches will be more widely tested and can be fixed earlier.
My coupled cpuidle patches were acked and were temporarily in Len's next/Linus pull branch, but were dropped when the first pull request to Linus was rejected. I asked Len either to put the coupled cpuidle patches into his next branch, or to let me host them so people could base SoC branches on them and let Len pull them later, but got no response. If you do start a cpuidle for-next branch, can you pull my coupled-cpuidle branch:
The following changes since commit 76e10d158efb6d4516018846f60c2ab5501900bc:
Linux 3.4 (2012-05-20 15:29:13 -0700)
are available in the git repository at:
  https://android.googlesource.com/kernel/common.git coupled-cpuidle
Colin Cross (4):
      cpuidle: refactor out cpuidle_enter_state
      cpuidle: fix error handling in __cpuidle_register_device
      cpuidle: add support for states that affect multiple cpus
      cpuidle: coupled: add parallel barrier function
 drivers/cpuidle/Kconfig   |   3 +
 drivers/cpuidle/Makefile  |   1 +
 drivers/cpuidle/coupled.c | 715 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/cpuidle/cpuidle.c |  68 ++++-
 drivers/cpuidle/cpuidle.h |  32 ++
 include/linux/cpuidle.h   |  11 +
 6 files changed, 813 insertions(+), 17 deletions(-)
 create mode 100644 drivers/cpuidle/coupled.c
Done.
http://git.linaro.org/gitweb?p=people/dlezcano/cpuidle-next.git;a=shortlog...
Great!
Daniel, thanks for tracking this. Are you planning to submit a pull request to Rafael so we can finally get this into linux-next and merged for v3.6?
Looks like there will be a slight problem to sort out, though. Len's 'next' branch [1] is already included in linux-next and has some version of the coupled CPUidle code already merged.
I hope we can sort this out in time for v3.6, because this series has been well reviewed, well tested, and ready to merge since before the v3.5 merge window.
Kevin
[1] git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux next