As everyone should know by now, we want to integrate the cpuidle governor with the scheduler for more efficient idling of CPUs. To help the transition, this small patch series moves the existing interaction with cpuidle from architecture code to generic core code. There should be no functional change yet.
The ARM, PPC, SH and x86 architectures are affected. Small cleanups to ARM and ARM64 are also included. I don't know yet the best path for those patches to get into mainline, but it is probably best if they stay together. So ACKs from architecture maintainers would be greatly appreciated.
 arch/arm/kernel/process.c                       | 21 +++---------
 arch/arm/kernel/setup.c                         |  7 ++++
 arch/arm64/kernel/process.c                     |  5 ---
 arch/arm64/kernel/setup.c                       |  7 ++++
 arch/powerpc/platforms/pseries/processor_idle.c |  5 +++
 arch/powerpc/platforms/pseries/setup.c          | 34 ++++++++-----------
 arch/sh/kernel/idle.c                           |  4 +--
 arch/x86/kernel/process.c                       |  5 +--
 include/linux/cpu.h                             |  1 -
 kernel/Makefile                                 |  1 -
 kernel/cpu/Makefile                             |  1 -
 kernel/sched/Makefile                           |  2 +-
 kernel/{cpu => sched}/idle.c                    |  6 ++--
 13 files changed, 44 insertions(+), 55 deletions(-)
Nicolas
ARM and ARM64 are the only two architectures implementing arch_cpu_idle_prepare() simply to call local_fiq_enable().
We have secondary_start_kernel() already calling local_fiq_enable() and this is done a second time in arch_cpu_idle_prepare() in that case. And enabling FIQs has nothing to do with idling the CPU to start with.
So let's introduce init_fiq_boot_cpu() to take care of FIQs on the boot CPU and remove arch_cpu_idle_prepare(). This is now done a bit earlier at late_initcall time but this shouldn't make a difference in practice i.e. when FIQs are actually used.
Signed-off-by: Nicolas Pitre nico@linaro.org
---
 arch/arm/kernel/process.c | 5 -----
 arch/arm/kernel/setup.c   | 7 +++++++
 2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index 92f7b15dd2..725b8c95e0 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -142,11 +142,6 @@ static void default_idle(void)
 	local_irq_enable();
 }
 
-void arch_cpu_idle_prepare(void)
-{
-	local_fiq_enable();
-}
-
 void arch_cpu_idle_enter(void)
 {
 	ledtrig_cpu(CPU_LED_IDLE_START);
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 987a7f5bce..d027b1a6fe 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -789,6 +789,13 @@ static int __init init_machine_late(void)
 }
 late_initcall(init_machine_late);
 
+static int __init init_fiq_boot_cpu(void)
+{
+	local_fiq_enable();
+	return 0;
+}
+late_initcall(init_fiq_boot_cpu);
+
 #ifdef CONFIG_KEXEC
 static inline unsigned long long get_total_mem(void)
 {
On 01/27/2014 07:08 AM, Nicolas Pitre wrote:
ARM and ARM64 are the only two architectures implementing arch_cpu_idle_prepare() simply to call local_fiq_enable().
We have secondary_start_kernel() already calling local_fiq_enable() and this is done a second time in arch_cpu_idle_prepare() in that case. And enabling FIQs has nothing to do with idling the CPU to start with.
So let's introduce init_fiq_boot_cpu() to take care of FIQs on the boot CPU and remove arch_cpu_idle_prepare(). This is now done a bit earlier at late_initcall time but this shouldn't make a difference in practice i.e. when FIQs are actually used.
Signed-off-by: Nicolas Pitre nico@linaro.org
Reviewed-by: Daniel Lezcano daniel.lezcano@linaro.org
On Mon, Jan 27, 2014 at 09:22:55AM +0100, Daniel Lezcano wrote:
Reviewed-by: Daniel Lezcano daniel.lezcano@linaro.org
What kind of review did you do when giving that attribution?
On 01/27/2014 05:07 PM, Russell King - ARM Linux wrote:
What kind of review did you do when giving that attribution?
I did the review to the best of my knowledge and with good will.
I read your comment on this patch and I learnt one more thing.
Today, I am smarter than yesterday and dumber than tomorrow :)
-- Daniel
On Mon, Jan 27, 2014 at 06:12:53PM +0100, Daniel Lezcano wrote:
I did the review to the best of my knowledge and with good will.
I read your comment on this patch and I learnt one more thing.
Today, I am smarter than yesterday and dumber than tomorrow :)
Just be aware that putting a comment along with the reviewed-by tag is always a good idea. I know that's a little more work, but this has been raised a number of times by various people over the years.
A reviewed-by tag on its own doesn't mean much, as it could mean that you've just glanced over the code and decided "yea, it looks okay", or it could mean that you've spent all day verifying that the code change is indeed correct.
Consequently, some will ignore emails which just contain a reviewed-by attribution.
On 01/27/2014 06:21 PM, Russell King - ARM Linux wrote:
Just be aware that putting a comment along with the reviewed-by tag is always a good idea. I know that's a little more work, but this has been raised a number of times by various people over the years.
A reviewed-by tag on its own doesn't mean much, as it could mean that you've just glanced over the code and decided "yea, it looks okay", or it could mean that you've spent all day verifying that the code change is indeed correct.
Consequently, some will ignore emails which just contain a reviewed-by attribution.
Thanks for the clarification. I will take care of giving a comment next time.
-- Daniel
On Mon, Jan 27, 2014 at 05:21:10PM +0000, Russell King - ARM Linux wrote:
A reviewed-by tag on its own doesn't mean much, as it could mean that you've just glanced over the code and decided "yea, it looks okay", or it could mean that you've spent all day verifying that the code change is indeed correct.
One should use Acked-by for the 'yea, it looks okay' thing.
On Mon, Jan 27, 2014 at 01:08:16AM -0500, Nicolas Pitre wrote:
 arch/arm/kernel/process.c | 5 -----
 arch/arm/kernel/setup.c   | 7 +++++++
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index 92f7b15dd2..725b8c95e0 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -142,11 +142,6 @@ static void default_idle(void)
 	local_irq_enable();
 }
 
-void arch_cpu_idle_prepare(void)
-{
-	local_fiq_enable();
-}
-
 void arch_cpu_idle_enter(void)
 {
 	ledtrig_cpu(CPU_LED_IDLE_START);
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 987a7f5bce..d027b1a6fe 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -789,6 +789,13 @@ static int __init init_machine_late(void)
 }
 late_initcall(init_machine_late);
 
+static int __init init_fiq_boot_cpu(void)
+{
+	local_fiq_enable();
+	return 0;
+}
+late_initcall(init_fiq_boot_cpu);
+
 #ifdef CONFIG_KEXEC
 static inline unsigned long long get_total_mem(void)
 {
arch_cpu_idle_prepare() gets called from the swapper thread, and changes the swapper thread's CPSR. init_fiq_boot_cpu() gets called from PID1, the init thread, and changes the init thread's CPSR, which will already have FIQs enabled by way of how kernel threads are created.
Hence, the above code fragment has no effect whatsoever, and those platforms using FIQs will not have FIQs delivered while idle (because the swapper will have FIQs masked at the CPU.)
NAK.
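A user-space sketch may make the failure mode concrete: the F (FIQ mask) bit lives in each thread's saved CPSR, so `local_fiq_enable()` only affects whichever task executes it, and a late_initcall executes in the init task rather than in the swapper that later idles. The `struct task` and helpers below are an illustrative model, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the F (FIQ mask) bit: it lives in each thread's saved CPSR,
 * so clearing it only affects the thread that executes the instruction.
 * struct task and these helpers are illustrative, not kernel API. */
struct task {
	bool fiq_masked;	/* models the F bit in this task's CPSR */
};

/* Clears F in the context of the task that runs it, and nowhere else. */
static void model_local_fiq_enable(struct task *current_task)
{
	current_task->fiq_masked = false;
}

/* The late_initcall runs in the init task (PID 1), not in the swapper. */
static int model_init_fiq_boot_cpu(struct task *current_task)
{
	model_local_fiq_enable(current_task);
	return 0;
}
```

Running the initcall against the init task leaves the swapper's F bit untouched, which is exactly Russell's objection.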
On Mon, 27 Jan 2014, Russell King - ARM Linux wrote:
arch_cpu_idle_prepare() gets called from the swapper thread, and changes the swapper thread's CPSR. init_fiq_boot_cpu() gets called from PID1, the init thread, and changes the init thread's CPSR, which will already have FIQs enabled by way of how kernel threads are created.
Hence, the above code fragment has no effect whatsoever, and those platforms using FIQs will not have FIQs delivered while idle (because the swapper will have FIQs masked at the CPU.)
You're right.
What about moving local_fiq_enable() to trap_init() then?
Nicolas
On Mon, Jan 27, 2014 at 10:45:59AM -0500, Nicolas Pitre wrote:
What about moving local_fiq_enable() to trap_init() then?
That's potentially unsafe, as we haven't touched any of the IRQ controllers at that point - we can't guarantee what state they'd be in. Given that the default FIQ is to just return, a FIQ being raised at that point will end up with an infinite loop re-entering the FIQ handler.
On Mon, 27 Jan 2014, Russell King - ARM Linux wrote:
That's potentially unsafe, as we haven't touched any of the IRQ controllers at that point - we can't guarantee what state they'd be in. Given that the default FIQ is to just return, a FIQ being raised at that point will end up with an infinite loop re-entering the FIQ handler.
Okay... I don't see any obvious way to work around that besides adding another explicit hook, which arch_cpu_idle_prepare() incidentally already is. So, unless you have a better idea, I'll drop this patch and leave ARM as the only user of arch_cpu_idle_prepare().
Nicolas
ARM and ARM64 are the only two architectures implementing arch_cpu_idle_prepare() simply to call local_fiq_enable().
We have secondary_start_kernel() already calling local_fiq_enable() and this is done a second time in arch_cpu_idle_prepare() in that case. And enabling FIQs has nothing to do with idling the CPU to start with.
So let's introduce init_fiq_boot_cpu() to take care of FIQs on the boot CPU and remove arch_cpu_idle_prepare(). This is now done a bit earlier at late_initcall time but this shouldn't make a difference in practice given that FIQs are not currently used on ARM64.
Signed-off-by: Nicolas Pitre nico@linaro.org
---
 arch/arm64/kernel/process.c | 5 -----
 arch/arm64/kernel/setup.c   | 7 +++++++
 2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index de17c89985..f6c733da67 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -84,11 +84,6 @@ EXPORT_SYMBOL_GPL(pm_power_off);
 void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
 EXPORT_SYMBOL_GPL(arm_pm_restart);
 
-void arch_cpu_idle_prepare(void)
-{
-	local_fiq_enable();
-}
-
 /*
  * This is our default idle handler.
  */
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index bd9bbd0e44..259557983a 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -255,6 +255,13 @@ static int __init arm64_device_init(void)
 }
 arch_initcall(arm64_device_init);
 
+static int __init init_fiq_boot_cpu(void)
+{
+	local_fiq_enable();
+	return 0;
+}
+late_initcall(init_fiq_boot_cpu);
+
 static DEFINE_PER_CPU(struct cpu, cpu_data);
 
 static int __init topology_init(void)
On 01/27/2014 07:08 AM, Nicolas Pitre wrote:
ARM and ARM64 are the only two architectures implementing arch_cpu_idle_prepare() simply to call local_fiq_enable().
We have secondary_start_kernel() already calling local_fiq_enable() and this is done a second time in arch_cpu_idle_prepare() in that case. And enabling FIQs has nothing to do with idling the CPU to start with.
So let's introduce init_fiq_boot_cpu() to take care of FIQs on the boot CPU and remove arch_cpu_idle_prepare(). This is now done a bit earlier at late_initcall time but this shouldn't make a difference in practice given that FIQs are not currently used on ARM64.
Signed-off-by: Nicolas Pitre nico@linaro.org
Reviewed-by: Daniel Lezcano daniel.lezcano@linaro.org
On Mon, Jan 27, 2014 at 06:08:17AM +0000, Nicolas Pitre wrote:
For arm64, we could simply remove any reference to FIQs. I'm not aware of anyone using them.
On Mon, 27 Jan 2014, Catalin Marinas wrote:
For arm64, we could simply remove any reference to FIQs. I'm not aware of anyone using them.
OK. What if I simply remove arch_cpu_idle_prepare() and let you remove the rest?
IMHO I'd simply remove local_fiq_{enable/disable}() from arm64/kernel/smp.c and leave the infrastructure in place in case someone needs it eventually. In which case I could include that into my patch as well.
Nicolas
On Mon, Jan 27, 2014 at 03:51:02PM +0000, Nicolas Pitre wrote:
OK. What if I simply remove arch_cpu_idle_prepare() and let you remove the rest?
IMHO I'd simply remove local_fiq_{enable/disable}() from arm64/kernel/smp.c and leave the infrastructure in place in case someone needs it eventually. In which case I could include that into my patch as well.
Sounds good. We can keep the local_fiq_*() functions but remove about 4 calling sites (process.c and smp.c) until needed.
Thanks.
On Mon, 27 Jan 2014, Catalin Marinas wrote:
Sounds good. We can keep the local_fiq_*() functions but remove about 4 calling sites (process.c and smp.c) until needed.
OK here it is. Please feel free to merge this into your tree directly as it no longer has any dependency issues with other patches.
----- >8 -----
From: Nicolas Pitre nicolas.pitre@linaro.org
Subject: [PATCH] ARM64: FIQs are unused
So any FIQ handling is superfluous at the moment. The functions to disable/enable FIQs are kept around in case someone ever needs them in the future, but existing call sites, including arch_cpu_idle_prepare(), may go for now.
Signed-off-by: Nicolas Pitre nico@linaro.org
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index de17c89985..adeabe8c6b 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -84,11 +84,6 @@ EXPORT_SYMBOL_GPL(pm_power_off);
 void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
 EXPORT_SYMBOL_GPL(arm_pm_restart);
 
-void arch_cpu_idle_prepare(void)
-{
-	local_fiq_enable();
-}
-
 /*
  * This is our default idle handler.
  */
@@ -135,7 +130,6 @@ void machine_restart(char *cmd)
 
 	/* Disable interrupts first */
 	local_irq_disable();
-	local_fiq_disable();
 
 	/* Now call the architecture specific reboot code. */
 	if (arm_pm_restart)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index a0c2ca602c..94576c02f5 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -159,7 +159,6 @@ asmlinkage void secondary_start_kernel(void)
 	complete(&cpu_running);
 
 	local_irq_enable();
-	local_fiq_enable();
 	local_async_enable();
 
 	/*
@@ -491,7 +490,6 @@ static void ipi_cpu_stop(unsigned int cpu)
 
 	set_cpu_online(cpu, false);
 
-	local_fiq_disable();
 	local_irq_disable();
 
 	while (1)
On Wed, Jan 29, 2014 at 06:00:45PM +0000, Nicolas Pitre wrote:
Thanks. Applied.
... so we can get rid of it entirely.
Signed-off-by: Nicolas Pitre nico@linaro.org
---
 include/linux/cpu.h | 1 -
 kernel/cpu/idle.c   | 2 --
 2 files changed, 3 deletions(-)
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 03e235ad1b..218fab7521 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -221,7 +221,6 @@ void cpu_idle(void);
 void cpu_idle_poll_ctrl(bool enable);
 
 void arch_cpu_idle(void);
-void arch_cpu_idle_prepare(void);
 void arch_cpu_idle_enter(void);
 void arch_cpu_idle_exit(void);
 void arch_cpu_idle_dead(void);
diff --git a/kernel/cpu/idle.c b/kernel/cpu/idle.c
index 988573a9a3..4e327e211b 100644
--- a/kernel/cpu/idle.c
+++ b/kernel/cpu/idle.c
@@ -52,7 +52,6 @@ static inline int cpu_idle_poll(void)
 }
 
 /* Weak implementations for optional arch specific functions */
-void __weak arch_cpu_idle_prepare(void) { }
 void __weak arch_cpu_idle_enter(void) { }
 void __weak arch_cpu_idle_exit(void) { }
 void __weak arch_cpu_idle_dead(void) { }
@@ -136,6 +135,5 @@ void cpu_startup_entry(enum cpuhp_state state)
 	boot_init_stack_canary();
 #endif
 	__current_set_polling();
-	arch_cpu_idle_prepare();
 	cpu_idle_loop();
 }
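The hook removed above relied on weak linkage: the core provides an empty __weak definition so that an architecture may optionally override it with a strong one. A minimal user-space illustration of that pattern follows; the names are made up for the demo, and unlike the kernel's empty stub this weak default counts its calls so the effect is observable:

```c
#include <assert.h>

/* The core-side default: declared weak, so an arch could supply a strong
 * definition that the linker prefers.  With no override present in this
 * single-file demo, the weak body below is what gets linked and called. */
static int prepare_calls;

__attribute__((weak)) void demo_arch_cpu_idle_prepare(void)
{
	prepare_calls++;	/* the real kernel stub is simply empty */
}

/* Stand-in for cpu_startup_entry(): calls the hook unconditionally,
 * without knowing whether an arch override exists. */
void demo_cpu_startup_entry(void)
{
	demo_arch_cpu_idle_prepare();
}
```

This is why deleting the last caller lets the weak stub go too: nothing in the core depends on whether an override exists.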
On 01/27/2014 07:08 AM, Nicolas Pitre wrote:
... so we can get rid of it entirely.
Signed-off-by: Nicolas Pitre nico@linaro.org
Acked-by: Daniel Lezcano daniel.lezcano@linaro.org
In order to integrate cpuidle with the scheduler, the core code must be more closely involved with what cpuidle is doing, rather than delegating that interaction to arch code.
Architectures implementing arch_cpu_idle() should simply enter a cheap idle mode in the absence of a proper cpuidle driver.
Signed-off-by: Nicolas Pitre nico@linaro.org
---
 kernel/cpu/idle.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/cpu/idle.c b/kernel/cpu/idle.c
index 4e327e211b..a6f40ad9f8 100644
--- a/kernel/cpu/idle.c
+++ b/kernel/cpu/idle.c
@@ -3,6 +3,7 @@
  */
 #include <linux/sched.h>
 #include <linux/cpu.h>
+#include <linux/cpuidle.h>
 #include <linux/tick.h>
 #include <linux/mm.h>
 #include <linux/stackprotector.h>
@@ -94,7 +95,8 @@ static void cpu_idle_loop(void)
 		if (!current_clr_polling_and_test()) {
 			stop_critical_timings();
 			rcu_idle_enter();
-			arch_cpu_idle();
+			if (cpuidle_idle_call())
+				arch_cpu_idle();
 			WARN_ON_ONCE(irqs_disabled());
 			rcu_idle_exit();
 			start_critical_timings();
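The fallback this patch adds is small enough to model in user space: try cpuidle first, and only enter the arch default when cpuidle_idle_call() reports failure (e.g. no driver registered). The stand-ins below mirror that return convention but are not the kernel implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins mirroring the patch's convention: the cpuidle entry returns
 * nonzero on failure (e.g. no registered driver), in which case the core
 * loop falls back to the arch default idle. */
static int (*driver_enter)(void);	/* NULL models "no cpuidle driver" */
static int arch_idle_entries;

static int model_cpuidle_idle_call(void)
{
	if (!driver_enter)
		return -1;		/* failure: caller should fall back */
	return driver_enter();
}

static void model_arch_cpu_idle(void)
{
	arch_idle_entries++;		/* cheap default idle, e.g. WFI on ARM */
}

/* One pass of the core idle loop after this patch. */
static void model_idle_step(void)
{
	if (model_cpuidle_idle_call())
		model_arch_cpu_idle();
}

static int dummy_enter(void)
{
	return 0;			/* driver handled the idle entry */
}
```

With no driver the arch default runs; once a driver is registered the fallback is skipped entirely.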
On 01/27/2014 07:08 AM, Nicolas Pitre wrote:
In order to integrate cpuidle with the scheduler, we must bring the core code into closer proximity with what cpuidle is doing, and not delegate such interaction to arch code.
Architectures implementing arch_cpu_idle() should simply enter a cheap idle mode in the absence of a proper cpuidle driver.
Signed-off-by: Nicolas Pitre nico@linaro.org
This patch without the next ones will lead to an extra call to cpuidle_idle_call.
cpuidle_idle_call
    arch_cpu_idle
        cpuidle_idle_call
            x86_idle
But I guess it is acceptable as it is fixed by the next patches of the series.
Acked-by: Daniel Lezcano daniel.lezcano@linaro.org
kernel/cpu/idle.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/cpu/idle.c b/kernel/cpu/idle.c index 4e327e211b..a6f40ad9f8 100644 --- a/kernel/cpu/idle.c +++ b/kernel/cpu/idle.c
@@ -3,6 +3,7 @@
  */
 #include <linux/sched.h>
 #include <linux/cpu.h>
+#include <linux/cpuidle.h>
 #include <linux/tick.h>
 #include <linux/mm.h>
 #include <linux/stackprotector.h>
@@ -94,7 +95,8 @@ static void cpu_idle_loop(void)
 		if (!current_clr_polling_and_test()) {
 			stop_critical_timings();
 			rcu_idle_enter();
-			arch_cpu_idle();
+			if (cpuidle_idle_call())
+				arch_cpu_idle();
 			WARN_ON_ONCE(irqs_disabled());
 			rcu_idle_exit();
 			start_critical_timings();
The core idle loop now takes care of it.
Signed-off-by: Nicolas Pitre nico@linaro.org --- arch/arm/kernel/process.c | 16 +++++----------- 1 file changed, 5 insertions(+), 11 deletions(-)
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c index 725b8c95e0..34a59b7614 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c @@ -30,7 +30,6 @@ #include <linux/uaccess.h> #include <linux/random.h> #include <linux/hw_breakpoint.h> -#include <linux/cpuidle.h> #include <linux/leds.h> #include <linux/reboot.h>
@@ -133,7 +132,11 @@ EXPORT_SYMBOL_GPL(arm_pm_restart);
void (*arm_pm_idle)(void);
-static void default_idle(void)
+/*
+ * Called from the core idle loop.
+ */
+
+void arch_cpu_idle(void)
 {
 	if (arm_pm_idle)
 		arm_pm_idle();
@@ -163,15 +166,6 @@ void arch_cpu_idle_dead(void)
 #endif
 
 /*
- * Called from the core idle loop.
- */
-void arch_cpu_idle(void)
-{
-	if (cpuidle_idle_call())
-		default_idle();
-}
-
-/*
  * Called by kexec, immediately prior to machine_kexec().
  *
  * This must completely disable all secondary CPUs; simply causing those CPUs
On 01/27/2014 07:08 AM, Nicolas Pitre wrote:
The core idle loop now takes care of it.
Signed-off-by: Nicolas Pitre nico@linaro.org
Acked-by: Daniel Lezcano daniel.lezcano@linaro.org
arch/arm/kernel/process.c | 16 +++++----------- 1 file changed, 5 insertions(+), 11 deletions(-)
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c index 725b8c95e0..34a59b7614 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c @@ -30,7 +30,6 @@ #include <linux/uaccess.h> #include <linux/random.h> #include <linux/hw_breakpoint.h> -#include <linux/cpuidle.h> #include <linux/leds.h> #include <linux/reboot.h>
@@ -133,7 +132,11 @@ EXPORT_SYMBOL_GPL(arm_pm_restart);
void (*arm_pm_idle)(void);
-static void default_idle(void)
+/*
+ * Called from the core idle loop.
+ */
+
+void arch_cpu_idle(void)
 {
 	if (arm_pm_idle)
 		arm_pm_idle();
@@ -163,15 +166,6 @@ void arch_cpu_idle_dead(void)
 #endif
 
 /*
- * Called from the core idle loop.
- */
-void arch_cpu_idle(void)
-{
-	if (cpuidle_idle_call())
-		default_idle();
-}
-
-/*
  * Called by kexec, immediately prior to machine_kexec().
  *
  * This must completely disable all secondary CPUs; simply causing those CPUs
The core idle loop now takes care of it. However a few things need checking:
- Invocation of cpuidle_idle_call() in pseries_lpar_idle() happened through arch_cpu_idle() and was therefore always preceded by a call to ppc64_runlatch_off(). To preserve this property now that cpuidle_idle_call() is invoked directly from core code, a call to ppc64_runlatch_off() has been added to idle_loop_prolog() in platforms/pseries/processor_idle.c.
- Similarly, cpuidle_idle_call() was followed by ppc64_runlatch_on(), so a call to the latter has been added to idle_loop_epilog().
- And since arch_cpu_idle() always made sure to re-enable IRQs if they were disabled, this is now done in idle_loop_epilog() as well.
The above was done in order to keep the execution flow close to the original. I don't know if that was strictly necessary. Someone well acquainted with the platform details might find some room for possible optimizations.
Signed-off-by: Nicolas Pitre nico@linaro.org --- arch/powerpc/platforms/pseries/processor_idle.c | 5 ++++ arch/powerpc/platforms/pseries/setup.c | 34 ++++++++++--------------- 2 files changed, 19 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c index a166e38bd6..72ddfe3d2f 100644 --- a/arch/powerpc/platforms/pseries/processor_idle.c +++ b/arch/powerpc/platforms/pseries/processor_idle.c @@ -33,6 +33,7 @@ static struct cpuidle_state *cpuidle_state_table;
 static inline void idle_loop_prolog(unsigned long *in_purr)
 {
+	ppc64_runlatch_off();
 	*in_purr = mfspr(SPRN_PURR);
 	/*
 	 * Indicate to the HV that we are idle. Now would be
@@ -49,6 +50,10 @@ static inline void idle_loop_epilog(unsigned long in_purr)
 	wait_cycles += mfspr(SPRN_PURR) - in_purr;
 	get_lppaca()->wait_state_cycles = cpu_to_be64(wait_cycles);
 	get_lppaca()->idle = 0;
+
+	if (irqs_disabled())
+		local_irq_enable();
+	ppc64_runlatch_on();
 }
static int snooze_loop(struct cpuidle_device *dev, diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c index c1f1908587..7604c19d54 100644 --- a/arch/powerpc/platforms/pseries/setup.c +++ b/arch/powerpc/platforms/pseries/setup.c @@ -39,7 +39,6 @@ #include <linux/irq.h> #include <linux/seq_file.h> #include <linux/root_dev.h> -#include <linux/cpuidle.h> #include <linux/of.h> #include <linux/kexec.h>
@@ -356,29 +355,24 @@ early_initcall(alloc_dispatch_log_kmem_cache);
 static void pseries_lpar_idle(void)
 {
-	/* This would call on the cpuidle framework, and the back-end pseries
-	 * driver to go to idle states
+	/*
+	 * Default handler to go into low thread priority and possibly
+	 * low power mode by cedeing processor to hypervisor
 	 */
-	if (cpuidle_idle_call()) {
-		/* On error, execute default handler
-		 * to go into low thread priority and possibly
-		 * low power mode by cedeing processor to hypervisor
-		 */
 
-		/* Indicate to hypervisor that we are idle. */
-		get_lppaca()->idle = 1;
+	/* Indicate to hypervisor that we are idle. */
+	get_lppaca()->idle = 1;
 
-		/*
-		 * Yield the processor to the hypervisor. We return if
-		 * an external interrupt occurs (which are driven prior
-		 * to returning here) or if a prod occurs from another
-		 * processor. When returning here, external interrupts
-		 * are enabled.
-		 */
-		cede_processor();
+	/*
+	 * Yield the processor to the hypervisor. We return if
+	 * an external interrupt occurs (which are driven prior
+	 * to returning here) or if a prod occurs from another
+	 * processor. When returning here, external interrupts
+	 * are enabled.
+	 */
+	cede_processor();
 
-		get_lppaca()->idle = 0;
-	}
+	get_lppaca()->idle = 0;
 }
 
 /*
On 01/27/2014 07:08 AM, Nicolas Pitre wrote:
The core idle loop now takes care of it. However a few things need checking:
Invocation of cpuidle_idle_call() in pseries_lpar_idle() happened through arch_cpu_idle() and was therefore always preceded by a call to ppc64_runlatch_off(). To preserve this property now that cpuidle_idle_call() is invoked directly from core code, a call to ppc64_runlatch_off() has been added to idle_loop_prolog() in platforms/pseries/processor_idle.c.
Similarly, cpuidle_idle_call() was followed by ppc64_runlatch_on(), so a call to the latter has been added to idle_loop_epilog().
And since arch_cpu_idle() always made sure to re-enable IRQs if they were not enabled, this is now done in idle_loop_epilog() as well.
The above was done in order to keep the execution flow close to the original. I don't know if that was strictly necessary. Someone well acquainted with the platform details might find some room for possible optimizations.
Signed-off-by: Nicolas Pitre nico@linaro.org
Added Preeti U Murthy as recipient.
arch/powerpc/platforms/pseries/processor_idle.c | 5 ++++ arch/powerpc/platforms/pseries/setup.c | 34 ++++++++++--------------- 2 files changed, 19 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c index a166e38bd6..72ddfe3d2f 100644 --- a/arch/powerpc/platforms/pseries/processor_idle.c +++ b/arch/powerpc/platforms/pseries/processor_idle.c @@ -33,6 +33,7 @@ static struct cpuidle_state *cpuidle_state_table;
 static inline void idle_loop_prolog(unsigned long *in_purr)
 {
+	ppc64_runlatch_off();
 	*in_purr = mfspr(SPRN_PURR);
 	/*
 	 * Indicate to the HV that we are idle. Now would be
@@ -49,6 +50,10 @@ static inline void idle_loop_epilog(unsigned long in_purr)
 	wait_cycles += mfspr(SPRN_PURR) - in_purr;
 	get_lppaca()->wait_state_cycles = cpu_to_be64(wait_cycles);
 	get_lppaca()->idle = 0;
+
+	if (irqs_disabled())
+		local_irq_enable();
+	ppc64_runlatch_on();
 }
static int snooze_loop(struct cpuidle_device *dev,
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c index c1f1908587..7604c19d54 100644 --- a/arch/powerpc/platforms/pseries/setup.c +++ b/arch/powerpc/platforms/pseries/setup.c @@ -39,7 +39,6 @@ #include <linux/irq.h> #include <linux/seq_file.h> #include <linux/root_dev.h> -#include <linux/cpuidle.h> #include <linux/of.h> #include <linux/kexec.h>
@@ -356,29 +355,24 @@ early_initcall(alloc_dispatch_log_kmem_cache);
 static void pseries_lpar_idle(void)
 {
-	/* This would call on the cpuidle framework, and the back-end pseries
-	 * driver to go to idle states
+	/*
+	 * Default handler to go into low thread priority and possibly
+	 * low power mode by cedeing processor to hypervisor
 	 */
-	if (cpuidle_idle_call()) {
-		/* On error, execute default handler
-		 * to go into low thread priority and possibly
-		 * low power mode by cedeing processor to hypervisor
-		 */
 
-		/* Indicate to hypervisor that we are idle. */
-		get_lppaca()->idle = 1;
+	/* Indicate to hypervisor that we are idle. */
+	get_lppaca()->idle = 1;
 
-		/*
-		 * Yield the processor to the hypervisor. We return if
-		 * an external interrupt occurs (which are driven prior
-		 * to returning here) or if a prod occurs from another
-		 * processor. When returning here, external interrupts
-		 * are enabled.
-		 */
-		cede_processor();
+	/*
+	 * Yield the processor to the hypervisor. We return if
+	 * an external interrupt occurs (which are driven prior
+	 * to returning here) or if a prod occurs from another
+	 * processor. When returning here, external interrupts
+	 * are enabled.
+	 */
+	cede_processor();
 
-		get_lppaca()->idle = 0;
-	}
+	get_lppaca()->idle = 0;
 }
 
 /*
Hi Nicolas,
On 01/27/2014 11:38 AM, Nicolas Pitre wrote:
The core idle loop now takes care of it. However a few things need checking:
Invocation of cpuidle_idle_call() in pseries_lpar_idle() happened through arch_cpu_idle() and was therefore always preceded by a call to ppc64_runlatch_off(). To preserve this property now that cpuidle_idle_call() is invoked directly from core code, a call to ppc64_runlatch_off() has been added to idle_loop_prolog() in platforms/pseries/processor_idle.c.
Similarly, cpuidle_idle_call() was followed by ppc64_runlatch_on(), so a call to the latter has been added to idle_loop_epilog().
And since arch_cpu_idle() always made sure to re-enable IRQs if they were not enabled, this is now done in idle_loop_epilog() as well.
The above was done in order to keep the execution flow close to the original. I don't know if that was strictly necessary. Someone well acquainted with the platform details might find some room for possible optimizations.
Signed-off-by: Nicolas Pitre nico@linaro.org
arch/powerpc/platforms/pseries/processor_idle.c | 5 ++++ arch/powerpc/platforms/pseries/setup.c | 34 ++++++++++--------------- 2 files changed, 19 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c index a166e38bd6..72ddfe3d2f 100644 --- a/arch/powerpc/platforms/pseries/processor_idle.c +++ b/arch/powerpc/platforms/pseries/processor_idle.c @@ -33,6 +33,7 @@ static struct cpuidle_state *cpuidle_state_table;
 static inline void idle_loop_prolog(unsigned long *in_purr)
 {
+	ppc64_runlatch_off();
 	*in_purr = mfspr(SPRN_PURR);
 	/*
 	 * Indicate to the HV that we are idle. Now would be
@@ -49,6 +50,10 @@ static inline void idle_loop_epilog(unsigned long in_purr)
 	wait_cycles += mfspr(SPRN_PURR) - in_purr;
 	get_lppaca()->wait_state_cycles = cpu_to_be64(wait_cycles);
 	get_lppaca()->idle = 0;
+
+	if (irqs_disabled())
+		local_irq_enable();
+	ppc64_runlatch_on();
 }
static int snooze_loop(struct cpuidle_device *dev, diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c index c1f1908587..7604c19d54 100644 --- a/arch/powerpc/platforms/pseries/setup.c +++ b/arch/powerpc/platforms/pseries/setup.c @@ -39,7 +39,6 @@ #include <linux/irq.h> #include <linux/seq_file.h> #include <linux/root_dev.h> -#include <linux/cpuidle.h> #include <linux/of.h> #include <linux/kexec.h>
@@ -356,29 +355,24 @@ early_initcall(alloc_dispatch_log_kmem_cache);
 static void pseries_lpar_idle(void)
 {
-	/* This would call on the cpuidle framework, and the back-end pseries
-	 * driver to go to idle states
+	/*
+	 * Default handler to go into low thread priority and possibly
+	 * low power mode by cedeing processor to hypervisor
 	 */
-	if (cpuidle_idle_call()) {
-		/* On error, execute default handler
-		 * to go into low thread priority and possibly
-		 * low power mode by cedeing processor to hypervisor
-		 */
 
-		/* Indicate to hypervisor that we are idle. */
-		get_lppaca()->idle = 1;
+	/* Indicate to hypervisor that we are idle. */
+	get_lppaca()->idle = 1;
 
-		/*
-		 * Yield the processor to the hypervisor. We return if
-		 * an external interrupt occurs (which are driven prior
-		 * to returning here) or if a prod occurs from another
-		 * processor. When returning here, external interrupts
-		 * are enabled.
-		 */
-		cede_processor();
+	/*
+	 * Yield the processor to the hypervisor. We return if
+	 * an external interrupt occurs (which are driven prior
+	 * to returning here) or if a prod occurs from another
+	 * processor. When returning here, external interrupts
+	 * are enabled.
+	 */
+	cede_processor();
 
-		get_lppaca()->idle = 0;
-	}
+	get_lppaca()->idle = 0;
 }
 
 /*
Reviewed-by: Preeti U Murthy preeti@linux.vnet.ibm.com
The consequence of this is that other Power platforms like PowerNV will need to invoke ppc64_runlatch_off() and ppc64_runlatch_on() in each of their idle routines, since idle_loop_prolog() and idle_loop_epilog() are not invoked by them; but we will take care of this.
Regards Preeti U Murthy
The core idle loop now takes care of it.
Signed-off-by: Nicolas Pitre nico@linaro.org --- arch/sh/kernel/idle.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/arch/sh/kernel/idle.c b/arch/sh/kernel/idle.c index 2ea4483fd7..be616ee0cf 100644 --- a/arch/sh/kernel/idle.c +++ b/arch/sh/kernel/idle.c @@ -16,7 +16,6 @@ #include <linux/thread_info.h> #include <linux/irqflags.h> #include <linux/smp.h> -#include <linux/cpuidle.h> #include <linux/atomic.h> #include <asm/pgalloc.h> #include <asm/smp.h> @@ -40,8 +39,7 @@ void arch_cpu_idle_dead(void)
 void arch_cpu_idle(void)
 {
-	if (cpuidle_idle_call())
-		sh_idle();
+	sh_idle();
 }
void __init select_idle_routine(void)
On 01/27/2014 07:08 AM, Nicolas Pitre wrote:
The core idle loop now takes care of it.
Signed-off-by: Nicolas Pitre nico@linaro.org
Acked-by: Daniel Lezcano daniel.lezcano@linaro.org
arch/sh/kernel/idle.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/arch/sh/kernel/idle.c b/arch/sh/kernel/idle.c index 2ea4483fd7..be616ee0cf 100644 --- a/arch/sh/kernel/idle.c +++ b/arch/sh/kernel/idle.c @@ -16,7 +16,6 @@ #include <linux/thread_info.h> #include <linux/irqflags.h> #include <linux/smp.h> -#include <linux/cpuidle.h> #include <linux/atomic.h> #include <asm/pgalloc.h> #include <asm/smp.h> @@ -40,8 +39,7 @@ void arch_cpu_idle_dead(void)
 void arch_cpu_idle(void)
 {
-	if (cpuidle_idle_call())
-		sh_idle();
+	sh_idle();
 }
void __init select_idle_routine(void)
The core idle loop now takes care of it.
Signed-off-by: Nicolas Pitre nico@linaro.org --- arch/x86/kernel/process.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 3fb8d95ab8..4505e2a950 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c
@@ -298,10 +298,7 @@ void arch_cpu_idle_dead(void)
  */
 void arch_cpu_idle(void)
 {
-	if (cpuidle_idle_call())
-		x86_idle();
-	else
-		local_irq_enable();
+	x86_idle();
 }
/*
On 01/27/2014 07:08 AM, Nicolas Pitre wrote:
The core idle loop now takes care of it.
Signed-off-by: Nicolas Pitre nico@linaro.org
Acked-by: Daniel Lezcano daniel.lezcano@linaro.org
arch/x86/kernel/process.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 3fb8d95ab8..4505e2a950 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c
@@ -298,10 +298,7 @@ void arch_cpu_idle_dead(void)
  */
 void arch_cpu_idle(void)
 {
-	if (cpuidle_idle_call())
-		x86_idle();
-	else
-		local_irq_enable();
For the record, it was pointless to enable local IRQs here because that is handled by the cpuidle framework when exiting the idle state.
+	x86_idle();
 }
/*
Integration of cpuidle with the scheduler requires that the idle loop be closely integrated with the scheduler proper. Moving cpu/idle.c into the sched directory will allow for a smoother integration, and eliminate a subdirectory which contained only one source file.
Signed-off-by: Nicolas Pitre nico@linaro.org --- kernel/Makefile | 1 - kernel/cpu/Makefile | 1 - kernel/sched/Makefile | 2 +- kernel/{cpu => sched}/idle.c | 0 4 files changed, 1 insertion(+), 3 deletions(-) delete mode 100644 kernel/cpu/Makefile rename kernel/{cpu => sched}/idle.c (100%)
diff --git a/kernel/Makefile b/kernel/Makefile index bc010ee272..6f1c7e5cfc 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -22,7 +22,6 @@ obj-y += sched/ obj-y += locking/ obj-y += power/ obj-y += printk/ -obj-y += cpu/ obj-y += irq/ obj-y += rcu/
diff --git a/kernel/cpu/Makefile b/kernel/cpu/Makefile deleted file mode 100644 index 59ab052ef7..0000000000 --- a/kernel/cpu/Makefile +++ /dev/null @@ -1 +0,0 @@ -obj-y = idle.o diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile index 7b621409cf..ac3e0ea68f 100644 --- a/kernel/sched/Makefile +++ b/kernel/sched/Makefile @@ -11,7 +11,7 @@ ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y) CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer endif
-obj-y += core.o proc.o clock.o cputime.o idle_task.o fair.o rt.o stop_task.o +obj-y += core.o proc.o clock.o cputime.o idle_task.o idle.o fair.o rt.o stop_task.o obj-y += wait.o completion.o obj-$(CONFIG_SMP) += cpupri.o obj-$(CONFIG_SCHED_AUTOGROUP) += auto_group.o diff --git a/kernel/cpu/idle.c b/kernel/sched/idle.c similarity index 100% rename from kernel/cpu/idle.c rename to kernel/sched/idle.c
On Mon, Jan 27, 2014 at 01:08:15AM -0500, Nicolas Pitre wrote:
As everyone should know by now, we want to integrate the cpuidle governor with the scheduler for a more efficient idling of CPUs. In order to help the transition, this small patch series moves the existing interaction with cpuidle from architecture code to generic core code. No functional change should have occurred yet.
The ARM, PPC, SH and X86 architectures are concerned. Small cleanups to ARM and ARM64 are also included. I don't know yet the best path for those patches to get into mainline, but it is probably best if they stay together. So ACKs from architecture maintainers would be greatly appreciated.
arch/arm/kernel/process.c | 21 +++--------- arch/arm/kernel/setup.c | 7 ++++ arch/arm64/kernel/process.c | 5 --- arch/arm64/kernel/setup.c | 7 ++++ arch/powerpc/platforms/pseries/processor_idle.c | 5 +++ arch/powerpc/platforms/pseries/setup.c | 34 ++++++++----------- arch/sh/kernel/idle.c | 4 +-- arch/x86/kernel/process.c | 5 +-- include/linux/cpu.h | 1 - kernel/Makefile | 1 - kernel/cpu/Makefile | 1 - kernel/sched/Makefile | 2 +- kernel/{cpu => sched}/idle.c | 6 ++-- 13 files changed, 44 insertions(+), 55 deletions(-)
Thomas, any objections to this? It looks like a sensible thing to do.