The upstream versions of the MCPM code for vexpress contain several fixes and tidy-ups in the cache disabling sequences. This patch series backports all of these to LSK.
These patches are also available in a branch to pull...
The following changes since commit 4bb2d496b52029fc12322af09f1a5dda95affdba:
ARM: vexpress: tc2: fix hotplug/idle/kexec race on cluster power down (2013-11-29 16:14:49 +0000)
are available in the git repository at:
git://git.linaro.org/people/tixy/kernel.git for-lsk-tc2
for you to fetch changes up to 27d55bacd5002422f8dab3ee6f941a2e23eb38c5:
ARM: 7861/1: cacheflush: consolidate single-CPU ARMv7 cache disabling code (2013-12-02 12:54:17 +0000)
----------------------------------------------------------------
Jon Medhurst (1):
      ARM: vexpress/TC2: Match mainline cache disabling sequence in tc2_pm_down

Nicolas Pitre (3):
      ARM: vexpress/dcscb: fix cache disabling sequences
      ARM: vexpress/MCPM: fix cache disable sequence when CONFIG_FRAME_POINTER=y
      ARM: 7861/1: cacheflush: consolidate single-CPU ARMv7 cache disabling code

 arch/arm/include/asm/cacheflush.h | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/arm/mach-vexpress/dcscb.c    | 32 ++++----------------------------
 arch/arm/mach-vexpress/tc2_pm.c   | 30 ++++++++++++++----------------
 3 files changed, 64 insertions(+), 44 deletions(-)
From: Jon Medhurst <tixy@linaro.org>
When the TC2 pm code was finally upstreamed [1], the cache disabling sequence had been modified to avoid some potential race conditions. So let's backport these changes.
[1] Commit 11b277eabe70 ("ARM: vexpress/TC2: basic PM support")
Signed-off-by: Jon Medhurst <tixy@linaro.org>
---
 arch/arm/mach-vexpress/tc2_pm.c | 64 ++++++++++++++++++++++++++++++---------
 1 file changed, 49 insertions(+), 15 deletions(-)
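For reference, the sequence being replaced boils down to the following C statements (illustrative only, reconstructed from the lines removed below). The hazard is that these are separate C calls, so the compiler is free to generate memory accesses, including stack accesses, between and inside them after the cache has already been disabled:

	/* Old, racy shape: compiler-generated stack traffic can occur
	 * after SCTLR.C has been cleared but before the flush is done. */
	set_cr(get_cr() & ~CR_C);		/* disable D-cache allocation */
	flush_cache_all();			/* flush all cache levels */
	asm volatile ("clrex");			/* clear exclusive monitor state */
	set_auxcr(get_auxcr() & ~(1 << 6));	/* clear ACTLR "SMP" bit */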
diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c
index 2b519ee..b4b090c 100644
--- a/arch/arm/mach-vexpress/tc2_pm.c
+++ b/arch/arm/mach-vexpress/tc2_pm.c
@@ -135,20 +135,40 @@ static void tc2_pm_down(u64 residency)
 	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
 		arch_spin_unlock(&tc2_pm_lock);

-		set_cr(get_cr() & ~CR_C);
-		flush_cache_all();
-		asm volatile ("clrex");
-		set_auxcr(get_auxcr() & ~(1 << 6));
-
-		cci_disable_port_by_cpu(mpidr);
+		if (read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A15) {
+			/*
+			 * On the Cortex-A15 we need to disable
+			 * L2 prefetching before flushing the cache.
+			 */
+			asm volatile(
+			"mcr	p15, 1, %0, c15, c0, 3 \n\t"
+			"isb	\n\t"
+			"dsb	"
+			: : "r" (0x400) );
+		}

 		/*
-		 * Ensure that both C & I bits are disabled in the SCTLR
-		 * before disabling ACE snoops. This ensures that no
-		 * coherency traffic will originate from this cpu after
-		 * ACE snoops are turned off.
+		 * We need to disable and flush the whole (L1 and L2) cache.
+		 * Let's do it in the safest possible way i.e. with
+		 * no memory access within the following sequence
+		 * including the stack.
 		 */
-		cpu_proc_fin();
+		asm volatile(
+		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
+		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
+		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
+		"isb	\n\t"
+		"bl	v7_flush_dcache_all \n\t"
+		"clrex	\n\t"
+		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
+		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
+		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
+		"isb	\n\t"
+		"dsb	"
+		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
+		      "r9","r10","r11","lr","memory");
+
+		cci_disable_port_by_cpu(mpidr);

 		__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
 	} else {
@@ -162,10 +182,24 @@ static void tc2_pm_down(u64 residency)

 		arch_spin_unlock(&tc2_pm_lock);

-		set_cr(get_cr() & ~CR_C);
-		flush_cache_louis();
-		asm volatile ("clrex");
-		set_auxcr(get_auxcr() & ~(1 << 6));
+		/*
+		 * We need to disable and flush only the L1 cache.
+		 * Let's do it in the safest possible way as above.
+		 */
+		asm volatile(
+		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
+		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
+		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
+		"isb	\n\t"
+		"bl	v7_flush_dcache_louis \n\t"
+		"clrex	\n\t"
+		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
+		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
+		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
+		"isb	\n\t"
+		"dsb	"
+		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
+		      "r9","r10","r11","lr","memory");
 	}

 	__mcpm_cpu_down(cpu, cluster);
From: Nicolas Pitre <nicolas.pitre@linaro.org>
Commit e8f9bb1bd6bb93fff773345cc54c42585e0e3ece upstream
Unlike real A15/A7s, the RTSM simulation doesn't appear to hit the cache when the SCTLR.C bit is cleared. Let's ensure there is no memory access within the cache disable and flush sequence, including to the stack.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
---
 arch/arm/mach-vexpress/dcscb.c | 58 +++++++++++++++++++++++++---------------
 1 file changed, 37 insertions(+), 21 deletions(-)
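To condense what the hunks below do: everything from clearing SCTLR.C to dropping out of coherency now lives in a single asm statement, so no compiler-generated memory access can be scheduled in between. A minimal sketch of that shape (condensed; the full sequence, with clrex, the ACTLR "SMP" bit clear and the trailing isb/dsb, is in the patch itself):

	/* Sketch only: disable then flush inside one asm block.
	 * SCTLR.C is bit 2, i.e. the CR_C used in the real code. */
	asm volatile(
	"mrc	p15, 0, r0, c1, c0, 0	@ read SCTLR \n\t"
	"bic	r0, r0, #(1 << 2)	@ clear the C bit \n\t"
	"mcr	p15, 0, r0, c1, c0, 0	@ write SCTLR \n\t"
	"isb	\n\t"
	"bl	v7_flush_dcache_all	@ flush by set/way \n\t"
	: : : "r0","r1","r2","r3","r4","r5","r6","r7",
	      "r9","r10","r11","lr","memory");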
diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c
index 948aec0..df59395 100644
--- a/arch/arm/mach-vexpress/dcscb.c
+++ b/arch/arm/mach-vexpress/dcscb.c
@@ -137,14 +137,29 @@ static void dcscb_power_down(void)
 		/*
 		 * Flush all cache levels for this cluster.
 		 *
-		 * A15/A7 can hit in the cache with SCTLR.C=0, so we don't need
-		 * a preliminary flush here for those CPUs. At least, that's
-		 * the theory -- without the extra flush, Linux explodes on
-		 * RTSM (to be investigated).
+		 * To do so we do:
+		 * - Clear the SCTLR.C bit to prevent further cache allocations
+		 * - Flush the whole cache
+		 * - Clear the ACTLR "SMP" bit to disable local coherency
+		 *
+		 * Let's do it in the safest possible way i.e. with
+		 * no memory access within the following sequence
+		 * including to the stack.
 		 */
-		flush_cache_all();
-		set_cr(get_cr() & ~CR_C);
-		flush_cache_all();
+		asm volatile(
+		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
+		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
+		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
+		"isb	\n\t"
+		"bl	v7_flush_dcache_all \n\t"
+		"clrex	\n\t"
+		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
+		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
+		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
+		"isb	\n\t"
+		"dsb	"
+		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
+		      "r9","r10","r11","lr","memory");

 		/*
 		 * This is a harmless no-op.  On platforms with a real
@@ -153,9 +168,6 @@ static void dcscb_power_down(void)
 		 */
 		outer_flush_all();

-		/* Disable local coherency by clearing the ACTLR "SMP" bit: */
-		set_auxcr(get_auxcr() & ~(1 << 6));
-
 		/*
 		 * Disable cluster-level coherency by masking
 		 * incoming snoops and DVM messages:
@@ -168,18 +180,22 @@ static void dcscb_power_down(void)

 		/*
 		 * Flush the local CPU cache.
-		 *
-		 * A15/A7 can hit in the cache with SCTLR.C=0, so we don't need
-		 * a preliminary flush here for those CPUs. At least, that's
-		 * the theory -- without the extra flush, Linux explodes on
-		 * RTSM (to be investigated).
+		 * Let's do it in the safest possible way as above.
 		 */
-		flush_cache_louis();
-		set_cr(get_cr() & ~CR_C);
-		flush_cache_louis();
-
-		/* Disable local coherency by clearing the ACTLR "SMP" bit: */
-		set_auxcr(get_auxcr() & ~(1 << 6));
+		asm volatile(
+		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
+		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
+		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
+		"isb	\n\t"
+		"bl	v7_flush_dcache_louis \n\t"
+		"clrex	\n\t"
+		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
+		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
+		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
+		"isb	\n\t"
+		"dsb	"
+		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
+		      "r9","r10","r11","lr","memory");
 	}

 	__mcpm_cpu_down(cpu, cluster);
From: Nicolas Pitre <nicolas.pitre@linaro.org>
Commit fac2e57742d9aa3dbe41860280352efda9d5566e upstream
If CONFIG_FRAME_POINTER=y we get the following error:
  arch/arm/mach-vexpress/tc2_pm.c: In function 'tc2_pm_down':
  arch/arm/mach-vexpress/tc2_pm.c:200:1: error: fp cannot be used in asm here
Let's fix that by explicitly preserving r11 on the stack and removing it from the clobber list.
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
---
 arch/arm/mach-vexpress/dcscb.c  | 16 ++++++++++++----
 arch/arm/mach-vexpress/tc2_pm.c | 16 ++++++++++++----
 2 files changed, 24 insertions(+), 8 deletions(-)
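For anyone wanting to reproduce the diagnostic outside the kernel, a hypothetical minimal test case (not part of this series) hits the same error when built with frame pointers enabled (-fno-omit-frame-pointer, which is what CONFIG_FRAME_POINTER=y implies):

	/* fp-clobber.c -- hypothetical reproduction, not from the patch */
	void broken(void)
	{
		/* GCC rejects clobbering the register currently serving as
		 * the frame pointer: "error: fp cannot be used in asm here" */
		asm volatile("" : : : "fp");
	}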
diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c
index df59395..19a7d08 100644
--- a/arch/arm/mach-vexpress/dcscb.c
+++ b/arch/arm/mach-vexpress/dcscb.c
@@ -145,8 +145,13 @@ static void dcscb_power_down(void)
 		 * Let's do it in the safest possible way i.e. with
 		 * no memory access within the following sequence
 		 * including to the stack.
+		 *
+		 * Note: fp is preserved to the stack explicitly prior doing
+		 * this since adding it to the clobber list is incompatible
+		 * with having CONFIG_FRAME_POINTER=y.
 		 */
 		asm volatile(
+		"str	fp, [sp, #-4]! \n\t"
 		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
 		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
 		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
@@ -157,9 +162,10 @@ static void dcscb_power_down(void)
 		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
 		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
 		"isb	\n\t"
-		"dsb	"
+		"dsb	\n\t"
+		"ldr	fp, [sp], #4"
 		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-		      "r9","r10","r11","lr","memory");
+		      "r9","r10","lr","memory");

 		/*
 		 * This is a harmless no-op.  On platforms with a real
@@ -183,6 +189,7 @@ static void dcscb_power_down(void)
 		 * Let's do it in the safest possible way as above.
 		 */
 		asm volatile(
+		"str	fp, [sp, #-4]! \n\t"
 		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
 		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
 		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
@@ -193,9 +200,10 @@ static void dcscb_power_down(void)
 		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
 		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
 		"isb	\n\t"
-		"dsb	"
+		"dsb	\n\t"
+		"ldr	fp, [sp], #4"
 		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-		      "r9","r10","r11","lr","memory");
+		      "r9","r10","lr","memory");
 	}

 	__mcpm_cpu_down(cpu, cluster);
diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c
index b4b090c..9221903 100644
--- a/arch/arm/mach-vexpress/tc2_pm.c
+++ b/arch/arm/mach-vexpress/tc2_pm.c
@@ -152,8 +152,13 @@ static void tc2_pm_down(u64 residency)
 		 * Let's do it in the safest possible way i.e. with
 		 * no memory access within the following sequence
 		 * including the stack.
+		 *
+		 * Note: fp is preserved to the stack explicitly prior doing
+		 * this since adding it to the clobber list is incompatible
+		 * with having CONFIG_FRAME_POINTER=y.
 		 */
 		asm volatile(
+		"str	fp, [sp, #-4]! \n\t"
 		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
 		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
 		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
@@ -164,9 +169,10 @@ static void tc2_pm_down(u64 residency)
 		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
 		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
 		"isb	\n\t"
-		"dsb	"
+		"dsb	\n\t"
+		"ldr	fp, [sp], #4"
 		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-		      "r9","r10","r11","lr","memory");
+		      "r9","r10","lr","memory");

 		cci_disable_port_by_cpu(mpidr);

@@ -187,6 +193,7 @@ static void tc2_pm_down(u64 residency)
 		 * Let's do it in the safest possible way as above.
 		 */
 		asm volatile(
+		"str	fp, [sp, #-4]! \n\t"
 		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
 		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
 		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
@@ -197,9 +204,10 @@ static void tc2_pm_down(u64 residency)
 		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
 		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
 		"isb	\n\t"
-		"dsb	"
+		"dsb	\n\t"
+		"ldr	fp, [sp], #4"
 		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-		      "r9","r10","r11","lr","memory");
+		      "r9","r10","lr","memory");
 	}

 	__mcpm_cpu_down(cpu, cluster);
From: Nicolas Pitre <nicolas.pitre@linaro.org>
Commit 39792c7cf3111d69dc4aa0923859d8b929e9039f upstream
This code is becoming duplicated in many places. So let's consolidate it into a handy macro that is known to be right and available for reuse.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
---
 arch/arm/include/asm/cacheflush.h | 46 ++++++++++++++++++++++++++++++
 arch/arm/mach-vexpress/dcscb.c    | 56 +++----------------------------------
 arch/arm/mach-vexpress/tc2_pm.c   | 48 ++-----------------------------
 3 files changed, 52 insertions(+), 98 deletions(-)
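With the macro in place, each call site reduces to a one-liner; the argument is pasted into the v7_flush_dcache_* symbol that the block branches to:

	/* Usage, as in the dcscb/tc2_pm hunks below: */
	v7_exit_coherency_flush(all);	/* last man: disable and flush L1+L2 */
	v7_exit_coherency_flush(louis);	/* other CPUs: flush to the Level of
					 * Unification (L1 only here) */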
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 17d0ae8..beb8702 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -436,4 +436,50 @@ static inline void __sync_cache_range_r(volatile void *p, size_t size)
 #define sync_cache_w(ptr) __sync_cache_range_w(ptr, sizeof *(ptr))
 #define sync_cache_r(ptr) __sync_cache_range_r(ptr, sizeof *(ptr))

+/*
+ * Disabling cache access for one CPU in an ARMv7 SMP system is tricky.
+ * To do so we must:
+ *
+ * - Clear the SCTLR.C bit to prevent further cache allocations
+ * - Flush the desired level of cache
+ * - Clear the ACTLR "SMP" bit to disable local coherency
+ *
+ * ... and so without any intervening memory access in between those steps,
+ * not even to the stack.
+ *
+ * WARNING -- After this has been called:
+ *
+ * - No ldrex/strex (and similar) instructions must be used.
+ * - The CPU is obviously no longer coherent with the other CPUs.
+ * - This is unlikely to work as expected if Linux is running non-secure.
+ *
+ * Note:
+ *
+ * - This is known to apply to several ARMv7 processor implementations,
+ *   however some exceptions may exist.  Caveat emptor.
+ *
+ * - The clobber list is dictated by the call to v7_flush_dcache_*.
+ *   fp is preserved to the stack explicitly prior disabling the cache
+ *   since adding it to the clobber list is incompatible with having
+ *   CONFIG_FRAME_POINTER=y.  ip is saved as well if ever r12-clobbering
+ *   trampolines are inserted by the linker and to keep sp 64-bit aligned.
+ */
+#define v7_exit_coherency_flush(level) \
+	asm volatile( \
+	"stmfd	sp!, {fp, ip} \n\t" \
+	"mrc	p15, 0, r0, c1, c0, 0	@ get SCTLR \n\t" \
+	"bic	r0, r0, #"__stringify(CR_C)" \n\t" \
+	"mcr	p15, 0, r0, c1, c0, 0	@ set SCTLR \n\t" \
+	"isb	\n\t" \
+	"bl	v7_flush_dcache_"__stringify(level)" \n\t" \
+	"clrex	\n\t" \
+	"mrc	p15, 0, r0, c1, c0, 1	@ get ACTLR \n\t" \
+	"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t" \
+	"mcr	p15, 0, r0, c1, c0, 1	@ set ACTLR \n\t" \
+	"isb	\n\t" \
+	"dsb	\n\t" \
+	"ldmfd	sp!, {fp, ip}" \
+	: : : "r0","r1","r2","r3","r4","r5","r6","r7", \
+	      "r9","r10","lr","memory" )
+
 #endif
diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c
index 19a7d08..b35700f 100644
--- a/arch/arm/mach-vexpress/dcscb.c
+++ b/arch/arm/mach-vexpress/dcscb.c
@@ -134,38 +134,8 @@ static void dcscb_power_down(void)
 	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
 		arch_spin_unlock(&dcscb_lock);

-		/*
-		 * Flush all cache levels for this cluster.
-		 *
-		 * To do so we do:
-		 * - Clear the SCTLR.C bit to prevent further cache allocations
-		 * - Flush the whole cache
-		 * - Clear the ACTLR "SMP" bit to disable local coherency
-		 *
-		 * Let's do it in the safest possible way i.e. with
-		 * no memory access within the following sequence
-		 * including to the stack.
-		 *
-		 * Note: fp is preserved to the stack explicitly prior doing
-		 * this since adding it to the clobber list is incompatible
-		 * with having CONFIG_FRAME_POINTER=y.
-		 */
-		asm volatile(
-		"str	fp, [sp, #-4]! \n\t"
-		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
-		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
-		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
-		"isb	\n\t"
-		"bl	v7_flush_dcache_all \n\t"
-		"clrex	\n\t"
-		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
-		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
-		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
-		"isb	\n\t"
-		"dsb	\n\t"
-		"ldr	fp, [sp], #4"
-		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-		      "r9","r10","lr","memory");
+		/* Flush all cache levels for this cluster. */
+		v7_exit_coherency_flush(all);

 		/*
 		 * This is a harmless no-op.  On platforms with a real
@@ -184,26 +154,8 @@ static void dcscb_power_down(void)
 	} else {
 		arch_spin_unlock(&dcscb_lock);

-		/*
-		 * Flush the local CPU cache.
-		 * Let's do it in the safest possible way as above.
-		 */
-		asm volatile(
-		"str	fp, [sp, #-4]! \n\t"
-		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
-		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
-		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
-		"isb	\n\t"
-		"bl	v7_flush_dcache_louis \n\t"
-		"clrex	\n\t"
-		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
-		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
-		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
-		"isb	\n\t"
-		"dsb	\n\t"
-		"ldr	fp, [sp], #4"
-		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-		      "r9","r10","lr","memory");
+		/* Disable and flush the local CPU cache. */
+		v7_exit_coherency_flush(louis);
 	}

 	__mcpm_cpu_down(cpu, cluster);
diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c
index 9221903..9fc264a 100644
--- a/arch/arm/mach-vexpress/tc2_pm.c
+++ b/arch/arm/mach-vexpress/tc2_pm.c
@@ -147,32 +147,7 @@ static void tc2_pm_down(u64 residency)
 			: : "r" (0x400) );
 		}

-		/*
-		 * We need to disable and flush the whole (L1 and L2) cache.
-		 * Let's do it in the safest possible way i.e. with
-		 * no memory access within the following sequence
-		 * including the stack.
-		 *
-		 * Note: fp is preserved to the stack explicitly prior doing
-		 * this since adding it to the clobber list is incompatible
-		 * with having CONFIG_FRAME_POINTER=y.
-		 */
-		asm volatile(
-		"str	fp, [sp, #-4]! \n\t"
-		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
-		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
-		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
-		"isb	\n\t"
-		"bl	v7_flush_dcache_all \n\t"
-		"clrex	\n\t"
-		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
-		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
-		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
-		"isb	\n\t"
-		"dsb	\n\t"
-		"ldr	fp, [sp], #4"
-		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-		      "r9","r10","lr","memory");
+		v7_exit_coherency_flush(all);

 		cci_disable_port_by_cpu(mpidr);

@@ -188,26 +163,7 @@ static void tc2_pm_down(u64 residency)

 		arch_spin_unlock(&tc2_pm_lock);

-		/*
-		 * We need to disable and flush only the L1 cache.
-		 * Let's do it in the safest possible way as above.
-		 */
-		asm volatile(
-		"str	fp, [sp, #-4]! \n\t"
-		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
-		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
-		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
-		"isb	\n\t"
-		"bl	v7_flush_dcache_louis \n\t"
-		"clrex	\n\t"
-		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
-		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
-		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
-		"isb	\n\t"
-		"dsb	\n\t"
-		"ldr	fp, [sp], #4"
-		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-		      "r9","r10","lr","memory");
+		v7_exit_coherency_flush(louis);
 	}

 	__mcpm_cpu_down(cpu, cluster);
On 12/02/2013 09:46 PM, Jon Medhurst (Tixy) wrote:
[...]
It looks fine to me. I merged them into the lsk-test branch; testing is ongoing, test report link: https://ci.linaro.org/jenkins/job/linux-linaro-stable-lsk-test/8/ If it passes testing, the patch set will be merged into lsk and lsk-android.
On Tue, 2013-12-03 at 14:41 +0800, Alex Shi wrote:
[...]
It looks fine to me. I merged them into the lsk-test branch; testing is ongoing, test report link: https://ci.linaro.org/jenkins/job/linux-linaro-stable-lsk-test/8/ If it passes testing, the patch set will be merged into lsk and lsk-android.
Thanks.
I was hoping this would also help with this bug: https://bugs.launchpad.net/linaro-big-little-system/+bug/1097213 It's difficult to tell though.
On 12/03/2013 06:09 PM, Jon Medhurst (Tixy) wrote:
Thanks.
I was hoping this would also help with this bug: https://bugs.launchpad.net/linaro-big-little-system/+bug/1097213 It's difficult to tell though.
applied.
But I can't see a connection between the cpu_idle null pointer dereference and your patch set. If it were a cache/memory coherence issue, it would happen anywhere.
Anyway, this patch set still looks fine and useful. :)
On Wed, 2013-12-04 at 11:19 +0800, Alex Shi wrote:
[...]
But I can't see a connection between the cpu_idle null pointer dereference and your patch set. If it were a cache/memory coherence issue, it would happen anywhere.
They do happen anywhere; the crashes don't have reproducible symptoms and are triggered by test code that hotplugs CPUs. So it seemed reasonable to hope that cache coherency bugs in the CPU power-down code might be the cause.