On 31 October 2012 07:30, Viresh Kumar viresh.kumar@linaro.org wrote:
Following is taken from Documentation/memory-barriers.txt:
(5) LOCK operations.
This acts as a one-way permeable barrier. It guarantees that all memory operations after the LOCK operation will appear to happen after the LOCK operation with respect to the other components of the system. Memory operations that occur before a LOCK operation may appear to happen after it completes.
(6) UNLOCK operations.
This also acts as a one-way permeable barrier. It guarantees that all memory operations before the UNLOCK operation will appear to happen before the UNLOCK operation with respect to the other components of the system. Memory operations that occur after an UNLOCK operation may appear to happen before it completes.
Because ARMv6 and above are weakly ordered, we must guarantee that the code inside the critical section executes after the lock is taken, hence the barrier after acquiring the lock.
Likewise, on unlock we must guarantee that the code inside the critical section executes before the lock is released, hence the smp_mb() at the beginning of the unlock path.
Adding Russell and Arnd in cc to correct my statements :)
This is the patch that added this behavior:
commit 6d9b37a3a80195d317887ff81aad6a58a66954b5
Author: Russell King <rmk@dyn-67.arm.linux.org.uk>
Date:   Tue Jul 26 19:44:26 2005 +0100
[PATCH] ARM SMP: Add ARMv6 memory barriers
Convert explicit gcc asm-based memory barriers into smp_mb() calls. These change between barrier() and the ARMv6 data memory barrier instruction depending on whether ARMv6 SMP is enabled.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
 include/asm-arm/bitops.h   |    4 ++--
 include/asm-arm/locks.h    |   36 ++++++++++++++++++++++++------------
 include/asm-arm/spinlock.h |   53 ++++++++++++++++++++++++++++++++++++++---------------
 include/asm-arm/system.h   |    5 +++++
 4 files changed, 69 insertions(+), 29 deletions(-)
diff --git a/include/asm-arm/spinlock.h b/include/asm-arm/spinlock.h
index 9705d5e..1f906d0 100644
--- a/include/asm-arm/spinlock.h
+++ b/include/asm-arm/spinlock.h
@@ -8,9 +8,10 @@
 /*
  * ARMv6 Spin-locking.
  *
- * We (exclusively) read the old value, and decrement it.  If it
- * hits zero, we may have won the lock, so we try (exclusively)
- * storing it.
+ * We exclusively read the old value.  If it is zero, we may have
+ * won the lock, so we try exclusively storing it.  A memory barrier
+ * is required after we get a lock, and before we release it, because
+ * V6 CPUs are assumed to have weakly ordered memory.
 *
 * Unlocked value: 0
 * Locked value: 1
@@ -41,7 +42,9 @@ static inline void _raw_spin_lock(spinlock_t *lock)
 "	bne	1b"
 	: "=&r" (tmp)
 	: "r" (&lock->lock), "r" (1)
-	: "cc", "memory");
+	: "cc");
+
+	smp_mb();
 }