Currently, architectures return inconsistent types for atomic64 ops. Some return long (e.g. powerpc), some return long long (e.g. arc), and some return s64 (e.g. x86).
This is a bit messy, and causes unnecessary pain (e.g. as values must be cast before they can be printed [1]).
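For instance (a hypothetical snippet, not taken from the series), printing a counter portably today requires an explicit cast, since "%lld" only matches where the return type happens to be long long:

	static atomic64_t cnt = ATOMIC64_INIT(0);		/* hypothetical counter */

	pr_info("count: %lld\n", (s64)atomic64_read(&cnt));	/* cast needed today */

On architectures where atomic64_read() returns long (e.g. powerpc), omitting the cast triggers -Wformat warnings.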
This series reworks all the atomic64 implementations to use s64 as the base type for atomic64_t (as discussed [2]), and to ensure that this type is consistently used for parameters and return values in the API, avoiding further problems in this area.
This series (based on v5.1-rc1) can also be found in my atomics/type-cleanup branch [3] on kernel.org.
Thanks, Mark.
[1] https://lkml.kernel.org/r/20190310183051.87303-1-cai@lca.pw [2] https://lkml.kernel.org/r/20190313091844.GA24390@hirez.programming.kicks-ass... [3] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git atomics/type-cleanup
Mark Rutland (18):
  locking/atomic: crypto: nx: prepare for atomic64_read() conversion
  locking/atomic: s390/pci: prepare for atomic64_read() conversion
  locking/atomic: generic: use s64 for atomic64
  locking/atomic: alpha: use s64 for atomic64
  locking/atomic: arc: use s64 for atomic64
  locking/atomic: arm: use s64 for atomic64
  locking/atomic: arm64: use s64 for atomic64
  locking/atomic: ia64: use s64 for atomic64
  locking/atomic: mips: use s64 for atomic64
  locking/atomic: powerpc: use s64 for atomic64
  locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument
  locking/atomic: riscv: use s64 for atomic64
  locking/atomic: s390: use s64 for atomic64
  locking/atomic: sparc: use s64 for atomic64
  locking/atomic: x86: use s64 for atomic64
  locking/atomic: use s64 for atomic64_t on 64-bit
  locking/atomic: crypto: nx: remove redundant casts
  locking/atomic: s390/pci: remove redundant casts
 arch/alpha/include/asm/atomic.h | 20 +++++------
 arch/arc/include/asm/atomic.h | 41 +++++++++++-----------
 arch/arm/include/asm/atomic.h | 50 +++++++++++++-------------
 arch/arm64/include/asm/atomic_ll_sc.h | 20 +++++------
 arch/arm64/include/asm/atomic_lse.h | 34 +++++++++---------
 arch/ia64/include/asm/atomic.h | 20 +++++------
 arch/mips/include/asm/atomic.h | 22 ++++++------
 arch/powerpc/include/asm/atomic.h | 44 +++++++++++------------
 arch/riscv/include/asm/atomic.h | 44 ++++++++++++-----------
 arch/s390/include/asm/atomic.h | 38 ++++++++++----------
 arch/s390/pci/pci_debug.c | 2 +-
 arch/sparc/include/asm/atomic_64.h | 8 ++---
 arch/x86/include/asm/atomic64_32.h | 66 +++++++++++++++++------------------
 arch/x86/include/asm/atomic64_64.h | 38 ++++++++++----------
 drivers/crypto/nx/nx-842-pseries.c | 6 ++--
 include/asm-generic/atomic64.h | 20 +++++------
 include/linux/types.h | 2 +-
 lib/atomic64.c | 32 ++++++++---------
 18 files changed, 252 insertions(+), 255 deletions(-)
The return type of atomic64_read() varies by architecture. It may return long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is somewhat painful, and mandates the use of explicit casts in some cases (e.g. when printing the return value).
To ameliorate matters, subsequent patches will make the atomic64 API consistently use s64.
As a preparatory step, this patch updates the nx-842 code to treat the return value of atomic64_read() as s64, using explicit casts. These casts will be removed once the s64 conversion is complete.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Herbert Xu herbert@gondor.apana.org.au
Cc: Peter Zijlstra peterz@infradead.org
Cc: Will Deacon will.deacon@arm.com
---
 drivers/crypto/nx/nx-842-pseries.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c index 57932848361b..9432e9e42afe 100644 --- a/drivers/crypto/nx/nx-842-pseries.c +++ b/drivers/crypto/nx/nx-842-pseries.c @@ -869,8 +869,8 @@ static ssize_t nx842_##_name##_show(struct device *dev, \ rcu_read_lock(); \ local_devdata = rcu_dereference(devdata); \ if (local_devdata) \ - p = snprintf(buf, PAGE_SIZE, "%ld\n", \ - atomic64_read(&local_devdata->counters->_name)); \ + p = snprintf(buf, PAGE_SIZE, "%lld\n", \ + (s64)atomic64_read(&local_devdata->counters->_name)); \ rcu_read_unlock(); \ return p; \ } @@ -922,17 +922,17 @@ static ssize_t nx842_timehist_show(struct device *dev, }
for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) { - bytes = snprintf(p, bytes_remain, "%u-%uus:\t%ld\n", + bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n", i ? (2<<(i-1)) : 0, (2<<i)-1, - atomic64_read(×[i])); + (s64)atomic64_read(×[i])); bytes_remain -= bytes; p += bytes; } /* The last bucket holds everything over * 2<<(NX842_HIST_SLOTS - 2) us */ - bytes = snprintf(p, bytes_remain, "%uus - :\t%ld\n", + bytes = snprintf(p, bytes_remain, "%uus - :\t%lld\n", 2<<(NX842_HIST_SLOTS - 2), - atomic64_read(×[(NX842_HIST_SLOTS - 1)])); + (s64)atomic64_read(×[(NX842_HIST_SLOTS - 1)])); p += bytes;
rcu_read_unlock();
The return type of atomic64_read() varies by architecture. It may return long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is somewhat painful, and mandates the use of explicit casts in some cases (e.g. when printing the return value).
To ameliorate matters, subsequent patches will make the atomic64 API consistently use s64.
As a preparatory step, this patch updates the s390 pci debug code to treat the return value of atomic64_read() as s64, using an explicit cast. This cast will be removed once the s64 conversion is complete.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Heiko Carstens heiko.carstens@de.ibm.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Will Deacon will.deacon@arm.com
---
 arch/s390/pci/pci_debug.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c index 6b48ca7760a7..45eccf79e990 100644 --- a/arch/s390/pci/pci_debug.c +++ b/arch/s390/pci/pci_debug.c @@ -74,8 +74,8 @@ static void pci_sw_counter_show(struct seq_file *m) int i;
for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++) - seq_printf(m, "%26s:\t%lu\n", pci_sw_names[i], - atomic64_read(counter)); + seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i], + (s64)atomic64_read(counter)); }
static int pci_perf_show(struct seq_file *m, void *v)
As a step towards making the atomic64 API use consistent types treewide, let's have the generic atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long long, matching the generated headers.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Will Deacon will.deacon@arm.com
---
 include/asm-generic/atomic64.h | 20 ++++++++++----------
 lib/atomic64.c | 32 ++++++++++++++++----------------
 2 files changed, 26 insertions(+), 26 deletions(-)
diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h index 97b28b7f1f29..fc7b831ed632 100644 --- a/include/asm-generic/atomic64.h +++ b/include/asm-generic/atomic64.h @@ -14,24 +14,24 @@ #include <linux/types.h>
typedef struct { - long long counter; + s64 counter; } atomic64_t;
#define ATOMIC64_INIT(i) { (i) }
-extern long long atomic64_read(const atomic64_t *v); -extern void atomic64_set(atomic64_t *v, long long i); +extern s64 atomic64_read(const atomic64_t *v); +extern void atomic64_set(atomic64_t *v, s64 i);
#define atomic64_set_release(v, i) atomic64_set((v), (i))
#define ATOMIC64_OP(op) \ -extern void atomic64_##op(long long a, atomic64_t *v); +extern void atomic64_##op(s64 a, atomic64_t *v);
#define ATOMIC64_OP_RETURN(op) \ -extern long long atomic64_##op##_return(long long a, atomic64_t *v); +extern s64 atomic64_##op##_return(s64 a, atomic64_t *v);
#define ATOMIC64_FETCH_OP(op) \ -extern long long atomic64_fetch_##op(long long a, atomic64_t *v); +extern s64 atomic64_fetch_##op(s64 a, atomic64_t *v);
#define ATOMIC64_OPS(op) ATOMIC64_OP(op) ATOMIC64_OP_RETURN(op) ATOMIC64_FETCH_OP(op)
@@ -50,11 +50,11 @@ ATOMIC64_OPS(xor) #undef ATOMIC64_OP_RETURN #undef ATOMIC64_OP
-extern long long atomic64_dec_if_positive(atomic64_t *v); +extern s64 atomic64_dec_if_positive(atomic64_t *v); #define atomic64_dec_if_positive atomic64_dec_if_positive -extern long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n); -extern long long atomic64_xchg(atomic64_t *v, long long new); -extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u); +extern s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n); +extern s64 atomic64_xchg(atomic64_t *v, s64 new); +extern s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u); #define atomic64_fetch_add_unless atomic64_fetch_add_unless
#endif /* _ASM_GENERIC_ATOMIC64_H */ diff --git a/lib/atomic64.c b/lib/atomic64.c index 1d91e31eceec..62f218bf50a0 100644 --- a/lib/atomic64.c +++ b/lib/atomic64.c @@ -46,11 +46,11 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v) return &atomic64_lock[addr & (NR_LOCKS - 1)].lock; }
-long long atomic64_read(const atomic64_t *v) +s64 atomic64_read(const atomic64_t *v) { unsigned long flags; raw_spinlock_t *lock = lock_addr(v); - long long val; + s64 val;
raw_spin_lock_irqsave(lock, flags); val = v->counter; @@ -59,7 +59,7 @@ long long atomic64_read(const atomic64_t *v) } EXPORT_SYMBOL(atomic64_read);
-void atomic64_set(atomic64_t *v, long long i) +void atomic64_set(atomic64_t *v, s64 i) { unsigned long flags; raw_spinlock_t *lock = lock_addr(v); @@ -71,7 +71,7 @@ void atomic64_set(atomic64_t *v, long long i) EXPORT_SYMBOL(atomic64_set);
#define ATOMIC64_OP(op, c_op) \ -void atomic64_##op(long long a, atomic64_t *v) \ +void atomic64_##op(s64 a, atomic64_t *v) \ { \ unsigned long flags; \ raw_spinlock_t *lock = lock_addr(v); \ @@ -83,11 +83,11 @@ void atomic64_##op(long long a, atomic64_t *v) \ EXPORT_SYMBOL(atomic64_##op);
#define ATOMIC64_OP_RETURN(op, c_op) \ -long long atomic64_##op##_return(long long a, atomic64_t *v) \ +s64 atomic64_##op##_return(s64 a, atomic64_t *v) \ { \ unsigned long flags; \ raw_spinlock_t *lock = lock_addr(v); \ - long long val; \ + s64 val; \ \ raw_spin_lock_irqsave(lock, flags); \ val = (v->counter c_op a); \ @@ -97,11 +97,11 @@ long long atomic64_##op##_return(long long a, atomic64_t *v) \ EXPORT_SYMBOL(atomic64_##op##_return);
#define ATOMIC64_FETCH_OP(op, c_op) \ -long long atomic64_fetch_##op(long long a, atomic64_t *v) \ +s64 atomic64_fetch_##op(s64 a, atomic64_t *v) \ { \ unsigned long flags; \ raw_spinlock_t *lock = lock_addr(v); \ - long long val; \ + s64 val; \ \ raw_spin_lock_irqsave(lock, flags); \ val = v->counter; \ @@ -134,11 +134,11 @@ ATOMIC64_OPS(xor, ^=) #undef ATOMIC64_OP_RETURN #undef ATOMIC64_OP
-long long atomic64_dec_if_positive(atomic64_t *v) +s64 atomic64_dec_if_positive(atomic64_t *v) { unsigned long flags; raw_spinlock_t *lock = lock_addr(v); - long long val; + s64 val;
raw_spin_lock_irqsave(lock, flags); val = v->counter - 1; @@ -149,11 +149,11 @@ long long atomic64_dec_if_positive(atomic64_t *v) } EXPORT_SYMBOL(atomic64_dec_if_positive);
-long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n) +s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n) { unsigned long flags; raw_spinlock_t *lock = lock_addr(v); - long long val; + s64 val;
raw_spin_lock_irqsave(lock, flags); val = v->counter; @@ -164,11 +164,11 @@ long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n) } EXPORT_SYMBOL(atomic64_cmpxchg);
-long long atomic64_xchg(atomic64_t *v, long long new) +s64 atomic64_xchg(atomic64_t *v, s64 new) { unsigned long flags; raw_spinlock_t *lock = lock_addr(v); - long long val; + s64 val;
raw_spin_lock_irqsave(lock, flags); val = v->counter; @@ -178,11 +178,11 @@ long long atomic64_xchg(atomic64_t *v, long long new) } EXPORT_SYMBOL(atomic64_xchg);
-long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u) +s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { unsigned long flags; raw_spinlock_t *lock = lock_addr(v); - long long val; + s64 val;
raw_spin_lock_irqsave(lock, flags); val = v->counter;
On Wed, May 22, 2019 at 3:23 PM Mark Rutland mark.rutland@arm.com wrote:
As a step towards making the atomic64 API use consistent types treewide, let's have the generic atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long long, matching the generated headers.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will.deacon@arm.com
Acked-by: Arnd Bergmann arnd@arndb.de
As a step towards making the atomic64 API use consistent types treewide, let's have the alpha atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.
As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch.
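For reference, this is because alpha (like several other architectures) defines atomic64_read() as a thin wrapper over the counter field, roughly:

	/* sketch: the return type follows atomic64_t::counter, which is
	 * still 'long' until the later treewide conversion patch. */
	#define atomic64_read(v)	READ_ONCE((v)->counter)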
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Ivan Kokshaysky ink@jurassic.park.msu.ru
Cc: Matt Turner mattst88@gmail.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Richard Henderson rth@twiddle.net
Cc: Will Deacon will.deacon@arm.com
---
 arch/alpha/include/asm/atomic.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index 150a1c5d6a2c..2144530d1428 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -93,9 +93,9 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ }
#define ATOMIC64_OP(op, asm_op) \ -static __inline__ void atomic64_##op(long i, atomic64_t * v) \ +static __inline__ void atomic64_##op(s64 i, atomic64_t * v) \ { \ - unsigned long temp; \ + s64 temp; \ __asm__ __volatile__( \ "1: ldq_l %0,%1\n" \ " " #asm_op " %0,%2,%0\n" \ @@ -109,9 +109,9 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v) \ } \
#define ATOMIC64_OP_RETURN(op, asm_op) \ -static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \ +static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v) \ { \ - long temp, result; \ + s64 temp, result; \ __asm__ __volatile__( \ "1: ldq_l %0,%1\n" \ " " #asm_op " %0,%3,%2\n" \ @@ -128,9 +128,9 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \ }
#define ATOMIC64_FETCH_OP(op, asm_op) \ -static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \ +static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v) \ { \ - long temp, result; \ + s64 temp, result; \ __asm__ __volatile__( \ "1: ldq_l %2,%1\n" \ " " #asm_op " %2,%3,%0\n" \ @@ -246,9 +246,9 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. */ -static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) +static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { - long c, new, old; + s64 c, new, old; smp_mb(); __asm__ __volatile__( "1: ldq_l %[old],%[mem]\n" @@ -276,9 +276,9 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) * The function returns the old value of *v minus 1, even if * the atomic variable, v, was not decremented. */ -static inline long atomic64_dec_if_positive(atomic64_t *v) +static inline s64 atomic64_dec_if_positive(atomic64_t *v) { - long old, tmp; + s64 old, tmp; smp_mb(); __asm__ __volatile__( "1: ldq_l %[old],%[mem]\n"
As a step towards making the atomic64 API use consistent types treewide, let's have the arc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than u64, matching the generated headers.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Vineet Gupta vgupta@synopsys.com
Cc: Will Deacon will.deacon@arm.com
---
 arch/arc/include/asm/atomic.h | 41 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 21 deletions(-)
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 158af079838d..2c75df55d0d2 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -324,14 +324,14 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3) */
typedef struct { - aligned_u64 counter; + s64 __aligned(8) counter; } atomic64_t;
#define ATOMIC64_INIT(a) { (a) }
-static inline long long atomic64_read(const atomic64_t *v) +static inline s64 atomic64_read(const atomic64_t *v) { - unsigned long long val; + s64 val;
__asm__ __volatile__( " ldd %0, [%1] \n" @@ -341,7 +341,7 @@ static inline long long atomic64_read(const atomic64_t *v) return val; }
-static inline void atomic64_set(atomic64_t *v, long long a) +static inline void atomic64_set(atomic64_t *v, s64 a) { /* * This could have been a simple assignment in "C" but would need @@ -362,9 +362,9 @@ static inline void atomic64_set(atomic64_t *v, long long a) }
#define ATOMIC64_OP(op, op1, op2) \ -static inline void atomic64_##op(long long a, atomic64_t *v) \ +static inline void atomic64_##op(s64 a, atomic64_t *v) \ { \ - unsigned long long val; \ + s64 val; \ \ __asm__ __volatile__( \ "1: \n" \ @@ -375,13 +375,13 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \ " bnz 1b \n" \ : "=&r"(val) \ : "r"(&v->counter), "ir"(a) \ - : "cc"); \ + : "cc"); \ } \
#define ATOMIC64_OP_RETURN(op, op1, op2) \ -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ +static inline s64 atomic64_##op##_return(s64 a, atomic64_t *v) \ { \ - unsigned long long val; \ + s64 val; \ \ smp_mb(); \ \ @@ -402,9 +402,9 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ }
#define ATOMIC64_FETCH_OP(op, op1, op2) \ -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ +static inline s64 atomic64_fetch_##op(s64 a, atomic64_t *v) \ { \ - unsigned long long val, orig; \ + s64 val, orig; \ \ smp_mb(); \ \ @@ -444,10 +444,10 @@ ATOMIC64_OPS(xor, xor, xor) #undef ATOMIC64_OP_RETURN #undef ATOMIC64_OP
-static inline long long -atomic64_cmpxchg(atomic64_t *ptr, long long expected, long long new) +static inline s64 +atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new) { - long long prev; + s64 prev;
smp_mb();
@@ -467,9 +467,9 @@ atomic64_cmpxchg(atomic64_t *ptr, long long expected, long long new) return prev; }
-static inline long long atomic64_xchg(atomic64_t *ptr, long long new) +static inline s64 atomic64_xchg(atomic64_t *ptr, s64 new) { - long long prev; + s64 prev;
smp_mb();
@@ -495,9 +495,9 @@ static inline long long atomic64_xchg(atomic64_t *ptr, long long new) * the atomic variable, v, was not decremented. */
-static inline long long atomic64_dec_if_positive(atomic64_t *v) +static inline s64 atomic64_dec_if_positive(atomic64_t *v) { - long long val; + s64 val;
smp_mb();
@@ -528,10 +528,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v) * Atomically adds @a to @v, if it was not @u. * Returns the old value of @v */ -static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, - long long u) +static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { - long long old, temp; + s64 old, temp;
smp_mb();
On 5/22/19 6:24 AM, Mark Rutland wrote:
As a step towards making the atomic64 API use consistent types treewide, let's have the arc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than u64, matching the generated headers.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Peter Zijlstra peterz@infradead.org Cc: Vineet Gupta vgupta@synopsys.com Cc: Will Deacon will.deacon@arm.com
Thx for the cleanup Mark.
Acked-By: Vineet Gupta vgupta@synopsys.com # for ARC bits
-Vineet
As a step towards making the atomic64 API use consistent types treewide, let's have the arm atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long long, matching the generated headers.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Russell King linux@armlinux.org.uk
Cc: Will Deacon will.deacon@arm.com
---
 arch/arm/include/asm/atomic.h | 50 +++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 26 deletions(-)
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index f74756641410..d45c41f6f69c 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -249,15 +249,15 @@ ATOMIC_OPS(xor, ^=, eor)
#ifndef CONFIG_GENERIC_ATOMIC64 typedef struct { - long long counter; + s64 counter; } atomic64_t;
#define ATOMIC64_INIT(i) { (i) }
#ifdef CONFIG_ARM_LPAE -static inline long long atomic64_read(const atomic64_t *v) +static inline s64 atomic64_read(const atomic64_t *v) { - long long result; + s64 result;
__asm__ __volatile__("@ atomic64_read\n" " ldrd %0, %H0, [%1]" @@ -268,7 +268,7 @@ static inline long long atomic64_read(const atomic64_t *v) return result; }
-static inline void atomic64_set(atomic64_t *v, long long i) +static inline void atomic64_set(atomic64_t *v, s64 i) { __asm__ __volatile__("@ atomic64_set\n" " strd %2, %H2, [%1]" @@ -277,9 +277,9 @@ static inline void atomic64_set(atomic64_t *v, long long i) ); } #else -static inline long long atomic64_read(const atomic64_t *v) +static inline s64 atomic64_read(const atomic64_t *v) { - long long result; + s64 result;
__asm__ __volatile__("@ atomic64_read\n" " ldrexd %0, %H0, [%1]" @@ -290,9 +290,9 @@ static inline long long atomic64_read(const atomic64_t *v) return result; }
-static inline void atomic64_set(atomic64_t *v, long long i) +static inline void atomic64_set(atomic64_t *v, s64 i) { - long long tmp; + s64 tmp;
prefetchw(&v->counter); __asm__ __volatile__("@ atomic64_set\n" @@ -307,9 +307,9 @@ static inline void atomic64_set(atomic64_t *v, long long i) #endif
#define ATOMIC64_OP(op, op1, op2) \ -static inline void atomic64_##op(long long i, atomic64_t *v) \ +static inline void atomic64_##op(s64 i, atomic64_t *v) \ { \ - long long result; \ + s64 result; \ unsigned long tmp; \ \ prefetchw(&v->counter); \ @@ -326,10 +326,10 @@ static inline void atomic64_##op(long long i, atomic64_t *v) \ } \
#define ATOMIC64_OP_RETURN(op, op1, op2) \ -static inline long long \ -atomic64_##op##_return_relaxed(long long i, atomic64_t *v) \ +static inline s64 \ +atomic64_##op##_return_relaxed(s64 i, atomic64_t *v) \ { \ - long long result; \ + s64 result; \ unsigned long tmp; \ \ prefetchw(&v->counter); \ @@ -349,10 +349,10 @@ atomic64_##op##_return_relaxed(long long i, atomic64_t *v) \ }
#define ATOMIC64_FETCH_OP(op, op1, op2) \ -static inline long long \ -atomic64_fetch_##op##_relaxed(long long i, atomic64_t *v) \ +static inline s64 \ +atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v) \ { \ - long long result, val; \ + s64 result, val; \ unsigned long tmp; \ \ prefetchw(&v->counter); \ @@ -406,10 +406,9 @@ ATOMIC64_OPS(xor, eor, eor) #undef ATOMIC64_OP_RETURN #undef ATOMIC64_OP
-static inline long long -atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new) +static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new) { - long long oldval; + s64 oldval; unsigned long res;
prefetchw(&ptr->counter); @@ -430,9 +429,9 @@ atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new) } #define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed
-static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new) +static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new) { - long long result; + s64 result; unsigned long tmp;
prefetchw(&ptr->counter); @@ -450,9 +449,9 @@ static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new) } #define atomic64_xchg_relaxed atomic64_xchg_relaxed
-static inline long long atomic64_dec_if_positive(atomic64_t *v) +static inline s64 atomic64_dec_if_positive(atomic64_t *v) { - long long result; + s64 result; unsigned long tmp;
smp_mb(); @@ -478,10 +477,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v) } #define atomic64_dec_if_positive atomic64_dec_if_positive
-static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, - long long u) +static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { - long long oldval, newval; + s64 oldval, newval; unsigned long tmp;
smp_mb();
As a step towards making the atomic64 API use consistent types treewide, let's have the arm64 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.
As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch.
Note that in arch_atomic64_dec_if_positive(), the x0 variable is left as long, as this variable is also used to hold the pointer to the atomic64_t.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Catalin Marinas catalin.marinas@arm.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Will Deacon will.deacon@arm.com
---
 arch/arm64/include/asm/atomic_ll_sc.h | 20 ++++++++++----------
 arch/arm64/include/asm/atomic_lse.h | 34 +++++++++++++++++-----------------
 2 files changed, 27 insertions(+), 27 deletions(-)
diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h index e321293e0c89..f3b12d7f431f 100644 --- a/arch/arm64/include/asm/atomic_ll_sc.h +++ b/arch/arm64/include/asm/atomic_ll_sc.h @@ -133,9 +133,9 @@ ATOMIC_OPS(xor, eor)
#define ATOMIC64_OP(op, asm_op) \ __LL_SC_INLINE void \ -__LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \ +__LL_SC_PREFIX(arch_atomic64_##op(s64 i, atomic64_t *v)) \ { \ - long result; \ + s64 result; \ unsigned long tmp; \ \ asm volatile("// atomic64_" #op "\n" \ @@ -150,10 +150,10 @@ __LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \ __LL_SC_EXPORT(arch_atomic64_##op);
#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \ -__LL_SC_INLINE long \ -__LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\ +__LL_SC_INLINE s64 \ +__LL_SC_PREFIX(arch_atomic64_##op##_return##name(s64 i, atomic64_t *v))\ { \ - long result; \ + s64 result; \ unsigned long tmp; \ \ asm volatile("// atomic64_" #op "_return" #name "\n" \ @@ -172,10 +172,10 @@ __LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\ __LL_SC_EXPORT(arch_atomic64_##op##_return##name);
#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \ -__LL_SC_INLINE long \ -__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(long i, atomic64_t *v)) \ +__LL_SC_INLINE s64 \ +__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v)) \ { \ - long result, val; \ + s64 result, val; \ unsigned long tmp; \ \ asm volatile("// atomic64_fetch_" #op #name "\n" \ @@ -225,10 +225,10 @@ ATOMIC64_OPS(xor, eor) #undef ATOMIC64_OP_RETURN #undef ATOMIC64_OP
-__LL_SC_INLINE long +__LL_SC_INLINE s64 __LL_SC_PREFIX(arch_atomic64_dec_if_positive(atomic64_t *v)) { - long result; + s64 result; unsigned long tmp;
asm volatile("// atomic64_dec_if_positive\n" diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h index 9256a3921e4b..c53832b08af7 100644 --- a/arch/arm64/include/asm/atomic_lse.h +++ b/arch/arm64/include/asm/atomic_lse.h @@ -224,9 +224,9 @@ ATOMIC_FETCH_OP_SUB( , al, "memory")
#define __LL_SC_ATOMIC64(op) __LL_SC_CALL(arch_atomic64_##op) #define ATOMIC64_OP(op, asm_op) \ -static inline void arch_atomic64_##op(long i, atomic64_t *v) \ +static inline void arch_atomic64_##op(s64 i, atomic64_t *v) \ { \ - register long x0 asm ("x0") = i; \ + register s64 x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ \ asm volatile(ARM64_LSE_ATOMIC_INSN(__LL_SC_ATOMIC64(op), \ @@ -244,9 +244,9 @@ ATOMIC64_OP(add, stadd) #undef ATOMIC64_OP
#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \ -static inline long arch_atomic64_fetch_##op##name(long i, atomic64_t *v)\ +static inline s64 arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \ { \ - register long x0 asm ("x0") = i; \ + register s64 x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ \ asm volatile(ARM64_LSE_ATOMIC_INSN( \ @@ -276,9 +276,9 @@ ATOMIC64_FETCH_OPS(add, ldadd) #undef ATOMIC64_FETCH_OPS
#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \ -static inline long arch_atomic64_add_return##name(long i, atomic64_t *v)\ +static inline s64 arch_atomic64_add_return##name(s64 i, atomic64_t *v) \ { \ - register long x0 asm ("x0") = i; \ + register s64 x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ \ asm volatile(ARM64_LSE_ATOMIC_INSN( \ @@ -302,9 +302,9 @@ ATOMIC64_OP_ADD_RETURN( , al, "memory")
#undef ATOMIC64_OP_ADD_RETURN
-static inline void arch_atomic64_and(long i, atomic64_t *v) +static inline void arch_atomic64_and(s64 i, atomic64_t *v) { - register long x0 asm ("x0") = i; + register s64 x0 asm ("x0") = i; register atomic64_t *x1 asm ("x1") = v;
asm volatile(ARM64_LSE_ATOMIC_INSN( @@ -320,9 +320,9 @@ static inline void arch_atomic64_and(long i, atomic64_t *v) }
#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \ -static inline long arch_atomic64_fetch_and##name(long i, atomic64_t *v) \ +static inline s64 arch_atomic64_fetch_and##name(s64 i, atomic64_t *v) \ { \ - register long x0 asm ("x0") = i; \ + register s64 x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ \ asm volatile(ARM64_LSE_ATOMIC_INSN( \ @@ -346,9 +346,9 @@ ATOMIC64_FETCH_OP_AND( , al, "memory")
#undef ATOMIC64_FETCH_OP_AND
-static inline void arch_atomic64_sub(long i, atomic64_t *v) +static inline void arch_atomic64_sub(s64 i, atomic64_t *v) { - register long x0 asm ("x0") = i; + register s64 x0 asm ("x0") = i; register atomic64_t *x1 asm ("x1") = v;
asm volatile(ARM64_LSE_ATOMIC_INSN( @@ -364,9 +364,9 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v) }
#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \ -static inline long arch_atomic64_sub_return##name(long i, atomic64_t *v)\ +static inline s64 arch_atomic64_sub_return##name(s64 i, atomic64_t *v) \ { \ - register long x0 asm ("x0") = i; \ + register s64 x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ \ asm volatile(ARM64_LSE_ATOMIC_INSN( \ @@ -392,9 +392,9 @@ ATOMIC64_OP_SUB_RETURN( , al, "memory") #undef ATOMIC64_OP_SUB_RETURN
#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \ -static inline long arch_atomic64_fetch_sub##name(long i, atomic64_t *v) \ +static inline s64 arch_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \ { \ - register long x0 asm ("x0") = i; \ + register s64 x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ \ asm volatile(ARM64_LSE_ATOMIC_INSN( \ @@ -418,7 +418,7 @@ ATOMIC64_FETCH_OP_SUB( , al, "memory")
#undef ATOMIC64_FETCH_OP_SUB
-static inline long arch_atomic64_dec_if_positive(atomic64_t *v) +static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v) { register long x0 asm ("x0") = (long)v;
As a step towards making the atomic64 API use consistent types treewide, let's have the ia64 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long or __s64, matching the generated headers.
As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Fenghua Yu fenghua.yu@intel.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Tony Luck tony.luck@intel.com
Cc: Will Deacon will.deacon@arm.com
---
 arch/ia64/include/asm/atomic.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index 206530d0751b..50440f3ddc43 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -124,10 +124,10 @@ ATOMIC_FETCH_OP(xor, ^) #undef ATOMIC_OP
#define ATOMIC64_OP(op, c_op) \ -static __inline__ long \ -ia64_atomic64_##op (__s64 i, atomic64_t *v) \ +static __inline__ s64 \ +ia64_atomic64_##op (s64 i, atomic64_t *v) \ { \ - __s64 old, new; \ + s64 old, new; \ CMPXCHG_BUGCHECK_DECL \ \ do { \ @@ -139,10 +139,10 @@ ia64_atomic64_##op (__s64 i, atomic64_t *v) \ }
#define ATOMIC64_FETCH_OP(op, c_op) \ -static __inline__ long \ -ia64_atomic64_fetch_##op (__s64 i, atomic64_t *v) \ +static __inline__ s64 \ +ia64_atomic64_fetch_##op (s64 i, atomic64_t *v) \ { \ - __s64 old, new; \ + s64 old, new; \ CMPXCHG_BUGCHECK_DECL \ \ do { \ @@ -162,7 +162,7 @@ ATOMIC64_OPS(sub, -)
#define atomic64_add_return(i,v) \ ({ \ - long __ia64_aar_i = (i); \ + s64 __ia64_aar_i = (i); \ __ia64_atomic_const(i) \ ? ia64_fetch_and_add(__ia64_aar_i, &(v)->counter) \ : ia64_atomic64_add(__ia64_aar_i, v); \ @@ -170,7 +170,7 @@ ATOMIC64_OPS(sub, -)
#define atomic64_sub_return(i,v) \ ({ \ - long __ia64_asr_i = (i); \ + s64 __ia64_asr_i = (i); \ __ia64_atomic_const(i) \ ? ia64_fetch_and_add(-__ia64_asr_i, &(v)->counter) \ : ia64_atomic64_sub(__ia64_asr_i, v); \ @@ -178,7 +178,7 @@ ATOMIC64_OPS(sub, -)
#define atomic64_fetch_add(i,v) \ ({ \ - long __ia64_aar_i = (i); \ + s64 __ia64_aar_i = (i); \ __ia64_atomic_const(i) \ ? ia64_fetchadd(__ia64_aar_i, &(v)->counter, acq) \ : ia64_atomic64_fetch_add(__ia64_aar_i, v); \ @@ -186,7 +186,7 @@ ATOMIC64_OPS(sub, -)
#define atomic64_fetch_sub(i,v) \ ({ \ - long __ia64_asr_i = (i); \ + s64 __ia64_asr_i = (i); \ __ia64_atomic_const(i) \ ? ia64_fetchadd(-__ia64_asr_i, &(v)->counter, acq) \ : ia64_atomic64_fetch_sub(__ia64_asr_i, v); \
As a step towards making the atomic64 API use consistent types treewide, let's have the mips atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long or __s64, matching the generated headers.
As atomic64_read() depends on the generic definition of atomic64_t, this still returns long on 64-bit. This will be converted in a subsequent patch.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: James Hogan jhogan@kernel.org
Cc: Paul Burton paul.burton@mips.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Ralf Baechle ralf@linux-mips.org
Cc: Will Deacon will.deacon@arm.com
---
 arch/mips/include/asm/atomic.h | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index 94096299fc56..9a82dd11c0e9 100644 --- a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -254,10 +254,10 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v) #define atomic64_set(v, i) WRITE_ONCE((v)->counter, (i))
#define ATOMIC64_OP(op, c_op, asm_op) \ -static __inline__ void atomic64_##op(long i, atomic64_t * v) \ +static __inline__ void atomic64_##op(s64 i, atomic64_t * v) \ { \ if (kernel_uses_llsc) { \ - long temp; \ + s64 temp; \ \ loongson_llsc_mb(); \ __asm__ __volatile__( \ @@ -280,12 +280,12 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v) \ }
#define ATOMIC64_OP_RETURN(op, c_op, asm_op) \ -static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \ +static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v) \ { \ - long result; \ + s64 result; \ \ if (kernel_uses_llsc) { \ - long temp; \ + s64 temp; \ \ loongson_llsc_mb(); \ __asm__ __volatile__( \ @@ -314,12 +314,12 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \ }
#define ATOMIC64_FETCH_OP(op, c_op, asm_op) \ -static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \ +static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v) \ { \ - long result; \ + s64 result; \ \ if (kernel_uses_llsc) { \ - long temp; \ + s64 temp; \ \ loongson_llsc_mb(); \ __asm__ __volatile__( \ @@ -386,14 +386,14 @@ ATOMIC64_OPS(xor, ^=, xor) * Atomically test @v and subtract @i if @v is greater or equal than @i. * The function returns the old value of @v minus @i. */ -static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v) +static __inline__ s64 atomic64_sub_if_positive(s64 i, atomic64_t * v) { - long result; + s64 result;
smp_mb__before_llsc();
if (kernel_uses_llsc) { - long temp; + s64 temp;
__asm__ __volatile__( " .set push \n"
As a step towards making the atomic64 API use consistent types treewide, let's have the powerpc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.
As atomic64_read() depends on the generic definition of atomic64_t, this still returns long on 64-bit. This will be converted in a subsequent patch.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Michael Ellerman mpe@ellerman.id.au
Cc: Paul Mackerras paulus@samba.org
Cc: Peter Zijlstra peterz@infradead.org
Cc: Will Deacon will.deacon@arm.com
---
 arch/powerpc/include/asm/atomic.h | 44 +++++++++++++++++++--------------------
 1 file changed, 22 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index 52eafaf74054..31c231ea56b7 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -297,24 +297,24 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v)
#define ATOMIC64_INIT(i) { (i) }
-static __inline__ long atomic64_read(const atomic64_t *v) +static __inline__ s64 atomic64_read(const atomic64_t *v) { - long t; + s64 t;
__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));
return t; }
-static __inline__ void atomic64_set(atomic64_t *v, long i) +static __inline__ void atomic64_set(atomic64_t *v, s64 i) { __asm__ __volatile__("std%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i)); }
#define ATOMIC64_OP(op, asm_op) \ -static __inline__ void atomic64_##op(long a, atomic64_t *v) \ +static __inline__ void atomic64_##op(s64 a, atomic64_t *v) \ { \ - long t; \ + s64 t; \ \ __asm__ __volatile__( \ "1: ldarx %0,0,%3 # atomic64_" #op "\n" \ @@ -327,10 +327,10 @@ static __inline__ void atomic64_##op(long a, atomic64_t *v) \ }
#define ATOMIC64_OP_RETURN_RELAXED(op, asm_op) \ -static inline long \ -atomic64_##op##_return_relaxed(long a, atomic64_t *v) \ +static inline s64 \ +atomic64_##op##_return_relaxed(s64 a, atomic64_t *v) \ { \ - long t; \ + s64 t; \ \ __asm__ __volatile__( \ "1: ldarx %0,0,%3 # atomic64_" #op "_return_relaxed\n" \ @@ -345,10 +345,10 @@ atomic64_##op##_return_relaxed(long a, atomic64_t *v) \ }
#define ATOMIC64_FETCH_OP_RELAXED(op, asm_op) \ -static inline long \ -atomic64_fetch_##op##_relaxed(long a, atomic64_t *v) \ +static inline s64 \ +atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v) \ { \ - long res, t; \ + s64 res, t; \ \ __asm__ __volatile__( \ "1: ldarx %0,0,%4 # atomic64_fetch_" #op "_relaxed\n" \ @@ -396,7 +396,7 @@ ATOMIC64_OPS(xor, xor)
static __inline__ void atomic64_inc(atomic64_t *v) { - long t; + s64 t;
__asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_inc\n\ @@ -409,9 +409,9 @@ static __inline__ void atomic64_inc(atomic64_t *v) } #define atomic64_inc atomic64_inc
-static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v) +static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v) { - long t; + s64 t;
__asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_inc_return_relaxed\n" @@ -427,7 +427,7 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
static __inline__ void atomic64_dec(atomic64_t *v) { - long t; + s64 t;
__asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_dec\n\ @@ -440,9 +440,9 @@ static __inline__ void atomic64_dec(atomic64_t *v) } #define atomic64_dec atomic64_dec
-static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v) +static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v) { - long t; + s64 t;
__asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_dec_return_relaxed\n" @@ -463,9 +463,9 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v) * Atomically test *v and decrement if it is greater than 0. * The function returns the old value of *v minus 1. */ -static __inline__ long atomic64_dec_if_positive(atomic64_t *v) +static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v) { - long t; + s64 t;
__asm__ __volatile__( PPC_ATOMIC_ENTRY_BARRIER @@ -502,9 +502,9 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. */ -static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) +static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { - long t; + s64 t;
__asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER @@ -534,7 +534,7 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) */ static __inline__ int atomic64_inc_not_zero(atomic64_t *v) { - long t1, t2; + s64 t1, t2;
__asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER
Mark Rutland mark.rutland@arm.com writes:
As a step towards making the atomic64 API use consistent types treewide, let's have the powerpc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.
As atomic64_read() depends on the generic definition of atomic64_t, this still returns long on 64-bit. This will be converted in a subsequent patch.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Michael Ellerman mpe@ellerman.id.au Cc: Paul Mackerras paulus@samba.org Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will.deacon@arm.com
arch/powerpc/include/asm/atomic.h | 44 +++++++++++++++++++-------------------- 1 file changed, 22 insertions(+), 22 deletions(-)
Conversion looks good to me.
Reviewed-by: Michael Ellerman mpe@ellerman.id.au (powerpc)
cheers
diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index 52eafaf74054..31c231ea56b7 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -297,24 +297,24 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v) #define ATOMIC64_INIT(i) { (i) } -static __inline__ long atomic64_read(const atomic64_t *v) +static __inline__ s64 atomic64_read(const atomic64_t *v) {
- long t;
+ s64 t;
__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter)); return t; } -static __inline__ void atomic64_set(atomic64_t *v, long i) +static __inline__ void atomic64_set(atomic64_t *v, s64 i) { __asm__ __volatile__("std%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i)); } #define ATOMIC64_OP(op, asm_op) \ -static __inline__ void atomic64_##op(long a, atomic64_t *v) \ +static __inline__ void atomic64_##op(s64 a, atomic64_t *v) \ { \
- long t; \
+ s64 t; \ \ __asm__ __volatile__( \
"1: ldarx %0,0,%3 # atomic64_" #op "\n" \ @@ -327,10 +327,10 @@ static __inline__ void atomic64_##op(long a, atomic64_t *v) \ } #define ATOMIC64_OP_RETURN_RELAXED(op, asm_op) \ -static inline long \ -atomic64_##op##_return_relaxed(long a, atomic64_t *v) \ +static inline s64 \ +atomic64_##op##_return_relaxed(s64 a, atomic64_t *v) \ { \
- long t; \
+ s64 t; \ \ __asm__ __volatile__( \
"1: ldarx %0,0,%3 # atomic64_" #op "_return_relaxed\n" \ @@ -345,10 +345,10 @@ atomic64_##op##_return_relaxed(long a, atomic64_t *v) \ } #define ATOMIC64_FETCH_OP_RELAXED(op, asm_op) \ -static inline long \ -atomic64_fetch_##op##_relaxed(long a, atomic64_t *v) \ +static inline s64 \ +atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v) \ { \
- long res, t; \
+ s64 res, t; \ \ __asm__ __volatile__( \
"1: ldarx %0,0,%4 # atomic64_fetch_" #op "_relaxed\n" \ @@ -396,7 +396,7 @@ ATOMIC64_OPS(xor, xor) static __inline__ void atomic64_inc(atomic64_t *v) {
- long t;
+ s64 t;
__asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_inc\n\ @@ -409,9 +409,9 @@ static __inline__ void atomic64_inc(atomic64_t *v) } #define atomic64_inc atomic64_inc -static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v) +static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v) {
- long t;
+ s64 t;
__asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_inc_return_relaxed\n" @@ -427,7 +427,7 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v) static __inline__ void atomic64_dec(atomic64_t *v) {
- long t;
+ s64 t;
__asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_dec\n\ @@ -440,9 +440,9 @@ static __inline__ void atomic64_dec(atomic64_t *v) } #define atomic64_dec atomic64_dec -static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v) +static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v) {
- long t;
+ s64 t;
__asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_dec_return_relaxed\n" @@ -463,9 +463,9 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
* Atomically test *v and decrement if it is greater than 0.
* The function returns the old value of *v minus 1.
*/ -static __inline__ long atomic64_dec_if_positive(atomic64_t *v) +static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v) {
- long t;
+ s64 t;
__asm__ __volatile__( PPC_ATOMIC_ENTRY_BARRIER @@ -502,9 +502,9 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
* Atomically adds @a to @v, so long as it was not @u.
* Returns the old value of @v.
*/ -static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) +static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) {
- long t;
+ s64 t;
__asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER @@ -534,7 +534,7 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) */ static __inline__ int atomic64_inc_not_zero(atomic64_t *v) {
- long t1, t2;
+ s64 t1, t2;
__asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER -- 2.11.0
Presently the riscv implementation of atomic64_sub_if_positive() takes a 32-bit offset value rather than a 64-bit offset value as it should do. Thus, if called with a 64-bit offset, the value will be unexpectedly truncated to 32 bits.
Fix this by taking the offset as a long rather than an int.
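To illustrate with a hypothetical caller (not kernel code): with the old 'int offset' prototype, the upper 32 bits of a 64-bit offset are silently dropped at the call boundary:

	atomic64_t v = ATOMIC64_INIT(0x200000000LL);	/* hypothetical value */
	s64 big = 0x100000001LL;			/* 2^32 + 1 */

	/* with 'int offset', big is truncated to 1, so this subtracts 1
	 * rather than 2^32 + 1: */
	atomic64_sub_if_positive(&v, big);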
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Albert Ou aou@eecs.berkeley.edu
Cc: Palmer Dabbelt palmer@sifive.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Will Deacon will.deacon@arm.com
Cc: stable@vger.kernel.org
---
 arch/riscv/include/asm/atomic.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 93826771b616..c9e18289d65c 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -336,7 +336,7 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset) #define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1)
#ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long atomic64_sub_if_positive(atomic64_t *v, int offset) +static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset) { long prev, rc;
On Wed, 22 May 2019 06:22:43 PDT (-0700), mark.rutland@arm.com wrote:
Presently the riscv implementation of atomic64_sub_if_positive() takes a 32-bit offset value rather than a 64-bit offset value as it should do. Thus, if called with a 64-bit offset, the value will be unexpectedly truncated to 32 bits.
Fix this by taking the offset as a long rather than an int.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Albert Ou aou@eecs.berkeley.edu Cc: Palmer Dabbelt palmer@sifive.com Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will.deacon@arm.com Cc: stable@vger.kernel.org
arch/riscv/include/asm/atomic.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 93826771b616..c9e18289d65c 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -336,7 +336,7 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset) #define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1)
#ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long atomic64_sub_if_positive(atomic64_t *v, int offset) +static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset) { long prev, rc;
Reviewed-by: Palmer Dabbelt palmer@sifive.com
Thanks!
As a step towards making the atomic64 API use consistent types treewide, let's have the s390 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.
As atomic64_read() depends on the generic definition of atomic64_t, this still returns long on 64-bit. This will be converted in a subsequent patch.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com
Cc: Albert Ou aou@eecs.berkeley.edu
Cc: Palmer Dabbelt palmer@sifive.com
Cc: Peter Zijlstra peterz@infradead.org
Cc: Will Deacon will.deacon@arm.com
---
 arch/riscv/include/asm/atomic.h | 44 +++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 21 deletions(-)
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index c9e18289d65c..bffebc57357d 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -42,11 +42,11 @@ static __always_inline void atomic_set(atomic_t *v, int i)
#ifndef CONFIG_GENERIC_ATOMIC64 #define ATOMIC64_INIT(i) { (i) } -static __always_inline long atomic64_read(const atomic64_t *v) +static __always_inline s64 atomic64_read(const atomic64_t *v) { return READ_ONCE(v->counter); } -static __always_inline void atomic64_set(atomic64_t *v, long i) +static __always_inline void atomic64_set(atomic64_t *v, s64 i) { WRITE_ONCE(v->counter, i); } @@ -70,11 +70,11 @@ void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \
#ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS(op, asm_op, I) \ - ATOMIC_OP (op, asm_op, I, w, int, ) + ATOMIC_OP (op, asm_op, I, w, int, ) #else #define ATOMIC_OPS(op, asm_op, I) \ - ATOMIC_OP (op, asm_op, I, w, int, ) \ - ATOMIC_OP (op, asm_op, I, d, long, 64) + ATOMIC_OP (op, asm_op, I, w, int, ) \ + ATOMIC_OP (op, asm_op, I, d, s64, 64) #endif
ATOMIC_OPS(add, add, i) @@ -131,14 +131,14 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v) \
#ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS(op, asm_op, c_op, I) \ - ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) + ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \ + ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) #else #define ATOMIC_OPS(op, asm_op, c_op, I) \ - ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \ - ATOMIC_FETCH_OP( op, asm_op, I, d, long, 64) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, long, 64) + ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \ + ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \ + ATOMIC_FETCH_OP( op, asm_op, I, d, s64, 64) \ + ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, s64, 64) #endif
ATOMIC_OPS(add, add, +, i) @@ -170,11 +170,11 @@ ATOMIC_OPS(sub, add, +, -i)
#ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS(op, asm_op, I) \ - ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) + ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) #else #define ATOMIC_OPS(op, asm_op, I) \ - ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \ - ATOMIC_FETCH_OP(op, asm_op, I, d, long, 64) + ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \ + ATOMIC_FETCH_OP(op, asm_op, I, d, s64, 64) #endif
ATOMIC_OPS(and, and, i) @@ -223,9 +223,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) #define atomic_fetch_add_unless atomic_fetch_add_unless
#ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) +static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { - long prev, rc; + s64 prev; + long rc;
__asm__ __volatile__ ( "0: lr.d %[p], %[c]\n" @@ -294,11 +295,11 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \
#ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS() \ - ATOMIC_OP( int, , 4) + ATOMIC_OP(int, , 4) #else #define ATOMIC_OPS() \ - ATOMIC_OP( int, , 4) \ - ATOMIC_OP(long, 64, 8) + ATOMIC_OP(int, , 4) \ + ATOMIC_OP(s64, 64, 8) #endif
ATOMIC_OPS() @@ -336,9 +337,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset) #define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1)
#ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset) +static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset) { - long prev, rc; + s64 prev; + long rc;
__asm__ __volatile__ ( "0: lr.d %[p], %[c]\n"
On Wed, 22 May 2019 06:22:44 PDT (-0700), mark.rutland@arm.com wrote:
As a step towards making the atomic64 API use consistent types treewide, let's have the s390 atomic64 implementation use s64 as the underlying
and apparently the RISC-V one as well? :)
type for atomic64_t, rather than long, matching the generated headers.
As atomic64_read() depends on the generic definition of atomic64_t, this still returns long on 64-bit. This will be converted in a subsequent patch.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Albert Ou aou@eecs.berkeley.edu Cc: Palmer Dabbelt palmer@sifive.com Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will.deacon@arm.com
arch/riscv/include/asm/atomic.h | 44 +++++++++++++++++++++-------------------- 1 file changed, 23 insertions(+), 21 deletions(-)
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index c9e18289d65c..bffebc57357d 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -42,11 +42,11 @@ static __always_inline void atomic_set(atomic_t *v, int i)
#ifndef CONFIG_GENERIC_ATOMIC64 #define ATOMIC64_INIT(i) { (i) } -static __always_inline long atomic64_read(const atomic64_t *v) +static __always_inline s64 atomic64_read(const atomic64_t *v) { return READ_ONCE(v->counter); } -static __always_inline void atomic64_set(atomic64_t *v, long i) +static __always_inline void atomic64_set(atomic64_t *v, s64 i) { WRITE_ONCE(v->counter, i); } @@ -70,11 +70,11 @@ void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \
#ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS(op, asm_op, I) \
ATOMIC_OP (op, asm_op, I, w, int, )
ATOMIC_OP (op, asm_op, I, w, int, )
#else #define ATOMIC_OPS(op, asm_op, I) \
ATOMIC_OP (op, asm_op, I, w, int, ) \
ATOMIC_OP (op, asm_op, I, d, long, 64)
ATOMIC_OP (op, asm_op, I, w, int, ) \
ATOMIC_OP (op, asm_op, I, d, s64, 64)
#endif
ATOMIC_OPS(add, add, i) @@ -131,14 +131,14 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v) \
#ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS(op, asm_op, c_op, I) \
ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, )
ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, )
#else #define ATOMIC_OPS(op, asm_op, c_op, I) \
ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \
ATOMIC_FETCH_OP( op, asm_op, I, d, long, 64) \
ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, long, 64)
ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \
ATOMIC_FETCH_OP( op, asm_op, I, d, s64, 64) \
ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, s64, 64)
#endif
ATOMIC_OPS(add, add, +, i) @@ -170,11 +170,11 @@ ATOMIC_OPS(sub, add, +, -i)
#ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS(op, asm_op, I) \
ATOMIC_FETCH_OP(op, asm_op, I, w, int, )
ATOMIC_FETCH_OP(op, asm_op, I, w, int, )
#else #define ATOMIC_OPS(op, asm_op, I) \
ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \
ATOMIC_FETCH_OP(op, asm_op, I, d, long, 64)
ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \
ATOMIC_FETCH_OP(op, asm_op, I, d, s64, 64)
#endif
ATOMIC_OPS(and, and, i) @@ -223,9 +223,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) #define atomic_fetch_add_unless atomic_fetch_add_unless
#ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) +static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) {
long prev, rc;
s64 prev;
long rc;
__asm__ __volatile__ ( "0: lr.d %[p], %[c]\n"
@@ -294,11 +295,11 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \
#ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS() \
- ATOMIC_OP( int, , 4)
+ ATOMIC_OP(int, , 4)
#else #define ATOMIC_OPS() \
- ATOMIC_OP( int, , 4) \
- ATOMIC_OP(long, 64, 8)
+ ATOMIC_OP(int, , 4) \
+ ATOMIC_OP(s64, 64, 8)
#endif
ATOMIC_OPS() @@ -336,9 +337,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset) #define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1)
#ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset) +static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset) {
-	long prev, rc;
+	s64 prev;
+	long rc;
__asm__ __volatile__ ( "0: lr.d %[p], %[c]\n"
Reviwed-by: Palmer Dabbelt palmer@sifive.com
Thanks!
On Wed, May 22, 2019 at 12:06:31PM -0700, Palmer Dabbelt wrote:
On Wed, 22 May 2019 06:22:44 PDT (-0700), mark.rutland@arm.com wrote:
As a step towards making the atomic64 API use consistent types treewide, let's have the s390 atomic64 implementation use s64 as the underlying
and apparently the RISC-V one as well? :)
Heh. You can guess which commit message I wrote first...
Reviwed-by: Palmer Dabbelt palmer@sifive.com
Cheers! I'll add an extra 'e' when I fold this in. :)
Thanks, Mark.
As a step towards making the atomic64 API use consistent types treewide, let's have the s390 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.
As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch.
The s390-internal __atomic64_*() ops are also used by the s390 bitops, and expect pointers to long. Since atomic64_t::counter will be converted to s64 in a subsequent patch, pointers to this are explicitly cast to pointers to long when passed to __atomic64_*() ops.
Otherwise, there should be no functional change as a result of this patch.
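To illustrate the pattern (a simplified excerpt of the diff below; both long and s64 are 64 bits wide on s390, so the casts are purely a type-level change):

	static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v)
	{
		/* the s390-internal helper expects a pointer to long */
		return __atomic64_add_barrier(i, (long *)&v->counter);
	}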
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Heiko Carstens heiko.carstens@de.ibm.com Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will.deacon@arm.com --- arch/s390/include/asm/atomic.h | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h index fd20ab5d4cf7..491ad53a0d4e 100644 --- a/arch/s390/include/asm/atomic.h +++ b/arch/s390/include/asm/atomic.h @@ -84,9 +84,9 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
#define ATOMIC64_INIT(i) { (i) }
-static inline long atomic64_read(const atomic64_t *v) +static inline s64 atomic64_read(const atomic64_t *v) { - long c; + s64 c;
asm volatile( " lg %0,%1\n" @@ -94,49 +94,49 @@ static inline long atomic64_read(const atomic64_t *v) return c; }
-static inline void atomic64_set(atomic64_t *v, long i) +static inline void atomic64_set(atomic64_t *v, s64 i) { asm volatile( " stg %1,%0\n" : "=Q" (v->counter) : "d" (i)); }
-static inline long atomic64_add_return(long i, atomic64_t *v) +static inline s64 atomic64_add_return(s64 i, atomic64_t *v) { - return __atomic64_add_barrier(i, &v->counter) + i; + return __atomic64_add_barrier(i, (long *)&v->counter) + i; }
-static inline long atomic64_fetch_add(long i, atomic64_t *v) +static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) { - return __atomic64_add_barrier(i, &v->counter); + return __atomic64_add_barrier(i, (long *)&v->counter); }
-static inline void atomic64_add(long i, atomic64_t *v) +static inline void atomic64_add(s64 i, atomic64_t *v) { #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES if (__builtin_constant_p(i) && (i > -129) && (i < 128)) { - __atomic64_add_const(i, &v->counter); + __atomic64_add_const(i, (long *)&v->counter); return; } #endif - __atomic64_add(i, &v->counter); + __atomic64_add(i, (long *)&v->counter); }
#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
-static inline long atomic64_cmpxchg(atomic64_t *v, long old, long new) +static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { - return __atomic64_cmpxchg(&v->counter, old, new); + return __atomic64_cmpxchg((long *)&v->counter, old, new); }
#define ATOMIC64_OPS(op) \ -static inline void atomic64_##op(long i, atomic64_t *v) \ +static inline void atomic64_##op(s64 i, atomic64_t *v) \ { \ - __atomic64_##op(i, &v->counter); \ + __atomic64_##op(i, (long *)&v->counter); \ } \ -static inline long atomic64_fetch_##op(long i, atomic64_t *v) \ +static inline s64 atomic64_fetch_##op(s64 i, atomic64_t *v) \ { \ - return __atomic64_##op##_barrier(i, &v->counter); \ + return __atomic64_##op##_barrier(i, (long *)&v->counter); \ }
ATOMIC64_OPS(and) @@ -145,8 +145,8 @@ ATOMIC64_OPS(xor)
#undef ATOMIC64_OPS
-#define atomic64_sub_return(_i, _v) atomic64_add_return(-(long)(_i), _v) -#define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(long)(_i), _v) -#define atomic64_sub(_i, _v) atomic64_add(-(long)(_i), _v) +#define atomic64_sub_return(_i, _v) atomic64_add_return(-(s64)(_i), _v) +#define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(s64)(_i), _v) +#define atomic64_sub(_i, _v) atomic64_add(-(s64)(_i), _v)
#endif /* __ARCH_S390_ATOMIC__ */
As a step towards making the atomic64 API use consistent types treewide, let's have the sparc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.
As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch.
Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: David S. Miller davem@davemloft.net Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will.deacon@arm.com --- arch/sparc/include/asm/atomic_64.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index 6963482c81d8..b60448397d4f 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -23,15 +23,15 @@
#define ATOMIC_OP(op) \ void atomic_##op(int, atomic_t *); \ -void atomic64_##op(long, atomic64_t *); +void atomic64_##op(s64, atomic64_t *);
#define ATOMIC_OP_RETURN(op) \ int atomic_##op##_return(int, atomic_t *); \ -long atomic64_##op##_return(long, atomic64_t *); +s64 atomic64_##op##_return(s64, atomic64_t *);
#define ATOMIC_FETCH_OP(op) \ int atomic_fetch_##op(int, atomic_t *); \ -long atomic64_fetch_##op(long, atomic64_t *); +s64 atomic64_fetch_##op(s64, atomic64_t *);
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_OP_RETURN(op) ATOMIC_FETCH_OP(op)
@@ -61,7 +61,7 @@ static inline int atomic_xchg(atomic_t *v, int new) ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
-long atomic64_dec_if_positive(atomic64_t *v); +s64 atomic64_dec_if_positive(atomic64_t *v); #define atomic64_dec_if_positive atomic64_dec_if_positive
#endif /* !(__ARCH_SPARC64_ATOMIC__) */
As a step towards making the atomic64 API use consistent types treewide, let's have the x86 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long or long long, matching the generated headers.
Note that the x86 arch_atomic64 implementation is already wrapped by the generic instrumented atomic64 implementation, which uses s64 consistently.
Otherwise, there should be no functional change as a result of this patch.
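For reference, the generic instrumented wrapper mentioned above is roughly of this shape (a simplified sketch of include/asm-generic/atomic-instrumented.h from this era):

	static inline s64 atomic64_read(const atomic64_t *v)
	{
		kasan_check_read(v, sizeof(*v));
		return arch_atomic64_read(v);
	}

so only the arch_atomic64_*() implementations need updating here.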
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Borislav Petkov bp@alien8.de Cc: Ingo Molnar mingo@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Russell King linux@armlinux.org.uk Cc: Thomas Gleixner tglx@linutronix.de Cc: Will Deacon will.deacon@arm.com --- arch/x86/include/asm/atomic64_32.h | 66 ++++++++++++++++++-------------------- arch/x86/include/asm/atomic64_64.h | 38 +++++++++++----------- 2 files changed, 51 insertions(+), 53 deletions(-)
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index 6a5b0ec460da..52cfaecb13f9 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -9,7 +9,7 @@ /* An 64bit atomic type */
typedef struct { - u64 __aligned(8) counter; + s64 __aligned(8) counter; } atomic64_t;
#define ATOMIC64_INIT(val) { (val) } @@ -71,8 +71,7 @@ ATOMIC64_DECL(add_unless); * the old value. */
-static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o, - long long n) +static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n) { return arch_cmpxchg64(&v->counter, o, n); } @@ -85,9 +84,9 @@ static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o, * Atomically xchgs the value of @v to @n and returns * the old value. */ -static inline long long arch_atomic64_xchg(atomic64_t *v, long long n) +static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n) { - long long o; + s64 o; unsigned high = (unsigned)(n >> 32); unsigned low = (unsigned)n; alternative_atomic64(xchg, "=&A" (o), @@ -103,7 +102,7 @@ static inline long long arch_atomic64_xchg(atomic64_t *v, long long n) * * Atomically sets the value of @v to @n. */ -static inline void arch_atomic64_set(atomic64_t *v, long long i) +static inline void arch_atomic64_set(atomic64_t *v, s64 i) { unsigned high = (unsigned)(i >> 32); unsigned low = (unsigned)i; @@ -118,9 +117,9 @@ static inline void arch_atomic64_set(atomic64_t *v, long long i) * * Atomically reads the value of @v and returns it. */ -static inline long long arch_atomic64_read(const atomic64_t *v) +static inline s64 arch_atomic64_read(const atomic64_t *v) { - long long r; + s64 r; alternative_atomic64(read, "=&A" (r), "c" (v) : "memory"); return r; } @@ -132,7 +131,7 @@ static inline long long arch_atomic64_read(const atomic64_t *v) * * Atomically adds @i to @v and returns @i + *@v */ -static inline long long arch_atomic64_add_return(long long i, atomic64_t *v) +static inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v) { alternative_atomic64(add_return, ASM_OUTPUT2("+A" (i), "+c" (v)), @@ -143,7 +142,7 @@ static inline long long arch_atomic64_add_return(long long i, atomic64_t *v) /* * Other variants with different arithmetic operators: */ -static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v) +static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v) { alternative_atomic64(sub_return, ASM_OUTPUT2("+A" (i), "+c" (v)), @@ -151,18 +150,18 @@ static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v) return i; }
-static inline long long arch_atomic64_inc_return(atomic64_t *v) +static inline s64 arch_atomic64_inc_return(atomic64_t *v) { - long long a; + s64 a; alternative_atomic64(inc_return, "=&A" (a), "S" (v) : "memory", "ecx"); return a; } #define arch_atomic64_inc_return arch_atomic64_inc_return
-static inline long long arch_atomic64_dec_return(atomic64_t *v) +static inline s64 arch_atomic64_dec_return(atomic64_t *v) { - long long a; + s64 a; alternative_atomic64(dec_return, "=&A" (a), "S" (v) : "memory", "ecx"); return a; @@ -176,7 +175,7 @@ static inline long long arch_atomic64_dec_return(atomic64_t *v) * * Atomically adds @i to @v. */ -static inline long long arch_atomic64_add(long long i, atomic64_t *v) +static inline s64 arch_atomic64_add(s64 i, atomic64_t *v) { __alternative_atomic64(add, add_return, ASM_OUTPUT2("+A" (i), "+c" (v)), @@ -191,7 +190,7 @@ static inline long long arch_atomic64_add(long long i, atomic64_t *v) * * Atomically subtracts @i from @v. */ -static inline long long arch_atomic64_sub(long long i, atomic64_t *v) +static inline s64 arch_atomic64_sub(s64 i, atomic64_t *v) { __alternative_atomic64(sub, sub_return, ASM_OUTPUT2("+A" (i), "+c" (v)), @@ -234,8 +233,7 @@ static inline void arch_atomic64_dec(atomic64_t *v) * Atomically adds @a to @v, so long as it was not @u. * Returns non-zero if the add was done, zero otherwise. */ -static inline int arch_atomic64_add_unless(atomic64_t *v, long long a, - long long u) +static inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { unsigned low = (unsigned)u; unsigned high = (unsigned)(u >> 32); @@ -254,9 +252,9 @@ static inline int arch_atomic64_inc_not_zero(atomic64_t *v) } #define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero
-static inline long long arch_atomic64_dec_if_positive(atomic64_t *v) +static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v) { - long long r; + s64 r; alternative_atomic64(dec_if_positive, "=&A" (r), "S" (v) : "ecx", "memory"); return r; @@ -266,17 +264,17 @@ static inline long long arch_atomic64_dec_if_positive(atomic64_t *v) #undef alternative_atomic64 #undef __alternative_atomic64
-static inline void arch_atomic64_and(long long i, atomic64_t *v) +static inline void arch_atomic64_and(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0;
while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c) c = old; }
-static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0;
while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c) c = old; @@ -284,17 +282,17 @@ static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v) return old; }
-static inline void arch_atomic64_or(long long i, atomic64_t *v) +static inline void arch_atomic64_or(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0;
while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c) c = old; }
-static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0;
while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c) c = old; @@ -302,17 +300,17 @@ static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v) return old; }
-static inline void arch_atomic64_xor(long long i, atomic64_t *v) +static inline void arch_atomic64_xor(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0;
while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c) c = old; }
-static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0;
while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c) c = old; @@ -320,9 +318,9 @@ static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v) return old; }
-static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0;
while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c) c = old; diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index dadc20adba21..703b7dfd45e0 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -17,7 +17,7 @@ * Atomically reads the value of @v. * Doesn't imply a read memory barrier. */ -static inline long arch_atomic64_read(const atomic64_t *v) +static inline s64 arch_atomic64_read(const atomic64_t *v) { return READ_ONCE((v)->counter); } @@ -29,7 +29,7 @@ static inline long arch_atomic64_read(const atomic64_t *v) * * Atomically sets the value of @v to @i. */ -static inline void arch_atomic64_set(atomic64_t *v, long i) +static inline void arch_atomic64_set(atomic64_t *v, s64 i) { WRITE_ONCE(v->counter, i); } @@ -41,7 +41,7 @@ static inline void arch_atomic64_set(atomic64_t *v, long i) * * Atomically adds @i to @v. */ -static __always_inline void arch_atomic64_add(long i, atomic64_t *v) +static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "addq %1,%0" : "=m" (v->counter) @@ -55,7 +55,7 @@ static __always_inline void arch_atomic64_add(long i, atomic64_t *v) * * Atomically subtracts @i from @v. */ -static inline void arch_atomic64_sub(long i, atomic64_t *v) +static inline void arch_atomic64_sub(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "subq %1,%0" : "=m" (v->counter) @@ -71,7 +71,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v) * true if the result is zero, or false for all * other cases. */ -static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v) +static inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i); } @@ -142,7 +142,7 @@ static inline bool arch_atomic64_inc_and_test(atomic64_t *v) * if the result is negative, or false when * result is greater than or equal to zero. */ -static inline bool arch_atomic64_add_negative(long i, atomic64_t *v) +static inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i); } @@ -155,43 +155,43 @@ static inline bool arch_atomic64_add_negative(long i, atomic64_t *v) * * Atomically adds @i to @v and returns @i + @v */ -static __always_inline long arch_atomic64_add_return(long i, atomic64_t *v) +static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v) { return i + xadd(&v->counter, i); }
-static inline long arch_atomic64_sub_return(long i, atomic64_t *v) +static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v) { return arch_atomic64_add_return(-i, v); }
-static inline long arch_atomic64_fetch_add(long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v) { return xadd(&v->counter, i); }
-static inline long arch_atomic64_fetch_sub(long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_sub(s64 i, atomic64_t *v) { return xadd(&v->counter, -i); }
-static inline long arch_atomic64_cmpxchg(atomic64_t *v, long old, long new) +static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { return arch_cmpxchg(&v->counter, old, new); }
#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg -static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, long new) +static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { return try_cmpxchg(&v->counter, old, new); }
-static inline long arch_atomic64_xchg(atomic64_t *v, long new) +static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 new) { return arch_xchg(&v->counter, new); }
-static inline void arch_atomic64_and(long i, atomic64_t *v) +static inline void arch_atomic64_and(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "andq %1,%0" : "+m" (v->counter) @@ -199,7 +199,7 @@ static inline void arch_atomic64_and(long i, atomic64_t *v) : "memory"); }
-static inline long arch_atomic64_fetch_and(long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v) { s64 val = arch_atomic64_read(v);
@@ -208,7 +208,7 @@ static inline long arch_atomic64_fetch_and(long i, atomic64_t *v) return val; }
-static inline void arch_atomic64_or(long i, atomic64_t *v) +static inline void arch_atomic64_or(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "orq %1,%0" : "+m" (v->counter) @@ -216,7 +216,7 @@ static inline void arch_atomic64_or(long i, atomic64_t *v) : "memory"); }
-static inline long arch_atomic64_fetch_or(long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v) { s64 val = arch_atomic64_read(v);
@@ -225,7 +225,7 @@ static inline long arch_atomic64_fetch_or(long i, atomic64_t *v) return val; }
-static inline void arch_atomic64_xor(long i, atomic64_t *v) +static inline void arch_atomic64_xor(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "xorq %1,%0" : "+m" (v->counter) @@ -233,7 +233,7 @@ static inline void arch_atomic64_xor(long i, atomic64_t *v) : "memory"); }
-static inline long arch_atomic64_fetch_xor(long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v) { s64 val = arch_atomic64_read(v);
Now that all architectures use s64 consistently as the base type for the atomic64 API, let's have the CONFIG_64BIT definition of atomic64_t use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.
On architectures where atomic64_read(v) is READ_ONCE(v->counter), this patch will cause the return type of atomic64_read() to be s64.
As of this patch, the atomic64 API can be relied upon to consistently return s64 where a value rather than boolean condition is returned. This should make code more robust, and simpler, allowing for the removal of casts previously required to ensure consistent types.
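For example (with a hypothetical counter; the nx-842 and s390/pci patches later in this series drop exactly this kind of cast):

	atomic64_t nbytes;

	/* previously needed, as atomic64_read() might return long or long long: */
	pr_info("bytes: %lld\n", (s64)atomic64_read(&nbytes));

	/* now that atomic64_read() returns s64 everywhere, the cast can go: */
	pr_info("bytes: %lld\n", atomic64_read(&nbytes));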
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will.deacon@arm.com --- include/linux/types.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/types.h b/include/linux/types.h index 231114ae38f4..05030f608be3 100644 --- a/include/linux/types.h +++ b/include/linux/types.h @@ -174,7 +174,7 @@ typedef struct {
#ifdef CONFIG_64BIT typedef struct { - long counter; + s64 counter; } atomic64_t; #endif
Now that atomic64_read() returns s64 consistently, we don't need to explicitly cast its return value. Drop the redundant casts.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Herbert Xu herbert@gondor.apana.org.au Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will.deacon@arm.com --- drivers/crypto/nx/nx-842-pseries.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c index 9432e9e42afe..5cf77729a438 100644 --- a/drivers/crypto/nx/nx-842-pseries.c +++ b/drivers/crypto/nx/nx-842-pseries.c @@ -870,7 +870,7 @@ static ssize_t nx842_##_name##_show(struct device *dev, \ local_devdata = rcu_dereference(devdata); \ if (local_devdata) \ p = snprintf(buf, PAGE_SIZE, "%lld\n", \ - (s64)atomic64_read(&local_devdata->counters->_name)); \ + atomic64_read(&local_devdata->counters->_name)); \ rcu_read_unlock(); \ return p; \ } @@ -924,7 +924,7 @@ static ssize_t nx842_timehist_show(struct device *dev, for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) { bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n", i ? (2<<(i-1)) : 0, (2<<i)-1, - (s64)atomic64_read(×[i])); + atomic64_read(×[i])); bytes_remain -= bytes; p += bytes; } @@ -932,7 +932,7 @@ static ssize_t nx842_timehist_show(struct device *dev, * 2<<(NX842_HIST_SLOTS - 2) us */ bytes = snprintf(p, bytes_remain, "%uus - :\t%lld\n", 2<<(NX842_HIST_SLOTS - 2), - (s64)atomic64_read(×[(NX842_HIST_SLOTS - 1)])); + atomic64_read(×[(NX842_HIST_SLOTS - 1)])); p += bytes;
rcu_read_unlock();
Now that atomic64_read() returns s64 consistently, we don't need to explicitly cast its return value. Drop the redundant casts.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Heiko Carstens heiko.carstens@de.ibm.com Cc: Peter Zijlstra peterz@infradead.org Cc: Will Deacon will.deacon@arm.com --- arch/s390/pci/pci_debug.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c index 45eccf79e990..3408c0df3ebf 100644 --- a/arch/s390/pci/pci_debug.c +++ b/arch/s390/pci/pci_debug.c @@ -75,7 +75,7 @@ static void pci_sw_counter_show(struct seq_file *m)
for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++) seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i], - (s64)atomic64_read(counter)); + atomic64_read(counter)); }
static int pci_perf_show(struct seq_file *m, void *v)
On Wed, May 22, 2019 at 3:23 PM Mark Rutland mark.rutland@arm.com wrote:
Currently architectures return inconsistent types for atomic64 ops. Some return long (e.g. powerpc), some return long long (e.g. arc), and some return s64 (e.g. x86).
This is a bit messy, and causes unnecessary pain (e.g. as values must be cast before they can be printed [1]).
This series reworks all the atomic64 implementations to use s64 as the base type for atomic64_t (as discussed [2]), and to ensure that this type is consistently used for parameters and return values in the API, avoiding further problems in this area.
This series (based on v5.1-rc1) can also be found in my atomics/type-cleanup branch [3] on kernel.org.
Nice cleanup!
I've provided an explicit Ack for the asm-generic patch if someone wants to pick up the entire series, but I can also put it all into my asm-generic tree if you want, after more people have had a chance to take a look.
Arnd
On Wed, May 22, 2019 at 11:18:59PM +0200, Arnd Bergmann wrote:
On Wed, May 22, 2019 at 3:23 PM Mark Rutland mark.rutland@arm.com wrote:
Currently architectures return inconsistent types for atomic64 ops. Some return long (e.g. powerpc), some return long long (e.g. arc), and some return s64 (e.g. x86).
This is a bit messy, and causes unnecessary pain (e.g. as values must be cast before they can be printed [1]).
This series reworks all the atomic64 implementations to use s64 as the base type for atomic64_t (as discussed [2]), and to ensure that this type is consistently used for parameters and return values in the API, avoiding further problems in this area.
This series (based on v5.1-rc1) can also be found in my atomics/type-cleanup branch [3] on kernel.org.
Nice cleanup!
I've provided an explicit Ack for the asm-generic patch if someone wants to pick up the entire series, but I can also put it all into my asm-generic tree if you want, after more people have had a chance to take a look.
Thanks!
I had assumed that this would go through the tip tree, as previous atomic rework had, but I have no preference as to how this gets merged.
I'm not sure what the policy is, so I'll leave it to Peter and Will to say.
Mark.
Hi Mark,
On Wed, May 22, 2019 at 02:22:32PM +0100, Mark Rutland wrote:
Currently architectures return inconsistent types for atomic64 ops. Some return long (e.g. powerpc), some return long long (e.g. arc), and some return s64 (e.g. x86).
(only partially related, but probably worth asking:)
While reading the series, I realized that the following expression:
atomic64_t v; ... typeof(v.counter) my_val = atomic64_set(&v, VAL);
is a valid expression on some architectures (in part., on architectures which #define atomic64_set() to WRITE_ONCE()) but is invalid on others. (This is due to the fact that WRITE_ONCE() can be used as an rvalue in the above assignment; TBH, I don't know the reason for having such an rvalue?)
IIUC, similar considerations hold for atomic_set().
The question is whether this is a known/"expected" inconsistency in the implementation of atomic64_set() or if this would also need to be fixed /addressed (say in a different patchset)?
Thanks, Andrea
On Thu, May 23, 2019 at 10:30:13AM +0200, Andrea Parri wrote:
Hi Mark,
Hi Andrea,
On Wed, May 22, 2019 at 02:22:32PM +0100, Mark Rutland wrote:
Currently architectures return inconsistent types for atomic64 ops. Some return long (e.g. powerpc), some return long long (e.g. arc), and some return s64 (e.g. x86).
(only partially related, but probably worth asking:)
While reading the series, I realized that the following expression:
atomic64_t v; ... typeof(v.counter) my_val = atomic64_set(&v, VAL);
is a valid expression on some architectures (in part., on architectures which #define atomic64_set() to WRITE_ONCE()) but is invalid on others. (This is due to the fact that WRITE_ONCE() can be used as an rvalue in the above assignment; TBH, I don't know the reason for having such an rvalue?)
IIUC, similar considerations hold for atomic_set().
The question is whether this is a known/"expected" inconsistency in the implementation of atomic64_set() or if this would also need to be fixed /addressed (say in a different patchset)?
In either case, I don't think the intent is that they should be used that way, and from a quick scan, I can only find a single relevant instance today:
[mark@lakrids:~/src/linux]% git grep '(return|=)\s+atomic(64)?_set' include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val); include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
[mark@lakrids:~/src/linux]% git grep '=\s+atomic_set' | wc -l 0 [mark@lakrids:~/src/linux]% git grep '=\s+atomic64_set' | wc -l 0
Any architectures implementing arch_atomic_* will have both of these functions returning void. Currently that's x86 and arm64, but (time permitting) I intend to migrate other architectures, so I guess we'll have to fix the above up as required.
I think it's best to avoid the construct above.
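e.g. (a hypothetical sketch) rather than consuming a 'result' from atomic64_set():

	atomic64_t v;
	s64 my_val;

	atomic64_set(&v, VAL);
	my_val = atomic64_read(&v);	/* or simply use VAL directly */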
Thanks, Mark.
While reading the series, I realized that the following expression:
atomic64_t v; ... typeof(v.counter) my_val = atomic64_set(&v, VAL);
is a valid expression on some architectures (in part., on architectures which #define atomic64_set() to WRITE_ONCE()) but is invalid on others. (This is due to the fact that WRITE_ONCE() can be used as an rvalue in the above assignment; TBH, I don't know the reason for having such an rvalue?)
IIUC, similar considerations hold for atomic_set().
The question is whether this is a known/"expected" inconsistency in the implementation of atomic64_set() or if this would also need to be fixed /addressed (say in a different patchset)?
In either case, I don't think the intent is that they should be used that way, and from a quick scan, I can only find a single relevant instance today:
[mark@lakrids:~/src/linux]% git grep '(return|=)\s+atomic(64)?_set' include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val); include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
[mark@lakrids:~/src/linux]% git grep '=\s+atomic_set' | wc -l 0 [mark@lakrids:~/src/linux]% git grep '=\s+atomic64_set' | wc -l 0
Any architectures implementing arch_atomic_* will have both of these functions returning void. Currently that's x86 and arm64, but (time permitting) I intend to migrate other architectures, so I guess we'll have to fix the above up as required.
I think it's best to avoid the construct above.
Thank you for the clarification, Mark. I agree with you that it'd be better to avoid such constructs. (FWIW, it is not currently possible to use them in litmus tests for the LKMM...)
Thanks, Andrea
On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:
[mark@lakrids:~/src/linux]% git grep '(return|=)\s+atomic(64)?_set' include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val); include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
Oh boy, what a load of crap you just did find.
How about something like the below? I've not read how that buffer is used, but the below preserves all broken without using atomic*_t.
--- diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h index 0c06178e4985..8ee472118f54 100644 --- a/include/linux/vmw_vmci_defs.h +++ b/include/linux/vmw_vmci_defs.h @@ -438,8 +438,8 @@ enum { struct vmci_queue_header { /* All fields are 64bit and aligned. */ struct vmci_handle handle; /* Identifier. */ - atomic64_t producer_tail; /* Offset in this queue. */ - atomic64_t consumer_head; /* Offset in peer queue. */ + u64 producer_tail; /* Offset in this queue. */ + u64 consumer_head; /* Offset in peer queue. */ };
/* @@ -740,13 +740,9 @@ static inline void *vmci_event_data_payload(struct vmci_event_data *ev_data) * prefix will be used, so correctness isn't an issue, but using a * 64bit operation still adds unnecessary overhead. */ -static inline u64 vmci_q_read_pointer(atomic64_t *var) +static inline u64 vmci_q_read_pointer(u64 *var) { -#if defined(CONFIG_X86_32) - return atomic_read((atomic_t *)var); -#else - return atomic64_read(var); -#endif + return READ_ONCE(*(unsigned long *)var); }
/* @@ -755,23 +751,17 @@ static inline u64 vmci_q_read_pointer(atomic64_t *var) * never exceeds a 32bit value in this case. On 32bit SMP, using a * locked cmpxchg8b adds unnecessary overhead. */ -static inline void vmci_q_set_pointer(atomic64_t *var, - u64 new_val) +static inline void vmci_q_set_pointer(u64 *var, u64 new_val) { -#if defined(CONFIG_X86_32) - return atomic_set((atomic_t *)var, (u32)new_val); -#else - return atomic64_set(var, new_val); -#endif + /* XXX buggered on big-endian */ + WRITE_ONCE(*(unsigned long *)var, (unsigned long)new_val); }
/* * Helper to add a given offset to a head or tail pointer. Wraps the * value of the pointer around the max size of the queue. */ -static inline void vmci_qp_add_pointer(atomic64_t *var, - size_t add, - u64 size) +static inline void vmci_qp_add_pointer(u64 *var, size_t add, u64 size) { u64 new_val = vmci_q_read_pointer(var);
@@ -848,8 +838,8 @@ static inline void vmci_q_header_init(struct vmci_queue_header *q_header, const struct vmci_handle handle) { q_header->handle = handle; - atomic64_set(&q_header->producer_tail, 0); - atomic64_set(&q_header->consumer_head, 0); + q_header->producer_tail = 0; + q_header->consumer_head = 0; }
/*
On Fri, May 24, 2019 at 12:37:31PM +0200, Peter Zijlstra wrote:
On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:
[mark@lakrids:~/src/linux]% git grep '(return|=)\s+atomic(64)?_set' include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val); include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
Oh boy, what a load of crap you just did find.
How about something like the below? I've not read how that buffer is used, but the below preserves all broken without using atomic*_t.
Clarified by something along these lines?
--- Documentation/atomic_t.txt | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt index dca3fb0554db..125c95ddbbc0 100644 --- a/Documentation/atomic_t.txt +++ b/Documentation/atomic_t.txt @@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release() respectively.
+Therefore, if you find yourself only using the Non-RMW operations of atomic_t, +you do not in fact need atomic_t at all and are doing it wrong. + The one detail to this is that atomic_set{}() should be observable to the RMW ops. That is:
On Fri, May 24, 2019 at 01:18:07PM +0200, Peter Zijlstra wrote:
On Fri, May 24, 2019 at 12:37:31PM +0200, Peter Zijlstra wrote:
On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:
[mark@lakrids:~/src/linux]% git grep '(return|=)\s+atomic(64)?_set' include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val); include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
Oh boy, what a load of crap you just did find.
How about something like the below? I've not read how that buffer is used, but the below preserves all broken without using atomic*_t.
Clarified by something along these lines?
Documentation/atomic_t.txt | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt index dca3fb0554db..125c95ddbbc0 100644 --- a/Documentation/atomic_t.txt +++ b/Documentation/atomic_t.txt @@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release() respectively. +Therefore, if you find yourself only using the Non-RMW operations of atomic_t, +you do not in fact need atomic_t at all and are doing it wrong.
The one detail to this is that atomic_set{}() should be observable to the RMW ops. That is:
I like it!
Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
On Fri, May 24, 2019 at 01:18:07PM +0200, Peter Zijlstra wrote:
On Fri, May 24, 2019 at 12:37:31PM +0200, Peter Zijlstra wrote:
On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:
[mark@lakrids:~/src/linux]% git grep '(return|=)\s+atomic(64)?_set' include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val); include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
Oh boy, what a load of crap you just did find.
How about something like the below? I've not read how that buffer is used, but the below preserves all broken without using atomic*_t.
Clarified by something along these lines?
Documentation/atomic_t.txt | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt index dca3fb0554db..125c95ddbbc0 100644 --- a/Documentation/atomic_t.txt +++ b/Documentation/atomic_t.txt @@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release() respectively.
Not sure you need a new paragraph here.
+Therefore, if you find yourself only using the Non-RMW operations of atomic_t, +you do not in fact need atomic_t at all and are doing it wrong.
That makes sense to me, although I now find that the sentence below is a bit confusing because it sounds like it's a caveat relating to only using Non-RMW ops.
The one detail to this is that atomic_set{}() should be observable to the RMW ops. That is:
How about changing this to be:
"A subtle detail of atomic_set{}() is that it should be observable..."
With that:
Acked-by: Will Deacon will.deacon@arm.com
Will
On Fri, May 24, 2019 at 12:42:20PM +0100, Will Deacon wrote:
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt index dca3fb0554db..125c95ddbbc0 100644 --- a/Documentation/atomic_t.txt +++ b/Documentation/atomic_t.txt @@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release() respectively.
Not sure you need a new paragraph here.
+Therefore, if you find yourself only using the Non-RMW operations of atomic_t, +you do not in fact need atomic_t at all and are doing it wrong.
That makes sense to me, although I now find that the sentence below is a bit confusing because it sounds like it's a caveat relating to only using Non-RMW ops.
The one detail to this is that atomic_set{}() should be observable to the RMW ops. That is:
How about changing this to be:
"A subtle detail of atomic_set{}() is that it should be observable..."
Done, find below.
--- Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage
Clarify that pure non-RMW usage of atomic_t is pointless, there is nothing 'magical' about atomic_set() / atomic_read().
This is something that seems to confuse people, because I happen upon it semi-regularly.
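A hypothetical example of the pattern being discouraged: if only the non-RMW ops are ever used, the atomic_t buys nothing over a plain variable accessed with READ_ONCE()/WRITE_ONCE():

	static atomic_t state;

	void set_state(int s)	{ atomic_set(&state, s); }
	int get_state(void)	{ return atomic_read(&state); }

	/* equivalent, since these non-RMW ops are canonically WRITE_ONCE()/READ_ONCE(): */
	static int plain_state;

	void set_plain_state(int s)	{ WRITE_ONCE(plain_state, s); }
	int get_plain_state(void)	{ return READ_ONCE(plain_state); }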
Acked-by: Will Deacon will.deacon@arm.com Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org --- Documentation/atomic_t.txt | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt index dca3fb0554db..89eae7f6b360 100644 --- a/Documentation/atomic_t.txt +++ b/Documentation/atomic_t.txt @@ -81,9 +81,11 @@ SEMANTICS
The non-RMW ops are (typically) regular LOADs and STOREs and are canonically implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and -smp_store_release() respectively. +smp_store_release() respectively. Therefore, if you find yourself only using +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all +and are doing it wrong.
-The one detail to this is that atomic_set{}() should be observable to the RMW +A subtle detail of atomic_set{}() is that it should be observable to the RMW ops. That is:
C atomic-set
Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage
Clarify that pure non-RMW usage of atomic_t is pointless, there is nothing 'magical' about atomic_set() / atomic_read().
This is something that seems to confuse people, because I happen upon it semi-regularly.
Acked-by: Will Deacon will.deacon@arm.com Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Documentation/atomic_t.txt | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt index dca3fb0554db..89eae7f6b360 100644 --- a/Documentation/atomic_t.txt +++ b/Documentation/atomic_t.txt @@ -81,9 +81,11 @@ SEMANTICS The non-RMW ops are (typically) regular LOADs and STOREs and are canonically implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and -smp_store_release() respectively. +smp_store_release() respectively. Therefore, if you find yourself only using +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all +and are doing it wrong.
The counterargument (not so theoretic, just look around in the kernel!) is: we all 'forget' to use READ_ONCE() and WRITE_ONCE(), it should be difficult or more difficult to forget to use atomic_read() and atomic_set()... IAC, I wouldn't call any of them 'wrong'.
Andrea
-The one detail to this is that atomic_set{}() should be observable to the RMW +A subtle detail of atomic_set{}() is that it should be observable to the RMW ops. That is: C atomic-set
On Sat, May 25, 2019 at 12:43:40AM +0200, Andrea Parri wrote:
Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage
Clarify that pure non-RMW usage of atomic_t is pointless, there is nothing 'magical' about atomic_set() / atomic_read().
This is something that seems to confuse people, because I happen upon it semi-regularly.
Acked-by: Will Deacon will.deacon@arm.com Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Documentation/atomic_t.txt | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt index dca3fb0554db..89eae7f6b360 100644 --- a/Documentation/atomic_t.txt +++ b/Documentation/atomic_t.txt @@ -81,9 +81,11 @@ SEMANTICS The non-RMW ops are (typically) regular LOADs and STOREs and are canonically implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and -smp_store_release() respectively. +smp_store_release() respectively. Therefore, if you find yourself only using +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all +and are doing it wrong.
The counterargument (not so theoretic, just look around in the kernel!) is: we all 'forget' to use READ_ONCE() and WRITE_ONCE(), it should be difficult or more difficult to forget to use atomic_read() and atomic_set()... IAC, I wouldn't call any of them 'wrong'.
I'm thinking you mean that the type system isn't helping us with READ/WRITE_ONCE() like it does with atomic_t ? And while I agree that there is room for improvement there, that doesn't mean we should start using atomic*_t all over the place for that.
Part of the problem with READ/WRITE_ONCE() is that it serves a dual purpose; we've tried to untangle that at some point, but Linus wasn't having it.
On Tue, May 28, 2019 at 12:47:19PM +0200, Peter Zijlstra wrote:
On Sat, May 25, 2019 at 12:43:40AM +0200, Andrea Parri wrote:
Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage
Clarify that pure non-RMW usage of atomic_t is pointless, there is nothing 'magical' about atomic_set() / atomic_read().
This is something that seems to confuse people, because I happen upon it semi-regularly.
Acked-by: Will Deacon will.deacon@arm.com Reviewed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Documentation/atomic_t.txt | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt index dca3fb0554db..89eae7f6b360 100644 --- a/Documentation/atomic_t.txt +++ b/Documentation/atomic_t.txt @@ -81,9 +81,11 @@ SEMANTICS The non-RMW ops are (typically) regular LOADs and STOREs and are canonically implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and -smp_store_release() respectively. +smp_store_release() respectively. Therefore, if you find yourself only using +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all +and are doing it wrong.
The counterargument (not so theoretic, just look around in the kernel!) is: we all 'forget' to use READ_ONCE() and WRITE_ONCE(), it should be difficult or more difficult to forget to use atomic_read() and atomic_set()... IAC, I wouldn't call any of them 'wrong'.
I'm thinking you mean that the type system isn't helping us with READ/WRITE_ONCE() like it does with atomic_t ?
Yep.
And while I agree that there is room for improvement there, that doesn't mean we should start using atomic*_t all over the place for that.
Agreed. But this still doesn't explain that "and are doing it wrong", AFAICT; maybe just remove that part?
Andrea
Part of the problem with READ/WRITE_ONCE() is that it serves a dual purpose; we've tried to untangle that at some point, but Linus wasn't having it.