atomics/treewide: Make atomic64_inc_not_zero() optional
We define a trivial fallback for atomic_inc_not_zero(), but don't do
the same for atomic64_inc_not_zero(), leading most architectures to
define the same boilerplate.

Let's add a fallback in <linux/atomic.h>, and remove the redundant
implementations. Note that atomic64_add_unless() is always defined in
<linux/atomic.h>, and promotes its arguments to the requisite types, so
we need not do this explicitly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-6-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
committed by: Ingo Molnar
parent: ade5ef9280
commit: bef828204a
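
For reference, the generic fallback described in the commit message can be sketched as below. This is a minimal reconstruction, assuming the kernel's usual #ifndef-guarded macro pattern in <linux/atomic.h>; the exact upstream definition may differ:

/*
 * Sketch of the generic fallback (reconstruction, not necessarily the
 * exact upstream code): an architecture that provides its own
 * atomic64_inc_not_zero() defines the macro first, and this default is
 * then skipped.
 */
#ifndef atomic64_inc_not_zero
#define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
#endif

With that default in place, per-architecture copies such as the one removed in the hunk below become redundant.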
@@ -375,13 +375,6 @@ static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 #endif
 
-#ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_inc_not_zero(atomic64_t *v)
-{
-	return atomic64_add_unless(v, 1, 0);
-}
-#endif
-
 /*
  * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as
  * {cmp,}xchg and the operations that return, so they need a full barrier.
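
For context, atomic64_inc_not_zero() is typically used for reference-count-style "tryget" operations: take a reference only while the object is still live, so that an object whose count has already dropped to zero cannot be revived. A hypothetical usage sketch follows; the struct obj type and obj_tryget() helper are illustrative only, not part of this patch:

/* Hypothetical example, not from this patch. */
struct obj {
	atomic64_t refs;
	/* ... payload ... */
};

/*
 * Take a reference only while the object is still live: the increment
 * is skipped once refs has reached zero, so a dying object cannot be
 * revived. Returns true if a reference was taken.
 */
static bool obj_tryget(struct obj *o)
{
	return atomic64_inc_not_zero(&o->refs);
}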