Merge tag 'locking-urgent-2020-06-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull atomics rework from Thomas Gleixner:
 "Peter Zijlstra's rework of atomics and fallbacks. This solves two
  problems:

   1) Compilers uninline small atomic_* static inline functions which
      can expose them to instrumentation.

   2) The instrumentation of atomic primitives was done at the
      architecture level while composites or fallbacks were provided at
      the generic level. As a result there are no uninstrumented
      variants of the fallbacks.

  Both issues were in the way of fully isolating fragile entry code
  paths and especially the text poke int3 handler which is prone to an
  endless recursion problem when anything in that code path is about to
  be instrumented. This was always a problem, but got elevated due to
  the new batch mode updates of tracing.

  The solution is to mark the functions __always_inline and to flip the
  fallback and instrumentation so the non-instrumented variants are at
  the architecture level and the instrumentation is done in generic
  code. The latter introduces another fallback variant which will go
  away once all architectures have been moved over to arch_atomic_*"

* tag 'locking-urgent-2020-06-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/atomics: Flip fallbacks and instrumentation
  asm-generic/atomic: Use __always_inline for fallback wrappers
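To make the flip concrete, here is a simplified sketch of the two halves (mine, not the literal kernel headers, which are generated by the scripts under scripts/atomic/): an arch-level fallback that is __always_inline and carries no instrumentation, and a generic wrapper that layers the sanitizer hook on top of the raw arch_atomic_* primitive. Whether that hook is kasan_check_write() or instrument_atomic_write() depends on the exact kernel version:

/*
 * Simplified sketch of the flipped scheme -- not the generated code
 * from scripts/atomic/.
 *
 * include/linux/atomic-arch-fallback.h style: the fallback is built
 * from the relaxed arch primitive, carries no instrumentation, and is
 * __always_inline so the compiler can never emit it out of line where
 * tracing or sanitizers could hook it.
 */
#ifndef arch_atomic_fetch_add_acquire
static __always_inline int
arch_atomic_fetch_add_acquire(int i, atomic_t *v)
{
	int ret = arch_atomic_fetch_add_relaxed(i, v);
	__atomic_acquire_fence();
	return ret;
}
#endif

/*
 * asm-generic/atomic-instrumented.h style: instrumentation lives in
 * generic code, wrapping the raw arch_ variant. Entry code and the
 * text poke int3 handler can keep calling arch_atomic_*() directly
 * and stay uninstrumented.
 */
static __always_inline int
atomic_fetch_add_acquire(int i, atomic_t *v)
{
	kasan_check_write(v, sizeof(*v));	/* sanitizer hook */
	return arch_atomic_fetch_add_acquire(i, v);
}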
 include/linux/atomic-arch-fallback.h | 2291 ++++++++++++++++++++++++++++++++ (new file)

File diff suppressed because it is too large.
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -25,6 +25,12 @@
  * See Documentation/memory-barriers.txt for ACQUIRE/RELEASE definitions.
  */
 
+#define atomic_cond_read_acquire(v, c) smp_cond_load_acquire(&(v)->counter, (c))
+#define atomic_cond_read_relaxed(v, c) smp_cond_load_relaxed(&(v)->counter, (c))
+
+#define atomic64_cond_read_acquire(v, c) smp_cond_load_acquire(&(v)->counter, (c))
+#define atomic64_cond_read_relaxed(v, c) smp_cond_load_relaxed(&(v)->counter, (c))
+
 /*
  * The idea here is to build acquire/release variants by adding explicit
  * barriers on top of the relaxed variant. In the case where the relaxed
@@ -71,7 +77,12 @@
 	__ret;						\
 })
 
+#ifdef ARCH_ATOMIC
+#include <linux/atomic-arch-fallback.h>
+#include <asm-generic/atomic-instrumented.h>
+#else
 #include <linux/atomic-fallback.h>
+#endif
 
 #include <asm-generic/atomic-long.h>
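For reference, atomic_cond_read_acquire() busy-waits until a condition on the atomic's value holds and gives the satisfying load ACQUIRE ordering; smp_cond_load_acquire() exposes the current value to the condition expression as VAL. A hypothetical wait-for-flag use (READY is an assumed flag bit for illustration, not from this commit):

#define READY	0x1	/* hypothetical flag bit, illustration only */

/*
 * Spin until another CPU sets READY in *state. The read that observes
 * it has ACQUIRE semantics, so everything published before the
 * corresponding release store is visible afterwards.
 */
static void wait_until_ready(atomic_t *state)
{
	atomic_cond_read_acquire(state, VAL & READY);
}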