Merge tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
Pull module updates from Rusty Russell:
 "Main excitement here is Peter Zijlstra's lockless rbtree optimization
  to speed module address lookup. He found some abusers of the module
  lock doing that too.

  A little bit of parameter work here too; including Dan Streetman's
  breaking up the big param mutex so writing a parameter can load
  another module (yeah, really). Unfortunately that broke the usual
  suspects, !CONFIG_MODULES and !CONFIG_SYSFS, so those fixes were
  appended too"

* tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (26 commits)
  modules: only use mod->param_lock if CONFIG_MODULES
  param: fix module param locks when !CONFIG_SYSFS.
  rcu: merge fix for Convert ACCESS_ONCE() to READ_ONCE() and WRITE_ONCE()
  module: add per-module param_lock
  module: make perm const
  params: suppress unused variable error, warn once just in case code changes.
  modules: clarify CONFIG_MODULE_COMPRESS help, suggest 'N'.
  kernel/module.c: avoid ifdefs for sig_enforce declaration
  kernel/workqueue.c: remove ifdefs over wq_power_efficient
  kernel/params.c: export param_ops_bool_enable_only
  kernel/params.c: generalize bool_enable_only
  kernel/module.c: use generic module param operaters for sig_enforce
  kernel/params: constify struct kernel_param_ops uses
  sysfs: tightened sysfs permission checks
  module: Rework module_addr_{min,max}
  module: Use __module_address() for module_address_lookup()
  module: Make the mod_tree stuff conditional on PERF_EVENTS || TRACING
  module: Optimize __module_address() using a latched RB-tree
  rbtree: Implement generic latch_tree
  seqlock: Introduce raw_read_seqcount_latch()
  ...
@@ -35,6 +35,7 @@
#include <linux/spinlock.h>
#include <linux/preempt.h>
#include <linux/lockdep.h>
#include <linux/compiler.h>
#include <asm/processor.h>

/*

@@ -274,9 +275,87 @@ static inline void raw_write_seqcount_barrier(seqcount_t *s)
	s->sequence++;
}

/*
static inline int raw_read_seqcount_latch(seqcount_t *s)
{
	return lockless_dereference(s->sequence);
}

/**
 * raw_write_seqcount_latch - redirect readers to even/odd copy
 * @s: pointer to seqcount_t
 *
 * The latch technique is a multiversion concurrency control method that allows
 * queries during non-atomic modifications. If you can guarantee queries never
 * interrupt the modification -- e.g. the concurrency is strictly between CPUs
 * -- you most likely do not need this.
 *
 * Where the traditional RCU/lockless data structures rely on atomic
 * modifications to ensure queries observe either the old or the new state the
 * latch allows the same for non-atomic updates. The trade-off is doubling the
 * cost of storage; we have to maintain two copies of the entire data
 * structure.
 *
 * Very simply put: we first modify one copy and then the other. This ensures
 * there is always one copy in a stable state, ready to give us an answer.
 *
 * The basic form is a data structure like:
 *
 * struct latch_struct {
 *	seqcount_t		seq;
 *	struct data_struct	data[2];
 * };
 *
 * Where a modification, which is assumed to be externally serialized, does the
 * following:
 *
 * void latch_modify(struct latch_struct *latch, ...)
 * {
 *	smp_wmb();	<- Ensure that the last data[1] update is visible
 *	latch->seq++;
 *	smp_wmb();	<- Ensure that the seqcount update is visible
 *
 *	modify(latch->data[0], ...);
 *
 *	smp_wmb();	<- Ensure that the data[0] update is visible
 *	latch->seq++;
 *	smp_wmb();	<- Ensure that the seqcount update is visible
 *
 *	modify(latch->data[1], ...);
 * }
 *
 * The query will have a form like:
 *
 * struct entry *latch_query(struct latch_struct *latch, ...)
 * {
 *	struct entry *entry;
 *	unsigned seq, idx;
 *
 *	do {
 *		seq = lockless_dereference(latch->seq);
 *
 *		idx = seq & 0x01;
 *		entry = data_query(latch->data[idx], ...);
 *
 *		smp_rmb();
 *	} while (seq != latch->seq);
 *
 *	return entry;
 * }
 *
 * So during the modification, queries are first redirected to data[1]. Then we
 * modify data[0]. When that is complete, we redirect queries back to data[0]
 * and we can modify data[1].
 *
 * NOTE: The non-requirement for atomic modifications does _NOT_ include
 *       the publishing of new entries in the case where data is a dynamic
 *       data structure.
 *
 *       An iteration might start in data[0] and get suspended long enough
 *       to miss an entire modification sequence, once it resumes it might
 *       observe the new entry.
 *
 * NOTE: When data is a dynamic data structure; one should use regular RCU
 *       patterns to manage the lifetimes of the objects within.
 */
static inline void raw_write_seqcount_latch(seqcount_t *s)
{
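Illustrative sketch (not part of the patch above): a minimal caller that pairs the new raw_read_seqcount_latch() with raw_write_seqcount_latch(), following the latch_modify()/latch_query() pattern documented in the comment. The names my_latch, my_latch_update() and my_latch_read() are invented for this example; raw_write_seqcount_latch() wraps the sequence increment in the smp_wmb() barriers shown in the pseudo-code, and read_seqcount_retry() provides the smp_rmb() plus the re-check on the read side.

#include <linux/seqlock.h>

/* Hypothetical example structure: a two-copy latched value. */
struct my_latch {
	seqcount_t	seq;		/* set up with seqcount_init() */
	unsigned long	data[2];
};

/* Writer: assumed externally serialized, e.g. under a mutex. */
static void my_latch_update(struct my_latch *l, unsigned long val)
{
	raw_write_seqcount_latch(&l->seq);	/* readers now use data[1] */
	l->data[0] = val;
	raw_write_seqcount_latch(&l->seq);	/* readers back on data[0] */
	l->data[1] = val;
}

/* Reader: may run concurrently with the writer, even from NMI context. */
static unsigned long my_latch_read(struct my_latch *l)
{
	unsigned long val;
	unsigned int seq;

	do {
		seq = raw_read_seqcount_latch(&l->seq);
		val = l->data[seq & 0x01];	/* odd sequence -> data[1] */
	} while (read_seqcount_retry(&l->seq, seq));

	return val;
}

The latched RB-tree added elsewhere in this series ("rbtree: Implement generic latch_tree") uses the same read loop, which is what allows "module: Optimize __module_address() using a latched RB-tree" to do lockless module address lookups.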