Merge tag 'sched-core-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:

 - reorganize & clean up the SD* flags definitions and add a bunch of
   sanity checks. These new checks caught quite a few bugs or at least
   inconsistencies, resulting in another set of patches.

 - rseq updates, add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ

 - add a new tracepoint to improve CPU capacity tracking

 - improve overloaded SMP system load-balancing behavior

 - tweak SMT balancing

 - energy-aware scheduling updates

 - NUMA balancing improvements

 - deadline scheduler fixes and improvements

 - CPU isolation fixes

 - misc cleanups, simplifications and smaller optimizations

* tag 'sched-core-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (42 commits)
  sched/deadline: Unthrottle PI boosted threads while enqueuing
  sched/debug: Add new tracepoint to track cpu_capacity
  sched/fair: Tweak pick_next_entity()
  rseq/selftests: Test MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
  rseq/selftests,x86_64: Add rseq_offset_deref_addv()
  rseq/membarrier: Add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
  sched/fair: Use dst group while checking imbalance for NUMA balancer
  sched/fair: Reduce busy load balance interval
  sched/fair: Minimize concurrent LBs between domain level
  sched/fair: Reduce minimal imbalance threshold
  sched/fair: Relax constraint on task's load during load balance
  sched/fair: Remove the force parameter of update_tg_load_avg()
  sched/fair: Fix wrong cpu selecting from isolated domain
  sched: Remove unused inline function uclamp_bucket_base_value()
  sched/rt: Disable RT_RUNTIME_SHARE by default
  sched/deadline: Fix stale throttling on de-/boosted tasks
  sched/numa: Use runnable_avg to classify node
  sched/topology: Move sd_flag_debug out of #ifdef CONFIG_SYSCTL
  MAINTAINERS: Add myself as SCHED_DEADLINE reviewer
  sched/topology: Move SD_DEGENERATE_GROUPS_MASK out of linux/sched/topology.h
  ...
@@ -114,6 +114,26 @@
  *                          If this command is not implemented by an
  *                          architecture, -EINVAL is returned.
  *                          Returns 0 on success.
+ * @MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ:
+ *                          Ensure the caller thread, upon return from
+ *                          system call, that all its running thread
+ *                          siblings have any currently running rseq
+ *                          critical sections restarted if @flags
+ *                          parameter is 0; if @flags parameter is
+ *                          MEMBARRIER_CMD_FLAG_CPU,
+ *                          then this operation is performed only
+ *                          on CPU indicated by @cpu_id. If this command is
+ *                          not implemented by an architecture, -EINVAL
+ *                          is returned. A process needs to register its
+ *                          intent to use the private expedited rseq
+ *                          command prior to using it, otherwise
+ *                          this command returns -EPERM.
+ * @MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ:
+ *                          Register the process intent to use
+ *                          MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ.
+ *                          If this command is not implemented by an
+ *                          architecture, -EINVAL is returned.
+ *                          Returns 0 on success.
  * @MEMBARRIER_CMD_SHARED:
  *                          Alias to MEMBARRIER_CMD_GLOBAL. Provided for
  *                          header backward compatibility.
@@ -131,9 +151,15 @@ enum membarrier_cmd {
 	MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED		= (1 << 4),
 	MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE		= (1 << 5),
 	MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE	= (1 << 6),
+	MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ			= (1 << 7),
+	MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ		= (1 << 8),
 
 	/* Alias for header backward compatibility. */
 	MEMBARRIER_CMD_SHARED	= MEMBARRIER_CMD_GLOBAL,
 };
 
+enum membarrier_cmd_flag {
+	MEMBARRIER_CMD_FLAG_CPU		= (1 << 0),
+};
+
 #endif /* _UAPI_LINUX_MEMBARRIER_H */
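
Usage note (not part of the diff): the commands above are issued through the membarrier(2) system call, which with this series takes a command, a flags word and a cpu_id argument. Below is a minimal userspace sketch, assuming a kernel that implements these commands and calling syscall(2) directly since glibc does not provide a membarrier() wrapper; error handling is reduced to perror() for brevity.

/* Sketch only: register intent, then restart rseq critical sections. */
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

static int membarrier(int cmd, unsigned int flags, int cpu_id)
{
	return syscall(__NR_membarrier, cmd, flags, cpu_id);
}

int main(void)
{
	/* Register intent first; using the command without this returns -EPERM. */
	if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ, 0, 0)) {
		perror("membarrier register");	/* -EINVAL if not implemented */
		return 1;
	}

	/* Restart any rseq critical section running on sibling threads. */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ, 0, 0)) {
		perror("membarrier rseq");
		return 1;
	}

	/* Or restrict the operation to one CPU via MEMBARRIER_CMD_FLAG_CPU. */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
		       MEMBARRIER_CMD_FLAG_CPU, 0)) {
		perror("membarrier rseq, cpu 0");
		return 1;
	}

	return 0;
}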