Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
 "These were the main changes in this cycle:

   - More -rt motivated separation of CONFIG_PREEMPT and
     CONFIG_PREEMPTION.

   - Add more low level scheduling topology sanity checks and warnings
     to filter out nonsensical topologies that break scheduling.

   - Extend uclamp constraints to influence wakeup CPU placement

   - Make the RT scheduler more aware of asymmetric topologies and CPU
     capacities, via uclamp metrics, if CONFIG_UCLAMP_TASK=y

   - Make idle CPU selection more consistent

   - Various fixes, smaller cleanups, updates and enhancements - please
     see the git log for details"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (58 commits)
  sched/fair: Define sched_idle_cpu() only for SMP configurations
  sched/topology: Assert non-NUMA topology masks don't (partially) overlap
  idle: fix spelling mistake "iterrupts" -> "interrupts"
  sched/fair: Remove redundant call to cpufreq_update_util()
  sched/psi: create /proc/pressure and /proc/pressure/{io|memory|cpu} only when psi enabled
  sched/fair: Fix sgc->{min,max}_capacity calculation for SD_OVERLAP
  sched/fair: calculate delta runnable load only when it's needed
  sched/cputime: move rq parameter in irqtime_account_process_tick
  stop_machine: Make stop_cpus() static
  sched/debug: Reset watchdog on all CPUs while processing sysrq-t
  sched/core: Fix size of rq::uclamp initialization
  sched/uclamp: Fix a bug in propagating uclamp value in new cgroups
  sched/fair: Load balance aggressively for SCHED_IDLE CPUs
  sched/fair : Improve update_sd_pick_busiest for spare capacity case
  watchdog: Remove soft_lockup_hrtimer_cnt and related code
  sched/rt: Make RT capacity-aware
  sched/fair: Make EAS wakeup placement consider uclamp restrictions
  sched/fair: Make task_fits_capacity() consider uclamp restrictions
  sched/uclamp: Rename uclamp_util_with() into uclamp_rq_util_with()
  sched/uclamp: Make uclamp util helpers use and return UL values
  ...
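The uclamp items above refer to the per-task utilization clamp interface (sched_setattr(2) with SCHED_FLAG_UTIL_CLAMP_MIN/MAX, available since v5.3); the changes in this cycle let such clamps also steer wakeup CPU placement and RT task placement on asymmetric-capacity systems. As a rough, hedged sketch of what a task-side hint looks like (not part of this commit; the struct layout mirrors the kernel uapi definition because glibc does not export it, and the value 512 is an arbitrary example):

#define _GNU_SOURCE
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_FLAG_UTIL_CLAMP_MIN
#define SCHED_FLAG_UTIL_CLAMP_MIN	0x20	/* from include/uapi/linux/sched.h */
#endif

/* glibc has no struct sched_attr; mirror the kernel uapi layout. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy = SCHED_OTHER;	/* policy is (re)applied by sched_setattr() */
	attr.sched_flags = SCHED_FLAG_UTIL_CLAMP_MIN;
	attr.sched_util_min = 512;		/* hint: at least ~half of max capacity (1024) */

	/* pid 0 == calling thread; trailing 0 == flags */
	if (syscall(SYS_sched_setattr, 0, &attr, 0))
		perror("sched_setattr");

	return 0;
}

On an asymmetric (e.g. big.LITTLE) system, a clamp like this biases wakeup placement toward CPUs whose capacity can satisfy it, per the EAS/RT capacity-awareness commits listed above.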
 kernel/cpu.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -525,8 +525,7 @@ static int bringup_wait_for_ap(unsigned int cpu)
 	if (WARN_ON_ONCE((!cpu_online(cpu))))
 		return -ECANCELED;
 
-	/* Unpark the stopper thread and the hotplug thread of the target cpu */
-	stop_machine_unpark(cpu);
+	/* Unpark the hotplug thread of the target cpu */
 	kthread_unpark(st->thread);
 
 	/*
@@ -1089,8 +1088,8 @@ void notify_cpu_starting(unsigned int cpu)
 
 /*
  * Called from the idle task. Wake up the controlling task which brings the
- * stopper and the hotplug thread of the upcoming CPU up and then delegates
- * the rest of the online bringup to the hotplug thread.
+ * hotplug thread of the upcoming CPU up and then delegates the rest of the
+ * online bringup to the hotplug thread.
  */
 void cpuhp_online_idle(enum cpuhp_state state)
 {
@@ -1100,6 +1099,12 @@ void cpuhp_online_idle(enum cpuhp_state state)
 	if (state != CPUHP_AP_ONLINE_IDLE)
 		return;
 
+	/*
+	 * Unpart the stopper thread before we start the idle loop (and start
+	 * scheduling); this ensures the stopper task is always available.
+	 */
+	stop_machine_unpark(smp_processor_id());
+
 	st->state = CPUHP_AP_ONLINE_IDLE;
 	complete_ap_thread(st, true);
 }
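The hunks above move stop_machine_unpark() from the controlling CPU's bringup_wait_for_ap() to the incoming CPU's cpuhp_online_idle(), so the stopper thread is unparked before the new CPU starts scheduling. A minimal, hedged way to exercise that online path from userspace (not part of this commit; assumes root and that cpu1 is hot-pluggable) is to toggle a CPU through sysfs:

#include <stdio.h>

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	return fclose(f);	/* 0 on success, EOF if the write failed */
}

int main(void)
{
	const char *online = "/sys/devices/system/cpu/cpu1/online";

	if (write_str(online, "0"))	/* take CPU 1 offline */
		return 1;
	if (write_str(online, "1"))	/* bring it back online */
		return 1;
	return 0;
}

Each re-online run goes through bringup_wait_for_ap() on the controlling CPU and cpuhp_online_idle() on the incoming one.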