=========================
Capacity Aware Scheduling
=========================

1. CPU Capacity
===============

1.1 Introduction
----------------

Conventional, homogeneous SMP platforms are composed of purely identical
CPUs. Heterogeneous platforms on the other hand are composed of CPUs with
different performance characteristics - on such platforms, not all CPUs can be
considered equal.

CPU capacity is a measure of the performance a CPU can reach, normalized against
the most performant CPU in the system. Heterogeneous systems are also called
asymmetric CPU capacity systems, as they contain CPUs of different capacities.

Disparity in maximum attainable performance (IOW in maximum CPU capacity) stems
from two factors:

- not all CPUs may have the same microarchitecture (µarch).
- with Dynamic Voltage and Frequency Scaling (DVFS), not all CPUs may be
  physically able to attain the higher Operating Performance Points (OPP).

Arm big.LITTLE systems are an example of both. The big CPUs are more
performance-oriented than the LITTLE ones (more pipeline stages, bigger caches,
smarter predictors, etc), and can usually reach higher OPPs than the LITTLE ones
can.

CPU performance is usually expressed in Millions of Instructions Per Second
(MIPS), which can also be expressed as a given amount of instructions attainable
per Hz, leading to::

  capacity(cpu) = work_per_hz(cpu) * max_freq(cpu)
1.2 Scheduler terms
-------------------

Two different capacity values are used within the scheduler. A CPU's
``capacity_orig`` is its maximum attainable capacity, i.e. its maximum
attainable performance level. A CPU's ``capacity`` is its ``capacity_orig`` from
which some loss of available performance (e.g. time spent handling IRQs) is
subtracted.

Note that a CPU's ``capacity`` is solely intended to be used by the CFS class,
while ``capacity_orig`` is class-agnostic. The rest of this document will use
the term ``capacity`` interchangeably with ``capacity_orig`` for the sake of
brevity.
1.3 Platform examples
---------------------

1.3.1 Identical OPPs
~~~~~~~~~~~~~~~~~~~~

Consider a hypothetical dual-core asymmetric CPU capacity system where

- work_per_hz(CPU0) = W
- work_per_hz(CPU1) = W/2
- all CPUs are running at the same fixed frequency

By the above definition of capacity:

- capacity(CPU0) = C
- capacity(CPU1) = C/2

To draw the parallel with Arm big.LITTLE, CPU0 would be a big while CPU1 would
be a LITTLE.

With a workload that periodically does a fixed amount of work, you will get an
execution trace like so::

  CPU0 work ^
            |     ____                ____               ____
            |    |    |              |    |             |    |
            +----+----+----+----+----+----+----+----+----+----+-> time

  CPU1 work ^
            |     _________           _________          ____
            |    |         |         |         |        |
            +----+----+----+----+----+----+----+----+----+----+-> time

CPU0 has the highest capacity in the system (C), and completes a fixed amount of
work W in T units of time. On the other hand, CPU1 has half the capacity of
CPU0, and thus only completes W/2 in T.
1.3.2 Different max OPPs
~~~~~~~~~~~~~~~~~~~~~~~~

Usually, CPUs of different capacity values also have different maximum
OPPs. Consider the same CPUs as above (i.e. same work_per_hz()) with:

- max_freq(CPU0) = F
- max_freq(CPU1) = 2/3 * F

This yields:

- capacity(CPU0) = C
- capacity(CPU1) = C/3

Executing the same workload as described in 1.3.1, with each CPU running at its
maximum frequency, results in::

  CPU0 work ^
            |     ____                ____               ____
            |    |    |              |    |             |    |
            +----+----+----+----+----+----+----+----+----+----+-> time

  CPU1 work ^
            |     ______________      ______________      ____
            |    |              |    |              |    |
            +----+----+----+----+----+----+----+----+----+----+-> time
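
To make the arithmetic of these two examples concrete, here is a small
user-space C sketch (illustrative only, not kernel code) that applies the
capacity definition from section 1.1 to both platforms, using made-up values
for W and F, and normalizes the result against the most performant CPU the way
the scheduler's 1024-based capacity scale does:

.. code-block:: c

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024

  /* capacity(cpu) = work_per_hz(cpu) * max_freq(cpu) */
  static double raw_capacity(double work_per_hz, double max_freq_hz)
  {
          return work_per_hz * max_freq_hz;
  }

  int main(void)
  {
          const double W = 4.0;   /* instructions per cycle, hypothetical */
          const double F = 2.0e9; /* 2 GHz, hypothetical */

          /* 1.3.1: identical max OPPs */
          double c0 = raw_capacity(W, F);
          double c1 = raw_capacity(W / 2, F);

          printf("1.3.1: capacity(CPU1)/capacity(CPU0) = %.2f\n", c1 / c0);

          /* 1.3.2: CPU1 additionally tops out at 2/3 * F */
          c1 = raw_capacity(W / 2, 2.0 / 3.0 * F);
          printf("1.3.2: capacity(CPU1)/capacity(CPU0) = %.2f\n", c1 / c0);

          /* Normalized against the biggest CPU, as the scheduler does */
          printf("1.3.2: scaled capacities: CPU0 = %d, CPU1 = %d\n",
                 (int)(SCHED_CAPACITY_SCALE * c0 / c0),
                 (int)(SCHED_CAPACITY_SCALE * c1 / c0));
          return 0;
  }

This prints a CPU1:CPU0 ratio of 0.50 for 1.3.1 and 0.33 for 1.3.2, i.e. the
C/2 and C/3 figures above, and scaled capacities of 1024 and 341.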
1.4 Representation caveat
-------------------------

It should be noted that having a *single* value to represent differences in CPU
performance is somewhat of a contentious point. The relative performance
difference between two different µarchs could be X% on integer operations, Y% on
floating point operations, Z% on branches, and so on. Still, results using this
simple approach have been satisfactory for now.
2. Task utilization
===================

2.1 Introduction
----------------

Capacity aware scheduling requires an expression of a task's requirements with
regards to CPU capacity. Each scheduler class can express this differently, and
while task utilization is specific to CFS, it is convenient to describe it here
in order to introduce more generic concepts.

Task utilization is a percentage meant to represent the throughput requirements
of a task. A simple approximation of it is the task's duty cycle, i.e.::

  task_util(p) = duty_cycle(p)

On an SMP system with fixed frequencies, 100% utilization suggests the task is a
busy loop. Conversely, 10% utilization hints it is a small periodic task that
spends more time sleeping than executing. Variable CPU frequencies and
asymmetric CPU capacities complicate this somewhat; the following sections will
expand on these.
2.2 Frequency invariance
------------------------

One issue that needs to be taken into account is that a workload's duty cycle is
directly impacted by the current OPP the CPU is running at. Consider running a
periodic workload at a given frequency F::

  CPU work ^
           |     ____                ____               ____
           |    |    |              |    |             |    |
           +----+----+----+----+----+----+----+----+----+----+-> time

This yields duty_cycle(p) == 25%.

Now, consider running the *same* workload at frequency F/2::

  CPU work ^
           |     _________           _________          ____
           |    |         |         |         |        |
           +----+----+----+----+----+----+----+----+----+----+-> time

This yields duty_cycle(p) == 50%, despite the task having the exact same
behaviour (i.e. executing the same amount of work) in both executions.

The task utilization signal can be made frequency invariant using the following
formula::

  task_util_freq_inv(p) = duty_cycle(p) * (curr_frequency(cpu) / max_frequency(cpu))

Applying this formula to the two examples above yields a frequency invariant
task utilization of 25%.
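
As a sanity check, the following user-space C sketch (illustrative only, not
the kernel's implementation) applies this formula to the two traces above,
i.e. the same task observed at F and at F/2:

.. code-block:: c

  #include <stdio.h>

  /* task_util_freq_inv(p) = duty_cycle(p) * curr_frequency(cpu) / max_frequency(cpu) */
  static double task_util_freq_inv(double duty_cycle, double curr_freq,
                                   double max_freq)
  {
          return duty_cycle * curr_freq / max_freq;
  }

  int main(void)
  {
          const double F = 1.0e9; /* hypothetical max frequency */

          /* Trace 1: running at F, observed duty cycle of 25% */
          printf("at F:   %.0f%%\n", 100 * task_util_freq_inv(0.25, F, F));

          /* Trace 2: same work at F/2, observed duty cycle of 50% */
          printf("at F/2: %.0f%%\n", 100 * task_util_freq_inv(0.50, F / 2, F));

          /* Both print 25%: the signal no longer depends on the current OPP */
          return 0;
  }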
2.3 CPU invariance
------------------

CPU capacity has a similar effect on task utilization in that running an
identical workload on CPUs of different capacity values will yield different
duty cycles.

Consider the system described in 1.3.2, i.e.::

  - capacity(CPU0) = C
  - capacity(CPU1) = C/3

Executing a given periodic workload on each CPU at their maximum frequency would
result in::

  CPU0 work ^
            |     ____                ____               ____
            |    |    |              |    |             |    |
            +----+----+----+----+----+----+----+----+----+----+-> time

  CPU1 work ^
            |     ______________      ______________      ____
            |    |              |    |              |    |
            +----+----+----+----+----+----+----+----+----+----+-> time

IOW,

- duty_cycle(p) == 25% if p runs on CPU0 at its maximum frequency
- duty_cycle(p) == 75% if p runs on CPU1 at its maximum frequency

The task utilization signal can be made CPU invariant using the following
formula::

  task_util_cpu_inv(p) = duty_cycle(p) * (capacity(cpu) / max_capacity)

with ``max_capacity`` being the highest CPU capacity value in the
system. Applying this formula to the above example yields a CPU invariant task
utilization of 25%.
2.4 Invariant task utilization
------------------------------

Both frequency and CPU invariance need to be applied to task utilization in
order to obtain a truly invariant signal. The pseudo-formula for a task
utilization that is both CPU and frequency invariant is thus, for a given
task p::

                                       curr_frequency(cpu)   capacity(cpu)
  task_util_inv(p) = duty_cycle(p) * ------------------- * -------------
                                       max_frequency(cpu)    max_capacity

In other words, invariant task utilization describes the behaviour of a task as
if it were running on the highest-capacity CPU in the system, running at its
maximum frequency.

Any mention of task utilization in the following sections will imply its
invariant form.
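
The combined formula can be exercised with the same kind of sketch
(illustrative values only). It is fed the observations from the previous
sections: p running on CPU0 at F/2, and p running on CPU1 (capacity C/3) at
CPU1's maximum OPP; both collapse to the same invariant utilization of 25%:

.. code-block:: c

  #include <stdio.h>

  /* task_util_inv(p) = duty_cycle(p) * (curr_freq / max_freq) * (capacity / max_capacity) */
  static double task_util_inv(double duty_cycle,
                              double curr_freq, double max_freq,
                              double capacity, double max_capacity)
  {
          return duty_cycle * (curr_freq / max_freq) * (capacity / max_capacity);
  }

  int main(void)
  {
          const double C = 1024.0; /* capacity of the biggest CPU */

          /* p on CPU0 (capacity C), running at half of CPU0's max frequency */
          printf("CPU0 @ F/2:     %.0f%%\n",
                 100 * task_util_inv(0.50, 0.5, 1.0, C, C));

          /* p on CPU1 (capacity C/3), running at CPU1's max frequency */
          printf("CPU1 @ max OPP: %.0f%%\n",
                 100 * task_util_inv(0.75, 1.0, 1.0, C / 3, C));

          return 0;
  }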
2.5 Utilization estimation
--------------------------

Without a crystal ball, task behaviour (and thus task utilization) cannot
accurately be predicted the moment a task first becomes runnable. The CFS class
maintains a handful of CPU and task signals based on the Per-Entity Load
Tracking (PELT) mechanism, one of those yielding an *average* utilization (as
opposed to instantaneous).

This means that while the capacity aware scheduling criteria will be written
considering a "true" task utilization (using a crystal ball), the implementation
will only ever be able to use an estimator thereof.
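
For intuition only, here is a toy model of such an estimator: a geometrically
decaying average of the duty cycle observed in each period. The decay factor
and period counts are arbitrary, and the real PELT implementation
(kernel/sched/pelt.c) differs in many details; the point is merely that the
estimate lags behind sudden changes in task behaviour:

.. code-block:: c

  #include <stdio.h>

  /* Each period, the estimate moves toward the duty cycle just observed;
   * older behaviour decays geometrically. */
  static double update_avg(double avg, double observed_duty_cycle, double decay)
  {
          return decay * avg + (1.0 - decay) * observed_duty_cycle;
  }

  int main(void)
  {
          const double decay = 0.8; /* arbitrary decay factor */
          double avg = 0.0;
          int i;

          /* Task behaves like a 25% duty cycle task for 20 periods... */
          for (i = 0; i < 20; i++)
                  avg = update_avg(avg, 0.25, decay);
          printf("after steady 25%% phase: ~%.0f%%\n", 100 * avg);

          /* ...then suddenly becomes a busy loop; the estimate only catches
           * up over the following periods. */
          for (i = 0; i < 5; i++) {
                  avg = update_avg(avg, 1.0, decay);
                  printf("busy-loop period %d: ~%.0f%%\n", i + 1, 100 * avg);
          }
          return 0;
  }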
3. Capacity aware scheduling requirements
=========================================

3.1 CPU capacity
----------------

Linux cannot currently figure out CPU capacity on its own; this information thus
needs to be handed to it. Architectures must define arch_scale_cpu_capacity()
for that purpose.

The arm and arm64 architectures directly map this to the arch_topology driver
CPU scaling data, which is derived from the capacity-dmips-mhz CPU binding; see
Documentation/devicetree/bindings/arm/cpu-capacity.txt.
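
As a rough model of that derivation (the actual arch_topology arithmetic is
simplified here), the per-CPU capacity ends up proportional to the
capacity-dmips-mhz value times the CPU's maximum frequency, normalized so that
the most performant CPU reads SCHED_CAPACITY_SCALE (1024). The DT values below
are hypothetical:

.. code-block:: c

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024
  #define NR_CPUS 4

  int main(void)
  {
          /* Hypothetical DT data: capacity-dmips-mhz and max frequency (kHz) */
          const unsigned long long dmips_mhz[NR_CPUS] = { 1024, 1024, 446, 446 };
          const unsigned long long max_freq_khz[NR_CPUS] = { 2400000, 2400000,
                                                             1800000, 1800000 };
          unsigned long long raw[NR_CPUS], max_raw = 0;
          int cpu;

          /* raw capacity ~ dmips/MHz * max frequency */
          for (cpu = 0; cpu < NR_CPUS; cpu++) {
                  raw[cpu] = dmips_mhz[cpu] * max_freq_khz[cpu];
                  if (raw[cpu] > max_raw)
                          max_raw = raw[cpu];
          }

          /* Normalize so the most performant CPU reads SCHED_CAPACITY_SCALE */
          for (cpu = 0; cpu < NR_CPUS; cpu++)
                  printf("CPU%d: capacity = %llu\n", cpu,
                         raw[cpu] * SCHED_CAPACITY_SCALE / max_raw);

          return 0;
  }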
3.2 Frequency invariance
------------------------

As stated in 2.2, capacity-aware scheduling requires a frequency-invariant task
utilization. Architectures must define arch_scale_freq_capacity(cpu) for that
purpose.

Implementing this function requires figuring out at which frequency each CPU has
been running. One way to implement this is to leverage hardware counters whose
increment rate scales with a CPU's current frequency (APERF/MPERF on x86, AMU on
arm64). Another is to directly hook into cpufreq frequency transitions, when the
kernel is aware of the switched-to frequency (also employed by arm/arm64).
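
A simplified model of the counter-based approach follows; it is not the actual
x86 or arm64 code. It assumes one counter that ticks at the CPU's current
frequency and one that ticks at a constant rate corresponding to the maximum
frequency, and turns the ratio of their deltas over a sampling window into a
capacity-scaled factor:

.. code-block:: c

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024

  /*
   * delta_cyc ticks at the CPU's current frequency, delta_ref at a constant
   * rate corresponding to the CPU's maximum frequency; their ratio over the
   * last sampling window approximates curr_freq / max_freq.
   */
  static unsigned long freq_scale(unsigned long long delta_cyc,
                                  unsigned long long delta_ref)
  {
          return (unsigned long)(delta_cyc * SCHED_CAPACITY_SCALE / delta_ref);
  }

  int main(void)
  {
          /* CPU spent the window at its maximum frequency */
          printf("full speed: %lu\n", freq_scale(1000000, 1000000)); /* 1024 */

          /* CPU spent the window at half of its maximum frequency */
          printf("half speed: %lu\n", freq_scale(500000, 1000000));  /* 512 */

          return 0;
  }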
4. Scheduler topology
=====================

During the construction of the sched domains, the scheduler will figure out
whether the system exhibits asymmetric CPU capacities. Should that be the
case:

- The sched_asym_cpucapacity static key will be enabled.
- The SD_ASYM_CPUCAPACITY_FULL flag will be set at the lowest sched_domain
  level that spans all unique CPU capacity values.
- The SD_ASYM_CPUCAPACITY flag will be set for any sched_domain that spans
  CPUs with any range of asymmetry.

The sched_asym_cpucapacity static key is intended to guard sections of code that
cater to asymmetric CPU capacity systems. Do note however that said key is
*system-wide*. Imagine the following setup using cpusets::

  capacity    C/2          C
            ________    ________
           /        \  /        \
  CPUs     0  1  2  3  4  5  6  7
           \__/  \______________/
  cpusets   cs0         cs1

Which could be created via:

.. code-block:: sh

  mkdir /sys/fs/cgroup/cpuset/cs0
  echo 0-1 > /sys/fs/cgroup/cpuset/cs0/cpuset.cpus
  echo 0 > /sys/fs/cgroup/cpuset/cs0/cpuset.mems

  mkdir /sys/fs/cgroup/cpuset/cs1
  echo 2-7 > /sys/fs/cgroup/cpuset/cs1/cpuset.cpus
  echo 0 > /sys/fs/cgroup/cpuset/cs1/cpuset.mems

  echo 0 > /sys/fs/cgroup/cpuset/cpuset.sched_load_balance

Since there *is* CPU capacity asymmetry in the system, the
sched_asym_cpucapacity static key will be enabled. However, the sched_domain
hierarchy of CPUs 0-1 spans a single capacity value: SD_ASYM_CPUCAPACITY isn't
set in that hierarchy, it describes an SMP island and should be treated as such.

Therefore, the 'canonical' pattern for protecting codepaths that cater to
asymmetric CPU capacities is to:

- Check the sched_asym_cpucapacity static key
- If it is enabled, then also check for the presence of SD_ASYM_CPUCAPACITY in
  the sched_domain hierarchy (if relevant, i.e. the codepath targets a specific
  CPU or group thereof)
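
Expressed as code, that two-step pattern could look like the sketch below. It
is a user-space model, not kernel code: the static key is modelled as a plain
boolean, and sd_flags() stands in for walking the sched_domain hierarchy of the
CPU of interest (the values mirror the cpuset example above):

.. code-block:: c

  #include <stdbool.h>
  #include <stdio.h>

  #define SD_ASYM_CPUCAPACITY 0x1 /* illustrative flag value */

  /* Models the system-wide sched_asym_cpucapacity static key */
  static bool sched_asym_cpucapacity = true;

  /*
   * Stand-in for querying the sched_domain hierarchy of @cpu. In the cpuset
   * example above, CPUs 0-1 (cs0) span a single capacity value, so their
   * hierarchy does not carry SD_ASYM_CPUCAPACITY.
   */
  static unsigned int sd_flags(int cpu)
  {
          return cpu <= 1 ? 0 : SD_ASYM_CPUCAPACITY;
  }

  static bool cpu_needs_asym_handling(int cpu)
  {
          /* Step 1: cheap system-wide check */
          if (!sched_asym_cpucapacity)
                  return false;

          /* Step 2: is this CPU actually part of an asymmetric hierarchy? */
          return sd_flags(cpu) & SD_ASYM_CPUCAPACITY;
  }

  int main(void)
  {
          int cpu;

          for (cpu = 0; cpu < 8; cpu++)
                  printf("CPU%d: %s\n", cpu,
                         cpu_needs_asym_handling(cpu) ? "asymmetric" : "SMP island");
          return 0;
  }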
5. Capacity aware scheduling implementation
===========================================

5.1 CFS
-------

5.1.1 Capacity fitness
~~~~~~~~~~~~~~~~~~~~~~

The main capacity scheduling criterion of CFS is::

  task_util(p) < capacity(task_cpu(p))

This is commonly called the capacity fitness criterion, i.e. CFS must ensure a
task "fits" on its CPU. If it is violated, the task will demand more work than
its CPU can provide: it will be CPU-bound.

Furthermore, uclamp lets userspace specify a minimum and a maximum utilization
value for a task, either via sched_setattr() or via the cgroup interface (see
Documentation/admin-guide/cgroup-v2.rst). As its name implies, this can be used
to clamp task_util() in the previous criterion.
5.1.2 Wakeup CPU selection
~~~~~~~~~~~~~~~~~~~~~~~~~~

CFS task wakeup CPU selection follows the capacity fitness criterion described
above. On top of that, uclamp is used to clamp the task utilization values,
which lets userspace have more leverage over the CPU selection of CFS
tasks. IOW, CFS wakeup CPU selection searches for a CPU that satisfies::

  clamp(task_util(p), task_uclamp_min(p), task_uclamp_max(p)) < capacity(cpu)

By using uclamp, userspace can e.g. allow a busy loop (100% utilization) to run
on any CPU by giving it a low uclamp.max value. Conversely, it can force a small
periodic task (e.g. 10% utilization) to run on the highest-performance CPUs by
giving it a high uclamp.min value.
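
The sketch below (illustrative user-space C, with utilization and capacity
expressed on the usual 1024 scale) evaluates this clamped criterion for the two
uclamp examples just given:

.. code-block:: c

  #include <stdbool.h>
  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024

  static unsigned long clamp(unsigned long val, unsigned long lo, unsigned long hi)
  {
          return val < lo ? lo : (val > hi ? hi : val);
  }

  /* clamp(task_util(p), task_uclamp_min(p), task_uclamp_max(p)) < capacity(cpu) */
  static bool fits(unsigned long util, unsigned long umin, unsigned long umax,
                   unsigned long capacity)
  {
          return clamp(util, umin, umax) < capacity;
  }

  int main(void)
  {
          const unsigned long little = 341; /* hypothetical LITTLE capacity */
          const unsigned long big = SCHED_CAPACITY_SCALE;

          /* Busy loop (100% util) with a low uclamp.max: allowed on a LITTLE */
          printf("busy loop, uclamp.max=256: fits LITTLE? %d\n",
                 fits(1024, 0, 256, little));

          /* Small task (10% util) with a high uclamp.min: only fits the big */
          printf("10%% task, uclamp.min=768: fits LITTLE? %d, fits big? %d\n",
                 fits(102, 768, 1024, little), fits(102, 768, 1024, big));

          return 0;
  }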
.. note::

  Wakeup CPU selection in CFS can be eclipsed by Energy Aware Scheduling
  (EAS), which is described in Documentation/scheduler/sched-energy.rst.
5.1.3 Load balancing
~~~~~~~~~~~~~~~~~~~~

A pathological case in the wakeup CPU selection occurs when a task rarely
sleeps, if at all - it thus rarely wakes up, if at all. Consider::

  w == wakeup event

  capacity(CPU0) = C
  capacity(CPU1) = C / 3

                            workload on CPU0
  CPU work ^
           |     _________           _________           ____
           |    |         |         |         |         |
           +----+----+----+----+----+----+----+----+----+----+-> time
                w                   w                   w

                            workload on CPU1
  CPU work ^
           |     ____________________________________________
           |    |
           +----+----+----+----+----+----+----+----+----+----+->
                w

This workload should run on CPU0, but if the task either:

- was improperly scheduled from the start (inaccurate initial
  utilization estimation)
- was properly scheduled from the start, but suddenly needs more
  processing power

then it might become CPU-bound, IOW ``task_util(p) > capacity(task_cpu(p))``;
the CPU capacity scheduling criterion is violated, and there may not be any more
wakeup event to fix this up via wakeup CPU selection.

Tasks that are in this situation are dubbed "misfit" tasks, and the mechanism
put in place to handle this shares the same name. Misfit task migration
leverages the CFS load balancer, more specifically the active load balance part
(which caters to migrating currently running tasks). When load balance happens,
a misfit active load balance will be triggered if a misfit task can be migrated
to a CPU with more capacity than its current one.
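
A minimal model of that misfit check is sketched below; the helper names and
the destination-CPU search are illustrative, and the real logic lives in the
CFS load balancer:

.. code-block:: c

  #include <stdbool.h>
  #include <stdio.h>

  #define NR_CPUS 2

  /* Hypothetical capacities on the 1024 scale: CPU0 = C, CPU1 = C/3 */
  static const unsigned long capacity[NR_CPUS] = { 1024, 341 };

  /* task_util(p) > capacity(task_cpu(p)) => p is a misfit task */
  static bool task_is_misfit(unsigned long util, int cpu)
  {
          return util > capacity[cpu];
  }

  /* Pick a CPU with more capacity than the task's current one, if any */
  static int find_bigger_cpu(int src_cpu)
  {
          int cpu, best = -1;

          for (cpu = 0; cpu < NR_CPUS; cpu++)
                  if (capacity[cpu] > capacity[src_cpu] &&
                      (best < 0 || capacity[cpu] > capacity[best]))
                          best = cpu;
          return best;
  }

  int main(void)
  {
          unsigned long util = 600; /* the task outgrew CPU1's capacity */
          int cpu = 1;

          if (task_is_misfit(util, cpu)) {
                  int dst = find_bigger_cpu(cpu);

                  if (dst >= 0)
                          printf("misfit task: migrate CPU%d -> CPU%d\n", cpu, dst);
          }
          return 0;
  }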
5.2 RT
------

5.2.1 Wakeup CPU selection
~~~~~~~~~~~~~~~~~~~~~~~~~~

RT task wakeup CPU selection searches for a CPU that satisfies::

  task_uclamp_min(p) <= capacity(cpu)

while still following the usual priority constraints. If none of the candidate
CPUs can satisfy this capacity criterion, then strict priority based scheduling
is followed and CPU capacities are ignored.
5.3 DL
------

5.3.1 Wakeup CPU selection
~~~~~~~~~~~~~~~~~~~~~~~~~~

DL task wakeup CPU selection searches for a CPU that satisfies::

  task_bandwidth(p) < capacity(task_cpu(p))

while still respecting the usual bandwidth and deadline constraints. If
none of the candidate CPUs can satisfy this capacity criterion, then the
task will remain on its current CPU.
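
The bandwidth of a DL task is, roughly, its runtime over its period. The sketch
below (illustrative only, with capacities on the usual 1024 scale) evaluates
the above criterion for a hypothetical 6ms/10ms task against a big and a LITTLE
CPU:

.. code-block:: c

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024

  /* task_bandwidth(p) ~ runtime / period, scaled to capacity units */
  static unsigned long task_bandwidth(unsigned long long runtime_ns,
                                      unsigned long long period_ns)
  {
          return (unsigned long)(runtime_ns * SCHED_CAPACITY_SCALE / period_ns);
  }

  int main(void)
  {
          /* Hypothetical DL task: 6ms of runtime every 10ms */
          unsigned long bw = task_bandwidth(6000000ULL, 10000000ULL);
          const unsigned long big = SCHED_CAPACITY_SCALE;
          const unsigned long little = 341;

          printf("bandwidth = %lu\n", bw);
          printf("fits big CPU?    %d\n", bw < big);    /* 614 < 1024 */
          printf("fits LITTLE CPU? %d\n", bw < little); /* 614 is not < 341 */
          return 0;
  }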