This document provides options for those wishing to keep their
memory-ordering lives simple, as is necessary for those whose domain
is complex. After all, there are bugs other than memory-ordering bugs,
and the time spent gaining memory-ordering knowledge is not available
for gaining domain knowledge. Furthermore, the Linux-kernel memory
model (LKMM) is quite complex, with subtle differences in code often
having dramatic effects on correctness.

The options near the beginning of this list are quite simple. The idea
is not that kernel hackers don't already know about them, but rather
that they might need the occasional reminder.

Please note that this is a generic guide, and that specific subsystems
will often have special requirements or idioms. For example, developers
of MMIO-based device drivers will often need to use mb(), rmb(), and
wmb(), and therefore might find smp_mb(), smp_rmb(), and smp_wmb()
to be more natural than smp_load_acquire() and smp_store_release().
On the other hand, those coming in from other environments will likely
be more familiar with these last two.

Single-threaded code
====================

In single-threaded code, there is no reordering, at least assuming
that your toolchain and hardware are working correctly. In addition,
it is generally a mistake to assume your code will only run in a
single-threaded context, as the kernel can enter the same code path on
multiple CPUs at the same time. One important exception is a function
that makes no external data references.

In the general case, you will need to take explicit steps to ensure that
your code really is executed within a single thread that does not access
shared variables. A simple way to achieve this is to define a global lock
that you acquire at the beginning of your code and release at the end,
taking care to ensure that all references to your code's shared data are
also carried out under that same lock. Because only one thread can hold
this lock at a given time, your code will be executed single-threaded.
This approach is called "code locking".
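
For example, here is a minimal code-locking sketch. All names are
hypothetical, and error handling is omitted:

	static DEFINE_MUTEX(foo_lock);
	static int foo_count;	/* Accessed only while holding foo_lock. */

	void foo_inc(void)
	{
		mutex_lock(&foo_lock);
		foo_count++;	/* Effectively single-threaded. */
		mutex_unlock(&foo_lock);
	}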

Code locking can severely limit both performance and scalability, so it
should be used with caution, and only on code paths that execute rarely.
After all, a huge amount of effort was required to remove the Linux
kernel's old "Big Kernel Lock", so let's please be very careful about
adding new "little kernel locks".

One of the advantages of locking is that, in happy contrast with the
year 1981, almost all kernel developers are very familiar with locking.
The Linux kernel's lockdep (CONFIG_PROVE_LOCKING=y) is very helpful with
the formerly feared deadlock scenarios.

Please use the standard locking primitives provided by the kernel rather
than rolling your own. For one thing, the standard primitives interact
properly with lockdep. For another thing, these primitives have been
tuned to deal better with high contention. And for one final thing, it is
surprisingly hard to correctly code production-quality lock acquisition
and release functions. After all, even simple non-production-quality
locking functions must carefully prevent both the CPU and the compiler
from moving code in either direction across the locking function.

Despite the scalability limitations of single-threaded code, RCU
takes this approach for much of its grace-period processing and also
for early-boot operation. The reason RCU is able to scale despite
single-threaded grace-period processing is use of batching, where all
updates that accumulated during one grace period are handled by the
next one. In other words, slowing down grace-period processing makes
it more efficient. Nor is RCU unique: Similar batching optimizations
are used in many I/O operations.

Packaged code
=============

Even if performance and scalability concerns prevent your code from
being completely single-threaded, it is often possible to use library
functions that handle the concurrency nearly or entirely on their own.
This approach delegates any LKMM worries to the library maintainer.

In the kernel, what is the "library"? Quite a bit. It includes the
contents of the lib/ directory and much of the include/linux/ directory,
along with a lot of other heavily used APIs. Notable examples include
the list macros (for example, include/linux/{,rcu}list.h), workqueues,
smp_call_function(), and the various hash tables and search trees.

Data locking
============

With code locking, we use single-threaded code execution to guarantee
serialized access to the data that the code is accessing. However,
we can also achieve this by instead associating the lock with specific
instances of the data structures. This creates a "critical section"
in the code execution that will execute as though it is single-threaded.
By placing all the accesses and modifications to a shared data structure
inside a critical section, we ensure that the execution context that
holds the lock has exclusive access to the shared data.

The poster boy for this approach is the hash table, where placing a lock
in each hash bucket allows operations on different buckets to proceed
concurrently. This works because the buckets do not overlap with each
other, so that an operation on one bucket does not interfere with any
other bucket.
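
Here is a sketch of such a table, with one lock per bucket. The names
and structure layout are illustrative only, and the bucket locks are
assumed to have been initialized with spin_lock_init():

	struct foo {
		struct hlist_node node;
		/* ... per-element data ... */
	};

	struct bucket {
		spinlock_t lock;
		struct hlist_head chain;
	};
	static struct bucket foo_table[FOO_TABLE_SIZE];

	void foo_table_add(struct foo *fp, unsigned int hash)
	{
		struct bucket *b = &foo_table[hash % FOO_TABLE_SIZE];

		spin_lock(&b->lock);	/* Serializes this bucket only. */
		hlist_add_head(&fp->node, &b->chain);
		spin_unlock(&b->lock);
	}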

As the number of buckets increases, data locking scales naturally.
In particular, if the amount of data increases with the number of CPUs,
increasing the number of buckets as the number of CPUs increases results
in a naturally scalable data structure.

Per-CPU processing
==================

Partitioning processing and data over CPUs allows each CPU to take
a single-threaded approach while providing excellent performance and
scalability. Of course, there is no free lunch: The dark side of this
excellence is substantially increased memory footprint.

In addition, it is sometimes necessary to update some global view of
this processing and data, in which case something like locking must
be used to protect this global view. This is the approach taken
by the percpu_counter infrastructure. In many cases, there are already
generic/library variants of commonly used per-CPU constructs available.
Please use them rather than rolling your own.
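
For example, here is a sketch of percpu_counter usage, with a
hypothetical counter name and with error handling omitted:

	static struct percpu_counter nr_foo;
	s64 total;

	percpu_counter_init(&nr_foo, 0, GFP_KERNEL); /* Once, at init time. */
	percpu_counter_inc(&nr_foo);		/* Fast per-CPU update. */
	total = percpu_counter_sum(&nr_foo);	/* Slow but accurate read-out. */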

RCU uses DEFINE_PER_CPU*() declarations to create a number of per-CPU
data sets. For example, each CPU does private quiescent-state processing
within its instance of the per-CPU rcu_data structure, and then uses data
locking to report quiescent states up the grace-period combining tree.
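
In the same vein, a minimal DEFINE_PER_CPU() sketch, with a
hypothetical structure and function, might look like this:

	struct foo_stats {
		unsigned long nr_events;
	};
	static DEFINE_PER_CPU(struct foo_stats, foo_stats);

	void foo_record_event(void)
	{
		this_cpu_inc(foo_stats.nr_events); /* Update this CPU's copy. */
	}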

Packaged primitives: Sequence locking
=====================================

Lockless programming is considered by many to be more difficult than
lock-based programming, but there are a few lockless design patterns that
have been built out into an API. One of these APIs is sequence locking.
Although this API can be used in extremely complex ways, there are simple
and effective ways of using it that avoid the need to pay attention to
memory ordering.

The basic keep-things-simple rule for sequence locking is "do not write
in read-side code". Yes, you can do writes from within sequence-locking
readers, but it won't be so simple. For example, such writes will be
lockless and should be idempotent.
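
For example, here is a sketch of a sequence-locking reader that follows
this rule. The data and function names are hypothetical, and the
corresponding updater is assumed to modify foo_a and foo_b between
write_seqlock() and write_sequnlock():

	static DEFINE_SEQLOCK(foo_seq);
	static u64 foo_a, foo_b;

	u64 foo_read_sum(void)
	{
		unsigned int seq;
		u64 sum;

		do {
			seq = read_seqbegin(&foo_seq);
			sum = foo_a + foo_b;	/* Reads only, no writes. */
		} while (read_seqretry(&foo_seq, seq));
		return sum;
	}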

For more sophisticated use cases, LKMM can guide you, including use
cases involving combining sequence locking with other synchronization
primitives. (LKMM does not yet know about sequence locking, so it is
currently necessary to open-code it in your litmus tests.)

Additional information may be found in include/linux/seqlock.h.

Packaged primitives: RCU
========================

Another lockless design pattern that has been baked into an API
is RCU. The Linux kernel makes sophisticated use of RCU, but the
keep-things-simple rules for RCU are "do not write in read-side code",
"do not update anything that is visible to and accessed by readers",
and "protect updates with locking".

These rules are illustrated by the functions foo_update_a() and
foo_get_a() shown in Documentation/RCU/whatisRCU.rst. Additional
RCU usage patterns may be found in Documentation/RCU and in the
source code.
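
To give the flavor of these rules, here is a reader in the style of
that document's foo_get_a(); see whatisRCU.rst itself for the
authoritative version and for the matching lock-protected updater:

	struct foo {
		int a;
	};
	static struct foo __rcu *gbl_foo;

	int foo_get_a(void)
	{
		int retval;

		rcu_read_lock();
		retval = rcu_dereference(gbl_foo)->a; /* Read-only access. */
		rcu_read_unlock();
		return retval;
	}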

Packaged primitives: Atomic operations
======================================

Back in the day, the Linux kernel had three types of atomic operations:

1.	Initialization and read-out, such as atomic_set() and atomic_read().

2.	Operations that did not return a value and provided no ordering,
	such as atomic_inc() and atomic_dec().

3.	Operations that returned a value and provided full ordering, such as
	atomic_add_return() and atomic_dec_and_test(). Note that some
	value-returning operations provide full ordering only conditionally.
	For example, cmpxchg() provides ordering only upon success.

More recent kernels have operations that return a value but do not
provide full ordering. These are flagged with either a _relaxed()
suffix (providing no ordering), or an _acquire() or _release() suffix
(providing limited ordering).
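
For example, here is one of each of the variants discussed above,
applied to a hypothetical counter:

	static atomic_t ctr = ATOMIC_INIT(0);
	int v;

	atomic_inc(&ctr);			/* No value, no ordering. */
	v = atomic_inc_return(&ctr);		/* Value, full ordering. */
	v = atomic_inc_return_relaxed(&ctr);	/* Value, no ordering. */
	v = atomic_fetch_add_acquire(1, &ctr);	/* Value, acquire ordering. */
	v = atomic_fetch_add_release(1, &ctr);	/* Value, release ordering. */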

Additional information may be found in these files:

Documentation/atomic_t.txt
Documentation/atomic_bitops.txt
Documentation/core-api/refcount-vs-atomic.rst

Reading code using these primitives is often also quite helpful.

Lockless, fully ordered
=======================

When using locking, there often comes a time when it is necessary
to access some variable or another without holding the data lock
that serializes access to that variable.

If you want to keep things simple, use the initialization and read-out
operations from the previous section only when there are no racing
accesses. Otherwise, use only fully ordered operations when accessing
or modifying the variable. This approach guarantees that code prior
to a given access to that variable will be seen by all CPUs as having
happened before any code following any later access to that same variable.
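
For example, in this hypothetical sketch, value-returning atomic
read-modify-write operations provide the full ordering. Note that
even an addition of zero is fully ordered, which allows it to serve
as a fully ordered read:

	static atomic_t foo_gate;
	int cur;

	atomic_inc_return(&foo_gate);		/* Fully ordered update. */
	cur = atomic_add_return(0, &foo_gate);	/* Fully ordered read-out. */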

Please note that per-CPU functions are not atomic operations and
hence they do not provide any ordering guarantees at all.

If the lockless accesses are frequently executed reads that are used
only for heuristics, or if they are frequently executed writes that
are used only for statistics, please see the next section.

Lockless statistics and heuristics
==================================

Unordered primitives such as atomic_read(), atomic_set(), READ_ONCE(), and
WRITE_ONCE() can safely be used in some cases. These primitives provide
no ordering, but they do prevent the compiler from carrying out a number
of destructive optimizations (for which please see the next section).

One example use for these primitives is statistics, such as per-CPU
counters exemplified by the rt_cache_stat structure's routing-cache
statistics counters. Another example use case is heuristics, such as
the jiffies_till_first_fqs and jiffies_till_next_fqs kernel parameters
controlling how often RCU scans for idle CPUs.
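
A sketch of a statistics counter in this style, with hypothetical
names, might look like this:

	static unsigned long foo_hits;	/* Statistics only. */
	unsigned long hits;

	/* Update: racy, and an occasional lost count is tolerable. */
	WRITE_ONCE(foo_hits, READ_ONCE(foo_hits) + 1);

	/* Read-out: the value may be stale by the time it is used. */
	hits = READ_ONCE(foo_hits);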

But be careful. "Unordered" really does mean "unordered". It is all
too easy to assume ordering, and this assumption must be avoided when
using these primitives.

Don't let the compiler trip you up
==================================

It can be quite tempting to use plain C-language accesses for lockless
loads from and stores to shared variables. Although this is both
possible and quite common in the Linux kernel, it does require a
surprising amount of analysis, care, and knowledge about the compiler.
Yes, some decades ago it was not unfair to consider a C compiler to be
an assembler with added syntax and better portability, but the advent of
sophisticated optimizing compilers means that those days are long gone.

Today's optimizing compilers can profoundly rewrite your code during the
translation process, and have long been ready, willing, and able to do so.
Therefore, if you really need to use C-language assignments instead of
READ_ONCE(), WRITE_ONCE(), and so on, you will need to have a very good
understanding of both the C standard and your compiler. Here are some
introductory references and some tooling to start you on this noble quest:

Who's afraid of a big bad optimizing compiler?
	https://lwn.net/Articles/793253/
Calibrating your fear of big bad optimizing compilers
	https://lwn.net/Articles/799218/
Concurrency bugs should fear the big bad data-race detector (part 1)
	https://lwn.net/Articles/816850/
Concurrency bugs should fear the big bad data-race detector (part 2)
	https://lwn.net/Articles/816854/
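
As one classic illustration of the danger, consider this hypothetical
polling loop, which the compiler is permitted to transform into an
infinite loop by loading need_to_stop only once:

	while (!need_to_stop)			/* BUGGY: load may be hoisted. */
		do_something();

	while (!READ_ONCE(need_to_stop))	/* Forces a load per iteration. */
		do_something();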

More complex use cases
======================

If the alternatives above do not do what you need, please look at the
recipes-pairs.txt file to peel off the next layer of the memory-ordering
onion.