MARKING SHARED-MEMORY ACCESSES
==============================

This document provides guidelines for marking intentionally concurrent
normal accesses to shared memory, that is, "normal" as in accesses that
do not use read-modify-write atomic operations.  It also describes how to
document these accesses, both with comments and with special assertions
processed by the Kernel Concurrency Sanitizer (KCSAN).  This discussion
builds on an earlier LWN article [1].


ACCESS-MARKING OPTIONS
======================

The Linux kernel provides the following access-marking options:

1.  Plain C-language accesses (unmarked), for example, "a = b;"

2.  Data-race marking, for example, "data_race(a = b);"

3.  READ_ONCE(), for example, "a = READ_ONCE(b);"
    The various forms of atomic_read() also fit in here.

4.  WRITE_ONCE(), for example, "WRITE_ONCE(a, b);"
    The various forms of atomic_set() also fit in here.

These may be used in combination, as shown in this admittedly improbable
example:

    WRITE_ONCE(a, b + data_race(c + d) + READ_ONCE(e));

Neither plain C-language accesses nor data_race() (#1 and #2 above) place
any sort of constraint on the compiler's choice of optimizations [2].
In contrast, READ_ONCE() and WRITE_ONCE() (#3 and #4 above) restrict the
compiler's use of code-motion and common-subexpression optimizations.
Therefore, if a given access is involved in an intentional data race,
using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
preferable to data_race(), which in turn is usually preferable to plain
C-language accesses.  It is permissible to combine #2 and #3, for
example, data_race(READ_ONCE(a)), which will both restrict compiler
optimizations and disable KCSAN diagnostics.

KCSAN will complain about many types of data races involving plain
C-language accesses, but marking all accesses involved in a given data
race with one of data_race(), READ_ONCE(), or WRITE_ONCE() will prevent
KCSAN from complaining.  Of course, lack of KCSAN complaints does not
imply correct code.  Therefore, please take a thoughtful approach
when responding to KCSAN complaints.  Churning the code base with
ill-considered additions of data_race(), READ_ONCE(), and WRITE_ONCE()
is unhelpful.

In fact, the following sections describe situations where use of
data_race() and even plain C-language accesses is preferable to
READ_ONCE() and WRITE_ONCE().

Use of the data_race() Macro
----------------------------

Here are some situations where data_race() should be used instead of
READ_ONCE() and WRITE_ONCE():

1.  Data-racy loads from shared variables whose values are used only
    for diagnostic purposes.

2.  Data-racy reads whose values are checked against a marked reload.

3.  Reads whose values feed into error-tolerant heuristics.

4.  Writes setting values that feed into error-tolerant heuristics.

Data-Racy Reads for Approximate Diagnostics

Approximate diagnostics include lockdep reports, monitoring/statistics
(including /proc and /sys output), WARN*()/BUG*() checks whose return
values are ignored, and other situations where reads from shared variables
are not an integral part of the core concurrency design.

In fact, use of data_race() instead of READ_ONCE() for these diagnostic
reads can enable better checking of the remaining accesses implementing
the core concurrency design.  For example, suppose that the core design
prevents any non-diagnostic reads from shared variable x from running
concurrently with updates to x.  Then using plain C-language writes
to x allows KCSAN to detect reads from x from within regions of code
that fail to exclude the updates.  In this case, it is important to use
data_race() for the diagnostic reads because otherwise KCSAN would give
false-positive warnings about these diagnostic reads.

If it is necessary to both restrict compiler optimizations and disable
KCSAN diagnostics, use both data_race() and READ_ONCE(), for example,
data_race(READ_ONCE(a)).

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.

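As a minimal sketch of such a diagnostic read (the counter, its lock,
and the seq_file hook shown here are hypothetical), the core design is
checked by KCSAN while the diagnostic read is exempted:

    /* nreceived is updated only while holding nreceived_lock. */
    int nreceived;
    DEFINE_SPINLOCK(nreceived_lock);

    void count_one_event(void)
    {
        spin_lock(&nreceived_lock);
        nreceived++;  /* Plain access: KCSAN checks the locking rule. */
        spin_unlock(&nreceived_lock);
    }

    /* Diagnostics only, so an occasional bogus value is harmless. */
    void show_nreceived(struct seq_file *m)
    {
        seq_printf(m, "events received: %d\n", data_race(nreceived));
    }
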
Data-Racy Reads That Are Checked Against Marked Reload

The values from some reads are not implicitly trusted.  They are instead
fed into some operation that checks the full value against a later marked
load from memory, which means that the occasional arbitrarily bogus value
is not a problem.  For example, if a bogus value is fed into cmpxchg(),
all that happens is that this cmpxchg() fails, which normally results
in a retry.  Unless the race condition that resulted in the bogus value
recurs, this retry will with high probability succeed, so no harm done.

However, please keep in mind that a data_race() load feeding into
a cmpxchg_relaxed() might still be subject to load fusing on some
architectures.  Therefore, it is best to capture the return value from
the failing cmpxchg() for the next iteration of the loop, an approach
that provides the compiler much less scope for mischievous optimizations.
Capturing the return value from cmpxchg() also saves a memory reference
in many cases.

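A minimal sketch of this capture-the-return-value pattern (the
increment_foo() function and the variable foo are illustrative only):

    int foo;

    void increment_foo(void)
    {
        int old, new, newold;

        newold = data_race(foo);  /* Checked by cmpxchg(). */
        do {
            old = newold;
            new = old + 1;
            /* On failure, reuse the value cmpxchg() returns. */
            newold = cmpxchg(&foo, old, new);
        } while (newold != old);
    }
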
In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.

Reads Feeding Into Error-Tolerant Heuristics

Values from some reads feed into heuristics that can tolerate occasional
errors.  Such reads can use data_race(), thus allowing KCSAN to focus on
the other accesses to the relevant shared variables.  But please note
that data_race() loads are subject to load fusing, which can result in
consistent errors, which in turn are quite capable of breaking heuristics.
Therefore use of data_race() should be limited to cases where some other
code (such as a barrier() call) will force the occasional reload.

Note that this use case requires that the heuristic be able to handle
any possible error.  In contrast, if the heuristic might be fatally
confused by one or more of the possible erroneous values, use READ_ONCE()
instead of data_race().

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.

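As a minimal sketch (the work loop, have_items(), process_one_item(),
and the should_abort flag are all hypothetical), a stale value here
merely delays the abort by an iteration, which this sort of design
tolerates:

    bool should_abort;

    void process_items(void)
    {
        while (have_items()) {
            process_one_item();
            /*
             * Heuristic only.  Assuming process_one_item() is
             * not inlined, the call itself forces the compiler
             * to reload should_abort on each iteration, so load
             * fusing cannot produce a permanently stale value.
             */
            if (data_race(should_abort))
                break;
        }
    }
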
Writes Setting Values Feeding Into Error-Tolerant Heuristics

The values read into error-tolerant heuristics come from somewhere,
for example, from sysfs.  This means that some code in sysfs writes
to this same variable, and these writes can also use data_race().
After all, if the heuristic can tolerate the occasional bogus value
due to compiler-mangled reads, it can also tolerate the occasional
compiler-mangled write, at least assuming that the proper value is in
place once the write completes.

Plain C-language stores can also be used for this use case.  However,
in kernels built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, this
will have the disadvantage of causing KCSAN to generate false positives
because KCSAN will have no way of knowing that the resulting data race
was intentional.

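A minimal sketch of such a write (the sysfs attribute callback and the
abort_threshold variable are hypothetical):

    int abort_threshold;  /* Read via data_race() by the heuristic. */

    static ssize_t abort_threshold_store(struct kobject *kobj,
                                         struct kobj_attribute *attr,
                                         const char *buf, size_t count)
    {
        int val;

        if (kstrtoint(buf, 0, &val))
            return -EINVAL;
        /* The heuristic tolerates a compiler-mangled store. */
        data_race(abort_threshold = val);
        return count;
    }
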
Use of Plain C-Language Accesses
--------------------------------

Here are some example situations where plain C-language accesses should
be used instead of READ_ONCE(), WRITE_ONCE(), and data_race():

1.  Accesses protected by mutual exclusion, including strict locking
    and sequence locking.

2.  Initialization-time and cleanup-time accesses.  This covers a
    wide variety of situations, including the uniprocessor phase of
    system boot, variables to be used by not-yet-spawned kthreads,
    structures not yet published to reference-counted or RCU-protected
    data structures, and the cleanup side of any of these situations.

3.  Per-CPU variables that are not accessed from other CPUs (see the
    sketch at the end of this section).

4.  Private per-task variables, including on-stack variables, some
    fields in the task_struct structure, and task-private heap data.

5.  Any other loads for which there is not supposed to be a concurrent
    store to that same variable.

6.  Any other stores for which there should be neither concurrent
    loads nor concurrent stores to that same variable.

    But note that KCSAN makes three explicit exceptions to this rule
    by default, refraining from flagging plain C-language stores:

    a.  No matter what.  You can override this default by building
        with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.

    b.  When the store writes the value already contained in
        that variable.  You can override this default by building
        with CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.

    c.  When one of the stores is in an interrupt handler and
        the other in the interrupted code.  You can override this
        default by building with CONFIG_KCSAN_INTERRUPT_WATCHER=y.

Note that it is important to use plain C-language accesses in these cases,
because doing otherwise prevents KCSAN from detecting violations of your
code's synchronization rules.

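As a minimal sketch of case 3 above (the per-CPU counter and its update
path are hypothetical), per-CPU data touched only by its owning CPU
needs no marking:

    DEFINE_PER_CPU(unsigned long, nlocal_events);

    void record_local_event(void)
    {
        unsigned long *p;

        preempt_disable();
        p = this_cpu_ptr(&nlocal_events);
        (*p)++;  /* Plain access: only this CPU uses this counter. */
        preempt_enable();
    }

Here any cross-CPU access to nlocal_events would violate the design, so
a KCSAN complaint about these plain accesses would be flagging a real bug.
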

ACCESS-DOCUMENTATION OPTIONS
============================

It is important to comment marked accesses so that people reading your
code, yourself included, are reminded of the synchronization design.
However, it is even more important to comment plain C-language accesses
that are intentionally involved in data races.  Such comments are
needed to remind people reading your code, again, yourself included,
of how the compiler has been prevented from optimizing those accesses
into concurrency bugs.

It is also possible to tell KCSAN about your synchronization design.
For example, ASSERT_EXCLUSIVE_ACCESS(foo) tells KCSAN that any
concurrent access to variable foo by any other CPU is an error, even
if that concurrent access is marked with READ_ONCE().  In addition,
ASSERT_EXCLUSIVE_WRITER(foo) tells KCSAN that although it is OK for there
to be concurrent reads from foo from other CPUs, it is an error for some
other CPU to be concurrently writing to foo, even if that concurrent
write is marked with data_race() or WRITE_ONCE().

Note that although KCSAN will call out data races involving either
ASSERT_EXCLUSIVE_ACCESS() or ASSERT_EXCLUSIVE_WRITER() on the one hand
and data_race() writes on the other, KCSAN will not report the location
of these data_race() writes.


EXAMPLES
========

As noted earlier, the goal is to prevent the compiler from destroying
your concurrent algorithm, to help the human reader, and to inform
KCSAN of aspects of your concurrency design.  This section looks at a
few examples showing how this can be done.


Lock Protection With Lockless Diagnostic Access
-----------------------------------------------

For example, suppose a shared variable "foo" is read only while a
reader-writer spinlock is read-held, written only while that same
spinlock is write-held, except that it is also read locklessly for
diagnostic purposes.  The code might look as follows:

    int foo;
    DEFINE_RWLOCK(foo_rwlock);

    void update_foo(int newval)
    {
        write_lock(&foo_rwlock);
        foo = newval;
        do_something(newval);
        write_unlock(&foo_rwlock);
    }

    int read_foo(void)
    {
        int ret;

        read_lock(&foo_rwlock);
        do_something_else();
        ret = foo;
        read_unlock(&foo_rwlock);
        return ret;
    }

    void read_foo_diagnostic(void)
    {
        pr_info("Current value of foo: %d\n", data_race(foo));
    }

The reader-writer lock prevents the compiler from introducing concurrency
bugs into any part of the main algorithm using foo, which means that
the accesses to foo within both update_foo() and read_foo() can (and
should) be plain C-language accesses.  One benefit of making them
plain C-language accesses is that KCSAN can detect any erroneous lockless
reads from or updates to foo.  The data_race() in read_foo_diagnostic()
tells KCSAN that data races are expected, and should be silently
ignored.  This data_race() also tells the human reading the code that
read_foo_diagnostic() might sometimes return a bogus value.

If it is necessary to suppress compiler optimization and also detect
buggy lockless writes, read_foo_diagnostic() can be updated as follows:

    void read_foo_diagnostic(void)
    {
        pr_info("Current value of foo: %d\n", data_race(READ_ONCE(foo)));
    }

Alternatively, given that KCSAN is to ignore all accesses in this
function, this function can be marked __no_kcsan and the data_race()
can be dropped:

    void __no_kcsan read_foo_diagnostic(void)
    {
        pr_info("Current value of foo: %d\n", READ_ONCE(foo));
    }

However, in order for KCSAN to detect buggy lockless writes, your kernel
must be built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.  If you
need KCSAN to detect such a write even if that write did not change
the value of foo, you also need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.
If you need KCSAN to detect such a write happening in an interrupt handler
running on the same CPU doing the legitimate lock-protected write, you
also need CONFIG_KCSAN_INTERRUPT_WATCHER=y.  With some or all of these
Kconfig options set properly, KCSAN can be quite helpful, although
it is not necessarily a full replacement for hardware watchpoints.
On the other hand, neither are hardware watchpoints a full replacement
for KCSAN because it is not always easy to tell hardware watchpoints to
conditionally trap on accesses.

Lock-Protected Writes With Lockless Reads
-----------------------------------------

For another example, suppose a shared variable "foo" is updated only
while holding a spinlock, but is read locklessly.  The code might look
as follows:

    int foo;
    DEFINE_SPINLOCK(foo_lock);

    void update_foo(int newval)
    {
        spin_lock(&foo_lock);
        WRITE_ONCE(foo, newval);
        ASSERT_EXCLUSIVE_WRITER(foo);
        do_something(newval);
        spin_unlock(&foo_lock);
    }

    int read_foo(void)
    {
        do_something_else();
        return READ_ONCE(foo);
    }

Because foo is read locklessly, all accesses are marked.  The purpose
of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
concurrent lockless write.

Lock-Protected Writes With Heuristic Lockless Reads
---------------------------------------------------

For another example, suppose that the code can normally make use of
a per-data-structure lock, but there are times when a global lock
is required.  These times are indicated via a global flag.  The code
might look as follows, and is based loosely on nf_conntrack_lock(),
nf_conntrack_all_lock(), and nf_conntrack_all_unlock():

    bool global_flag;
    DEFINE_SPINLOCK(global_lock);
    struct foo {
        spinlock_t f_lock;
        int f_data;
    };

    /* All foo structures are in the following array. */
    int nfoo;
    struct foo *foo_array;

    void do_something_locked(struct foo *fp)
    {
        /* This works even if data_race() returns nonsense. */
        if (!data_race(global_flag)) {
            spin_lock(&fp->f_lock);
            if (!smp_load_acquire(&global_flag)) {
                do_something(fp);
                spin_unlock(&fp->f_lock);
                return;
            }
            spin_unlock(&fp->f_lock);
        }
        spin_lock(&global_lock);
        /* global_lock held, thus global flag cannot be set. */
        spin_lock(&fp->f_lock);
        spin_unlock(&global_lock);
        /*
         * global_flag might be set here, but begin_global()
         * will wait for ->f_lock to be released.
         */
        do_something(fp);
        spin_unlock(&fp->f_lock);
    }

    void begin_global(void)
    {
        int i;

        spin_lock(&global_lock);
        WRITE_ONCE(global_flag, true);
        for (i = 0; i < nfoo; i++) {
            /*
             * Wait for pre-existing local locks.  One at
             * a time to avoid lockdep limitations.
             */
            spin_lock(&foo_array[i].f_lock);
            spin_unlock(&foo_array[i].f_lock);
        }
    }

    void end_global(void)
    {
        smp_store_release(&global_flag, false);
        spin_unlock(&global_lock);
    }

All code paths leading from the do_something_locked() function's first
read from global_flag acquire a lock, so endless load fusing cannot
happen.

If the value read from global_flag is false, then global_flag is
rechecked while holding ->f_lock, which, if global_flag is still false,
prevents begin_global() from completing.  It is therefore safe to invoke
do_something().

Otherwise, if either value read from global_flag is true, then after
global_lock is acquired global_flag must be false.  The acquisition of
->f_lock will prevent any call to begin_global() from returning, which
means that it is safe to release global_lock and invoke do_something().

For this to work, only those foo structures in foo_array[] may be passed
to do_something_locked().  The reason for this is that the synchronization
with begin_global() relies on momentarily holding the lock of each and
every foo structure.

The smp_load_acquire() and smp_store_release() are required because
changes to a foo structure between calls to begin_global() and
end_global() are carried out without holding that structure's ->f_lock.
The smp_load_acquire() and smp_store_release() ensure that the next
invocation of do_something() from do_something_locked() will see those
changes.

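As a minimal sketch of such a global update (the update_all_foo()
function is a hypothetical caller of begin_global() and end_global()):

    void update_all_foo(int newval)
    {
        int i;

        begin_global();
        /* Plain stores: begin_global() excluded all ->f_lock holders. */
        for (i = 0; i < nfoo; i++)
            foo_array[i].f_data = newval;
        end_global();
    }

Here the smp_store_release() in end_global() guarantees that these
stores are visible before global_flag appears false, and the
smp_load_acquire() in do_something_locked() guarantees that a reader
seeing false also sees the updated ->f_data values.
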
Lockless Reads and Writes
-------------------------

For another example, suppose a shared variable "foo" is both read and
updated locklessly.  The code might look as follows:

    int foo;

    int update_foo(int newval)
    {
        int ret;

        ret = xchg(&foo, newval);
        do_something(newval);
        return ret;
    }

    int read_foo(void)
    {
        do_something_else();
        return READ_ONCE(foo);
    }

Because foo is accessed locklessly, all accesses are marked.  It does
not make sense to use ASSERT_EXCLUSIVE_WRITER() in this case because
there really can be concurrent lockless writers.  KCSAN would
flag any concurrent plain C-language reads from foo, and given
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, also any concurrent plain
C-language writes to foo.

Lockless Reads and Writes, But With Single-Threaded Initialization
------------------------------------------------------------------

For yet another example, suppose that foo is initialized in a
single-threaded manner, but that a number of kthreads are then created
that locklessly and concurrently access foo.  Some snippets of this code
might look as follows:

    int foo;

    void initialize_foo(int initval, int nkthreads)
    {
        int i;

        foo = initval;
        ASSERT_EXCLUSIVE_ACCESS(foo);
        for (i = 0; i < nkthreads; i++)
            kthread_run(access_foo_concurrently, ...);
    }

    /* Called from access_foo_concurrently(). */
    int update_foo(int newval)
    {
        int ret;

        ret = xchg(&foo, newval);
        do_something(newval);
        return ret;
    }

    /* Also called from access_foo_concurrently(). */
    int read_foo(void)
    {
        do_something_else();
        return READ_ONCE(foo);
    }

The initialize_foo() function uses a plain C-language write to foo
because there are not supposed to be concurrent accesses during
initialization.  The ASSERT_EXCLUSIVE_ACCESS() call allows KCSAN to
flag buggy concurrent unmarked reads, and further allows KCSAN to
flag buggy concurrent writes, even if: (1) those writes are marked or
(2) the kernel was built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.

Checking Stress-Test Race Coverage
----------------------------------

When designing stress tests it is important to ensure that race conditions
of interest really do occur.  For example, consider the following code
fragment:

    int foo;

    int update_foo(int newval)
    {
        return xchg(&foo, newval);
    }

    int xor_shift_foo(int shift, int mask)
    {
        int old, new, newold;

        newold = data_race(foo);  /* Checked by cmpxchg(). */
        do {
            old = newold;
            new = (old << shift) ^ mask;
            newold = cmpxchg(&foo, old, new);
        } while (newold != old);
        return old;
    }

    int read_foo(void)
    {
        return READ_ONCE(foo);
    }

If it is possible for update_foo(), xor_shift_foo(), and read_foo() to be
invoked concurrently, the stress test should force this concurrency to
actually happen.  KCSAN can evaluate the stress test when the above code
is modified to read as follows:

    int foo;

    int update_foo(int newval)
    {
        ASSERT_EXCLUSIVE_ACCESS(foo);
        return xchg(&foo, newval);
    }

    int xor_shift_foo(int shift, int mask)
    {
        int old, new, newold;

        newold = data_race(foo);  /* Checked by cmpxchg(). */
        do {
            old = newold;
            new = (old << shift) ^ mask;
            ASSERT_EXCLUSIVE_ACCESS(foo);
            newold = cmpxchg(&foo, old, new);
        } while (newold != old);
        return old;
    }

    int read_foo(void)
    {
        ASSERT_EXCLUSIVE_ACCESS(foo);
        return READ_ONCE(foo);
    }

If a given stress-test run does not result in KCSAN complaints from
each possible pair of ASSERT_EXCLUSIVE_ACCESS() invocations, the
stress test needs improvement.  If the stress test were to be evaluated
on a regular basis, it would be wise to place the above instances of
ASSERT_EXCLUSIVE_ACCESS() under #ifdef so that they did not result in
false positives when not evaluating the stress test.

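For example (the CONFIG_TEST_RACE_COVERAGE Kconfig symbol is
hypothetical):

    int read_foo(void)
    {
    #ifdef CONFIG_TEST_RACE_COVERAGE
        ASSERT_EXCLUSIVE_ACCESS(foo);  /* Stress-test runs only. */
    #endif
        return READ_ONCE(foo);
    }
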

REFERENCES
==========

[1] "Concurrency bugs should fear the big bad data-race detector (part 2)"
    https://lwn.net/Articles/816854/

[2] "Who's afraid of a big bad optimizing compiler?"
    https://lwn.net/Articles/793253/