Entry/exit handling for exceptions, interrupts, syscalls and KVM
================================================================

All transitions between execution domains require state updates which are
subject to strict ordering constraints. State updates are required for the
following:

* Lockdep
* RCU / Context tracking
* Preemption counter
* Tracing
* Time accounting

The update order depends on the transition type and is explained below in
the transition type sections: `Syscalls`_, `KVM`_, `Interrupts and regular
exceptions`_, `NMI and NMI-like exceptions`_.

Non-instrumentable code - noinstr
---------------------------------

Most instrumentation facilities depend on RCU, so instrumentation is prohibited
for entry code before RCU starts watching and exit code after RCU stops
watching. In addition, many architectures must save and restore register state,
which means that (for example) a breakpoint in the breakpoint entry code would
overwrite the debug registers of the initial breakpoint.

Such code must be marked with the 'noinstr' attribute, placing that code into a
special section inaccessible to instrumentation and debug facilities. Some
functions are partially instrumentable, which is handled by marking them
noinstr and using instrumentation_begin() and instrumentation_end() to flag the
instrumentable ranges of code:

.. code-block:: c

   noinstr void entry(void)
   {
           handle_entry(); // <-- must be 'noinstr' or '__always_inline'
           ...

           instrumentation_begin();
           handle_context(); // <-- instrumentable code
           instrumentation_end();
           ...

           handle_exit(); // <-- must be 'noinstr' or '__always_inline'
   }

This allows verification of the 'noinstr' restrictions via objtool on
supported architectures.

Invoking non-instrumentable functions from instrumentable context has no
restrictions and is useful to protect e.g. state switching which would
cause malfunction if instrumented.
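
As a minimal sketch (both function names here are hypothetical), an
instrumentable function can call a noinstr helper directly, with no
annotation required at the call site:

.. code-block:: c

   /* Protects a state switch which would malfunction if instrumented. */
   noinstr void switch_critical_state(void)
   {
           ...
   }

   void instrumentable_caller(void)
   {
           /* No restrictions in this direction. */
           switch_critical_state();
   }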

All non-instrumentable entry/exit code sections before and after the RCU
state transitions must run with interrupts disabled.

Syscalls
--------

Syscall-entry code starts in assembly code and calls out into low-level C code
after establishing low-level architecture-specific state and stack frames. This
low-level C code must not be instrumented. A typical syscall handling function
invoked from low-level assembly code looks like this:

.. code-block:: c

   noinstr void syscall(struct pt_regs *regs, int nr)
   {
           arch_syscall_enter(regs);
           nr = syscall_enter_from_user_mode(regs, nr);

           instrumentation_begin();
           if (!invoke_syscall(regs, nr) && nr != -1)
                   result_reg(regs) = __sys_ni_syscall(regs);
           instrumentation_end();

           syscall_exit_to_user_mode(regs);
   }

syscall_enter_from_user_mode() first invokes enter_from_user_mode() which
establishes state in the following order:

* Lockdep
* RCU / Context tracking
* Tracing

and then invokes the various entry work functions like ptrace, seccomp, audit,
syscall tracing, etc. After all that is done, the instrumentable invoke_syscall
function can be invoked. The instrumentable code section then ends, after which
syscall_exit_to_user_mode() is invoked.

syscall_exit_to_user_mode() handles all work which needs to be done before
returning to user space like tracing, audit, signals, task work etc. After
that it invokes exit_to_user_mode() which again handles the state
transition in the reverse order:

* Tracing
* RCU / Context tracking
* Lockdep

syscall_enter_from_user_mode() and syscall_exit_to_user_mode() are also
available as fine-grained subfunctions in cases where the architecture code
has to do extra work between the various steps. In such cases it has to
ensure that enter_from_user_mode() is called first on entry and
exit_to_user_mode() is called last on exit, as sketched below.
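
A hedged sketch of such an architecture; the extra work step and the split
work helpers carry illustrative names, while enter_from_user_mode() and
exit_to_user_mode() are the anchors described above:

.. code-block:: c

   noinstr void syscall(struct pt_regs *regs, int nr)
   {
           arch_syscall_enter(regs);

           enter_from_user_mode(regs);     // <-- must be first

           instrumentation_begin();
           arch_extra_entry_work(regs);            // <-- hypothetical
           nr = do_syscall_entry_work(regs, nr);   // <-- illustrative name
           invoke_syscall(regs, nr);
           do_syscall_exit_work(regs);             // <-- illustrative name
           instrumentation_end();

           exit_to_user_mode();            // <-- must be last
   }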

Do not nest syscalls. Nested syscalls will cause RCU and/or context tracking
to print a warning.

KVM
---

Entering or exiting guest mode is very similar to syscalls. From the host
kernel point of view the CPU goes off into user space when entering the
guest and returns to the kernel on exit.

kvm_guest_enter_irqoff() is a KVM-specific variant of exit_to_user_mode()
and kvm_guest_exit_irqoff() is the KVM variant of enter_from_user_mode().
The state operations have the same ordering.

Task work handling is done separately for guests at the boundary of the
vcpu_run() loop via xfer_to_guest_mode_handle_work() which is a subset of
the work handled on return to user space.
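
A hedged sketch of how this might fit into a host run loop; run_guest() is
hypothetical, the other helpers are the ones named above:

.. code-block:: c

   int vcpu_run(struct kvm_vcpu *vcpu)
   {
           int ret = 0;

           while (!ret) {
                   /* Handle the pending subset of return-to-user work. */
                   ret = xfer_to_guest_mode_handle_work(vcpu);
                   if (ret)
                           break;

                   local_irq_disable();
                   kvm_guest_enter_irqoff();  // variant of exit_to_user_mode()
                   run_guest(vcpu);           // <-- hypothetical
                   kvm_guest_exit_irqoff();   // variant of enter_from_user_mode()
                   local_irq_enable();
           }
           return ret;
   }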

Do not nest KVM entry/exit transitions because doing so is nonsensical.

Interrupts and regular exceptions
---------------------------------

Interrupt entry and exit handling is slightly more complex than syscalls
and KVM transitions.

If an interrupt is raised while the CPU executes in user space, the entry
and exit handling is exactly the same as for syscalls.

If the interrupt is raised while the CPU executes in kernel space, the entry
and exit handling is slightly different. RCU state is only updated when the
interrupt is raised in the context of the CPU's idle task. Otherwise, RCU will
already be watching. Lockdep and tracing have to be updated unconditionally.

irqentry_enter() and irqentry_exit() provide the implementation for this.

The architecture-specific part looks similar to syscall handling:

.. code-block:: c

   noinstr void interrupt(struct pt_regs *regs, int nr)
   {
           arch_interrupt_enter(regs);
           state = irqentry_enter(regs);

           instrumentation_begin();
           irq_enter_rcu();
           invoke_irq_handler(regs, nr);
           irq_exit_rcu();
           instrumentation_end();

           irqentry_exit(regs, state);
   }

Note that the invocation of the actual interrupt handler is within an
irq_enter_rcu() and irq_exit_rcu() pair.

irq_enter_rcu() updates the preemption count which makes in_hardirq()
return true, handles NOHZ tick state and interrupt time accounting. This
means that, up to the point where irq_enter_rcu() is invoked, in_hardirq()
returns false.

irq_exit_rcu() handles interrupt time accounting, undoes the preemption
count update and eventually handles soft interrupts and NOHZ tick state.
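
To make this ordering visible, the interrupt example above can be annotated
with the points where the preemption count and in_hardirq() change (a sketch
of the same code, not additional requirements):

.. code-block:: c

   noinstr void interrupt(struct pt_regs *regs, int nr)
   {
           state = irqentry_enter(regs);

           instrumentation_begin();
           /* in_hardirq() still returns false here */
           irq_enter_rcu();
           /* HARDIRQ_OFFSET added: in_hardirq() returns true */
           invoke_irq_handler(regs, nr);
           irq_exit_rcu();
           /* HARDIRQ_OFFSET removed; softirqs may have run in BH context */
           instrumentation_end();

           irqentry_exit(regs, state);
   }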

In theory, the preemption count could be updated in irqentry_enter(). In
practice, deferring this update to irq_enter_rcu() allows the preemption-count
code to be traced, while also maintaining symmetry with irq_exit_rcu() and
irqentry_exit(), which are described in the next paragraph. The only downside
is that the early entry code up to irq_enter_rcu() must be aware that the
preemption count has not yet been updated with the HARDIRQ_OFFSET state.

Note that irq_exit_rcu() must remove HARDIRQ_OFFSET from the preemption count
before it handles soft interrupts, whose handlers must run in BH context rather
than irq-disabled context. In addition, irqentry_exit() might schedule, which
also requires that HARDIRQ_OFFSET has been removed from the preemption count.

Even though interrupt handlers are expected to run with local interrupts
disabled, interrupt nesting is common from an entry/exit perspective. For
example, softirq handling happens within an irqentry_{enter,exit}() block with
local interrupts enabled. Also, although uncommon, nothing prevents an
interrupt handler from re-enabling interrupts.

Interrupt entry/exit code doesn't strictly need to handle reentrancy, since it
runs with local interrupts disabled. But NMIs can happen at any time, and a
lot of the entry code is shared between the two.

NMI and NMI-like exceptions
---------------------------

NMIs and NMI-like exceptions (machine checks, double faults, debug
interrupts, etc.) can hit any context and must be extra careful with
the state.

State changes for debug exceptions and machine-check exceptions depend on
whether these exceptions happened in user-space (breakpoints or watchpoints) or
in kernel mode (code patching). From user-space, they are treated like
interrupts, while from kernel mode they are treated like NMIs.

NMIs and other NMI-like exceptions handle state transitions without
distinguishing between user-mode and kernel-mode origin.

The state update on entry is handled in irqentry_nmi_enter() which updates
state in the following order:

* Preemption counter
* Lockdep
* RCU / Context tracking
* Tracing

The exit counterpart irqentry_nmi_exit() does the reverse operation in the
reverse order.

Note that the update of the preemption counter has to be the first
operation on enter and the last operation on exit. The reason is that both
lockdep and RCU rely on in_nmi() returning true in this case. The
preemption count modification in the NMI entry/exit case must not be
traced.

Architecture-specific code looks like this:

.. code-block:: c

   noinstr void nmi(struct pt_regs *regs)
   {
           arch_nmi_enter(regs);
           state = irqentry_nmi_enter(regs);

           instrumentation_begin();
           nmi_handler(regs);
           instrumentation_end();

           irqentry_nmi_exit(regs, state);
   }

and for e.g. a debug exception it can look like this:

.. code-block:: c

   noinstr void debug(struct pt_regs *regs)
   {
           arch_nmi_enter(regs);

           debug_regs = save_debug_regs();

           if (user_mode(regs)) {
                   state = irqentry_enter(regs);

                   instrumentation_begin();
                   user_mode_debug_handler(regs, debug_regs);
                   instrumentation_end();

                   irqentry_exit(regs, state);
           } else {
                   state = irqentry_nmi_enter(regs);

                   instrumentation_begin();
                   kernel_mode_debug_handler(regs, debug_regs);
                   instrumentation_end();

                   irqentry_nmi_exit(regs, state);
           }
   }

There is no combined irqentry_nmi_if_kernel() function available as the
above cannot be handled in an exception-agnostic way.

NMIs can happen in any context. For example, an NMI-like exception can be
triggered while handling an NMI. So NMI entry code has to be reentrant and
state updates need to handle nesting.