===================
Reliable Stacktrace
===================

This document outlines basic information about reliable stacktracing.

.. Table of Contents:

.. contents:: :local:

1. Introduction
===============

The kernel livepatch consistency model relies on accurately identifying which
functions may have live state and therefore may not be safe to patch. One way
to identify which functions are live is to use a stacktrace.

Existing stacktrace code may not always give an accurate picture of all
functions with live state, and best-effort approaches which can be helpful for
debugging are unsound for livepatching. Livepatching depends on architectures
to provide a *reliable* stacktrace which ensures it never omits any live
functions from a trace.

2. Requirements
===============

Architectures must implement one of the reliable stacktrace functions.
Architectures using CONFIG_ARCH_STACKWALK must implement
'arch_stack_walk_reliable', and other architectures must implement
'save_stack_trace_tsk_reliable'.

Principally, the reliable stacktrace function must ensure that either:

* The trace includes all functions that the task may be returned to, and the
  return code is zero to indicate that the trace is reliable.

* The return code is non-zero to indicate that the trace is not reliable.

.. note::
   In some cases it is legitimate to omit specific functions from the trace,
   but all other functions must be reported. These cases are described in
   further detail below.

Secondly, the reliable stacktrace function must be robust to cases where
the stack or other unwind state is corrupt or otherwise unreliable. The
function should attempt to detect such cases and return a non-zero error
code, and should not get stuck in an infinite loop or access memory in
an unsafe way. Specific cases are described in further detail below.
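
As an illustration of this contract, the following is a minimal sketch of an
'arch_stack_walk_reliable' implementation. The 'stack_trace_consume_fn' type
is the real kernel interface; 'struct unwind_state' and the 'unwind_*' helpers
are hypothetical, standing in for an architecture's own unwinder:

.. code-block:: c

   #include <linux/errno.h>
   #include <linux/sched.h>
   #include <linux/stacktrace.h>

   int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
                                void *cookie, struct task_struct *task)
   {
           struct unwind_state state;

           for (unwind_start(&state, task);
                !unwind_done(&state) && !unwind_error(&state);
                unwind_next(&state)) {
                   /* Any doubt about a frame makes the whole trace unreliable. */
                   if (!unwind_reliable(&state))
                           return -EINVAL;

                   if (!consume_entry(cookie, state.pc))
                           return -EINVAL;
           }

           /* Unwind errors and unexpected termination are both unreliable. */
           if (unwind_error(&state) || !unwind_terminated_reliably(&state))
                   return -EINVAL;

           return 0;
   }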

3. Compile-time analysis
========================

To ensure that kernel code can be correctly unwound in all cases,
architectures may need to verify that code has been compiled in a manner
expected by the unwinder. For example, an unwinder may expect that
functions manipulate the stack pointer in a limited way, or that all
functions use specific prologue and epilogue sequences. Architectures
with such requirements should verify the kernel compilation using
objtool.

In some cases, an unwinder may require metadata to correctly unwind.
Where necessary, this metadata should be generated at build time using
objtool.

4. Considerations
=================

The unwinding process varies across architectures, their respective procedure
call standards, and kernel configurations. This section describes common
details that architectures should consider.

4.1 Identifying successful termination
--------------------------------------

Unwinding may terminate early for a number of reasons, including:

* Stack or frame pointer corruption.

* Missing unwind support for an uncommon scenario, or a bug in the unwinder.

* Dynamically generated code (e.g. eBPF) or foreign code (e.g. EFI runtime
  services) not following the conventions expected by the unwinder.

To ensure that this does not result in functions being omitted from the trace,
even if not caught by other checks, it is strongly recommended that
architectures verify that a stacktrace ends at an expected location (a minimal
check is sketched after this list), e.g.

* Within a specific function that is an entry point to the kernel.

* At a specific location on a stack expected for a kernel entry point.

* On a specific stack expected for a kernel entry point (e.g. if the
  architecture has separate task and IRQ stacks).
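
A minimal sketch of such a check, continuing the hypothetical 'unwind_state'
helpers used in the earlier sketch, and assuming the architecture writes a
dummy "final" frame record at a known per-task location on kernel entry:

.. code-block:: c

   /*
    * Sketch only: terminating anywhere other than the expected final
    * frame record means frames may have been silently lost, so the
    * trace must be reported as unreliable.
    */
   static bool unwind_terminated_reliably(struct unwind_state *state)
   {
           return state->fp == state->task_final_frame;
   }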

4.2 Identifying unwindable code
-------------------------------

Unwinding typically relies on code following specific conventions (e.g.
manipulating a frame pointer), but there can be code which may not follow these
conventions and may require special handling in the unwinder, e.g.

* Exception vectors and entry assembly.

* Procedure Linkage Table (PLT) entries and veneer functions.

* Trampoline assembly (e.g. ftrace, kprobes).

* Dynamically generated code (e.g. eBPF, optprobe trampolines).

* Foreign code (e.g. EFI runtime services).

To ensure that such cases do not result in functions being omitted from a
trace, it is strongly recommended that architectures positively identify code
which is known to be reliable to unwind from, and reject unwinding from all
other code.

Kernel code including modules and eBPF can be distinguished from foreign code
using '__kernel_text_address()'. Checking for this also helps to detect stack
corruption.

There are several ways an architecture may identify kernel code which is deemed
unreliable to unwind from (a combined check is sketched after this list), e.g.

* Placing such code into special linker sections, and rejecting unwinding from
  any code in these sections.

* Identifying specific portions of code using bounds information.
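
A combined check might look like the sketch below; '__kernel_text_address()'
is the real kernel helper, while the '__unreliable_unwind_start/end' section
bounds are hypothetical markers an architecture could emit from its linker
script:

.. code-block:: c

   #include <linux/kernel.h>

   /* Hypothetical bounds of a linker section for non-unwindable code. */
   extern char __unreliable_unwind_start[], __unreliable_unwind_end[];

   static bool code_is_unwindable(unsigned long pc)
   {
           /* Rejects foreign code, and helps to catch stack corruption. */
           if (!__kernel_text_address(pc))
                   return false;

           /* Reject kernel code deemed unreliable to unwind from. */
           if (pc >= (unsigned long)__unreliable_unwind_start &&
               pc < (unsigned long)__unreliable_unwind_end)
                   return false;

           return true;
   }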

4.3 Unwinding across interrupts and exceptions
----------------------------------------------

At function call boundaries the stack and other unwind state is expected to be
in a consistent state suitable for reliable unwinding, but this may not be the
case part-way through a function. For example, during a function prologue or
epilogue a frame pointer may be transiently invalid, or during the function
body the return address may be held in an arbitrary general purpose register.
For some architectures this may change at runtime as a result of dynamic
instrumentation.

If an interrupt or other exception is taken while the stack or other unwind
state is in an inconsistent state, it may not be possible to reliably unwind,
and it may not be possible to identify whether such unwinding will be reliable.
See below for examples.

Architectures which cannot identify when it is reliable to unwind such cases
(or where it is never reliable) must reject unwinding across exception
boundaries. Note that it may be reliable to unwind across certain
exceptions (e.g. IRQ) but unreliable to unwind across other exceptions
(e.g. NMI).
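
For an architecture which cannot make that determination, the rejection might
take a shape like the sketch below; 'frame_is_exception_boundary()' and
'unwind_next()' are hypothetical helpers:

.. code-block:: c

   #include <linux/errno.h>

   /*
    * Sketch only: refuse to step across an exception boundary, e.g. when
    * the return address lies within the exception entry code, since the
    * interrupted context may have inconsistent unwind state.
    */
   static int unwind_next_reliable(struct unwind_state *state)
   {
           if (frame_is_exception_boundary(state))
                   return -EINVAL;

           return unwind_next(state);
   }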

Architectures which can identify when it is reliable to unwind such cases (or
have no such cases) should attempt to unwind across exception boundaries, as
doing so can prevent unnecessarily stalling livepatch consistency checks and
permits livepatch transitions to complete more quickly.

4.4 Rewriting of return addresses
---------------------------------

Some trampolines temporarily modify the return address of a function in order
to intercept when that function returns with a return trampoline, e.g.

* An ftrace trampoline may modify the return address so that function graph
  tracing can intercept returns.

* A kprobes (or optprobes) trampoline may modify the return address so that
  kretprobes can intercept returns.

When this happens, the original return address will not be in its usual
location. For trampolines which are not subject to live patching, where an
unwinder can reliably determine the original return address and no unwind state
is altered by the trampoline, the unwinder may report the original return
address in place of the trampoline and report this as reliable. Otherwise, an
unwinder must report these cases as unreliable.

Special care is required when identifying the original return address, as this
information is not in a consistent location for the duration of the entry
trampoline or return trampoline. For example, considering the x86_64
'return_to_handler' return trampoline:

.. code-block:: none

   SYM_CODE_START(return_to_handler)
           UNWIND_HINT_EMPTY
           subq $24, %rsp

           /* Save the return values */
           movq %rax, (%rsp)
           movq %rdx, 8(%rsp)
           movq %rbp, %rdi

           call ftrace_return_to_handler

           movq %rax, %rdi
           movq 8(%rsp), %rdx
           movq (%rsp), %rax
           addq $24, %rsp
           JMP_NOSPEC rdi
   SYM_CODE_END(return_to_handler)

While the traced function runs, its return address on the stack points to
the start of return_to_handler, and the original return address is stored in
the task's cur_ret_stack. During this time the unwinder can find the return
address using ftrace_graph_ret_addr().
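
A sketch of this recovery, loosely modelled on the x86 unwind helpers:
'ftrace_graph_ret_addr()' is the real interface (it returns the address
unchanged when it does not point at return_to_handler), while 'struct
unwind_state' and its fields are hypothetical:

.. code-block:: c

   #include <linux/ftrace.h>

   static unsigned long unwind_recover_ret_addr(struct unwind_state *state,
                                                unsigned long addr,
                                                unsigned long *retp)
   {
           /*
            * If addr points at return_to_handler, look up the original
            * return address saved on the task's fgraph return stack;
            * 'graph_idx' lets the helper track how many return stack
            * entries have already been reported.
            */
           return ftrace_graph_ret_addr(state->task, &state->graph_idx,
                                        addr, retp);
   }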

When the traced function returns to return_to_handler, there is no longer a
return address on the stack, though the original return address is still stored
in the task's cur_ret_stack. Within ftrace_return_to_handler(), the original
return address is removed from cur_ret_stack and is transiently moved
arbitrarily by the compiler before being returned in rax. The return_to_handler
trampoline moves this into rdi before jumping to it.

Architectures might not always be able to unwind such sequences, such as when
ftrace_return_to_handler() has removed the address from cur_ret_stack, and the
location of the return address cannot be reliably determined.

It is recommended that architectures unwind cases where return_to_handler has
not yet been returned to, but architectures are not required to unwind from the
middle of return_to_handler and can report this as unreliable. Architectures
are not required to unwind from other trampolines which modify the return
address.

4.5 Obscuring of return addresses
---------------------------------

Some trampolines do not rewrite the return address in order to intercept
returns, but do transiently clobber the return address or other unwind state.

For example, the x86_64 implementation of optprobes patches the probed function
with a JMP instruction which targets the associated optprobe trampoline. When
the probe is hit, the CPU will branch to the optprobe trampoline, and the
address of the probed function is not held in any register or on the stack.

Similarly, the arm64 implementation of DYNAMIC_FTRACE_WITH_REGS patches traced
functions with the following:

.. code-block:: none

   MOV X9, X30
   BL <trampoline>

The MOV saves the link register (X30) into X9 to preserve the return address
before the BL clobbers the link register and branches to the trampoline. At the
start of the trampoline, the address of the traced function is in X9 rather
than the link register as would usually be the case.

Architectures must ensure that unwinders either reliably unwind such cases, or
report the unwinding as unreliable.

4.6 Link register unreliability
-------------------------------

On some architectures, 'call' instructions place the return address into a
link register, and 'return' instructions consume the return address from the
link register without modifying the register. On these architectures software
must save the return address to the stack prior to making a function call. Over
the duration of a function call, the return address may be held in the link
register alone, on the stack alone, or in both locations.

Unwinders typically assume the link register is always live, but this
assumption can lead to unreliable stack traces. For example, consider the
following arm64 assembly for a simple function:

.. code-block:: none

   function:
           STP X29, X30, [SP, -16]!
           MOV X29, SP
           BL <other_function>
           LDP X29, X30, [SP], #16
           RET

At entry to the function, the link register (X30) points to the caller, and the
frame pointer (X29) points to the caller's frame including the caller's return
address. The first two instructions create a new stackframe and update the
frame pointer, and at this point the link register and the frame pointer both
describe this function's return address. A trace at this point may describe
this function twice, and if the function return is being traced, the unwinder
may consume two entries from the fgraph return stack rather than one entry.

The BL invokes 'other_function' with the link register pointing to this
function's LDP and the frame pointer pointing to this function's stackframe.
When 'other_function' returns, the link register is left pointing at the BL,
and so a trace at this point could result in 'function' appearing twice in the
backtrace.

Similarly, a function may deliberately clobber the LR, e.g.

.. code-block:: none

   caller:
           STP X29, X30, [SP, -16]!
           MOV X29, SP
           ADR LR, <callee>
           BLR LR
           LDP X29, X30, [SP], #16
           RET

The ADR places the address of 'callee' into the LR, before the BLR branches to
this address. If a trace is made immediately after the ADR, 'callee' will
appear to be the parent of 'caller', rather than the child.

Due to cases such as the above, it may only be possible to reliably consume a
link register value at a function call boundary. Architectures where this is
the case must reject unwinding across exception boundaries unless they can
reliably identify when the LR or stack value should be used (e.g. using
metadata generated by objtool).
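
One conservative approach is sketched below, giving illustrative definitions
of the hypothetical 'unwind_start'/'unwind_next' helpers for a link register
architecture: trust the saved LR only for the initial frame of a blocked task
(which is at a function call boundary), and take every subsequent return
address from a frame record on the stack. All names here are illustrative:

.. code-block:: c

   #include <linux/errno.h>
   #include <linux/sched.h>

   static void unwind_start(struct unwind_state *state,
                            struct task_struct *task)
   {
           /*
            * A blocked task is at a function call boundary, so its saved
            * LR is trustworthy for this first frame only.
            */
           state->fp = thread_saved_fp(task);   /* hypothetical accessors */
           state->pc = thread_saved_pc(task);
   }

   static int unwind_next(struct unwind_state *state)
   {
           /* Illustrative frame record layout: { fp, lr } at *fp. */
           unsigned long fp = state->fp;

           if (!on_accessible_stack(state, fp)) /* hypothetical bounds check */
                   return -EINVAL;

           state->fp = READ_ONCE(*(unsigned long *)fp);
           state->pc = READ_ONCE(*(unsigned long *)(fp + 8));
           return 0;
   }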