bpf: Allow to resolve bpf trampoline and dispatcher in unwind

When unwinding the stack we need to identify each address in order to
continue successfully. Add a latch tree that keeps the trampoline and
dispatcher image pages, so they can be looked up quickly during the unwind.
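
The lookup can be built on the kernel's RCU-friendly latch tree from
<linux/rbtree_latch.h>. Below is a minimal sketch of how the image pages
could be kept in such a tree; struct bpf_image, image_tree and the
image_tree_* callbacks are illustrative names for this sketch, not a
verbatim copy of the patch:

#include <linux/kernel.h>        /* container_of() */
#include <linux/mm.h>            /* PAGE_SIZE */
#include <linux/rbtree_latch.h>
#include <linux/types.h>

/* Illustrative wrapper: one page per trampoline/dispatcher image. */
struct bpf_image {
        struct latch_tree_node tnode;   /* embedded at the start of the page */
        u8 data[];                      /* generated code follows */
};

static struct latch_tree_root image_tree;

/* Order images by address; image pages never overlap. */
static bool image_tree_less(struct latch_tree_node *a, struct latch_tree_node *b)
{
        return container_of(a, struct bpf_image, tnode) <
               container_of(b, struct bpf_image, tnode);
}

/* Report whether addr falls before, inside, or after an image page. */
static int image_tree_comp(void *addr, struct latch_tree_node *n)
{
        void *image = container_of(n, struct bpf_image, tnode);

        if (addr < image)
                return -1;
        if (addr >= image + PAGE_SIZE)
                return 1;
        return 0;                       /* addr is within this image page */
}

static const struct latch_tree_ops image_tree_ops = {
        .less = image_tree_less,
        .comp = image_tree_comp,
};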

The patch uses the first 48 bytes of each image page for the latch tree
node, leaving the remaining 4048 bytes of the page for the trampoline or
dispatcher generated code.
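
On a 64-bit kernel a latch_tree_node is two rb_nodes of three words each,
which is the 48 bytes mentioned above, so with 4096-byte pages the
generated code gets the remaining 4048 bytes. A hedged sketch of the
allocation side, reusing the illustrative struct bpf_image, image_tree and
image_tree_ops from the sketch above; bpf_jit_alloc_exec() is the existing
BPF helper for executable allocations, and its use here is an assumption of
this sketch:

#include <linux/filter.h>        /* bpf_jit_alloc_exec() */
#include <linux/mutex.h>

/* Usable code bytes per image page: PAGE_SIZE minus the 48-byte header. */
#define BPF_IMAGE_SIZE  (PAGE_SIZE - sizeof(struct bpf_image))

static DEFINE_MUTEX(image_mutex);       /* serializes tree insert/erase */

/* Allocate one executable page and publish it for unwind-time lookups. */
static void *bpf_image_alloc(void)
{
        struct bpf_image *image;

        image = bpf_jit_alloc_exec(PAGE_SIZE);
        if (!image)
                return NULL;

        mutex_lock(&image_mutex);
        latch_tree_insert(&image->tnode, &image_tree, &image_tree_ops);
        mutex_unlock(&image_mutex);

        /* Callers emit trampoline/dispatcher code into these 4048 bytes. */
        return image->data;
}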

That is still enough space not to affect the maximum number of programs a
trampoline or dispatcher can hold.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200123161508.915203-3-jolsa@kernel.org
Author:    Jiri Olsa
Date:      2020-01-23 17:15:07 +01:00
Committed: Alexei Starovoitov
Parent:    84ad7a7ab6
Commit:    e9b4e606c2
4 changed files with 90 additions and 13 deletions

kernel/extable.c

@@ -131,8 +131,9 @@ int kernel_text_address(unsigned long addr)
  * triggers a stack trace, or a WARN() that happens during
  * coming back from idle, or cpu on or offlining.
  *
- * is_module_text_address() as well as the kprobe slots
- * and is_bpf_text_address() require RCU to be watching.
+ * is_module_text_address() as well as the kprobe slots,
+ * is_bpf_text_address() and is_bpf_image_address require
+ * RCU to be watching.
  */
 no_rcu = !rcu_is_watching();
@@ -148,6 +149,8 @@ int kernel_text_address(unsigned long addr)
         goto out;
     if (is_bpf_text_address(addr))
         goto out;
+    if (is_bpf_image_address(addr))
+        goto out;
     ret = 0;
 out:
     if (no_rcu)
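
For context on the hunk above: the new check can be a plain latch-tree find
done under the RCU read lock, which is why the updated comment lists it next
to is_bpf_text_address() among the helpers that need RCU to be watching. A
minimal sketch, again reusing the illustrative image_tree and image_tree_ops
from the commit-message sketches:

#include <linux/rcupdate.h>

/* True iff addr points into a registered trampoline/dispatcher image page. */
bool is_bpf_image_address(unsigned long addr)
{
        bool ret;

        rcu_read_lock();
        ret = latch_tree_find((void *)addr, &image_tree, &image_tree_ops) != NULL;
        rcu_read_unlock();

        return ret;
}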