Pull perf updates from Ingo Molnar:
"Kernel side changes:
- Intel Knights Landing support. (Harish Chegondi)
- Intel Broadwell-EP uncore PMU support. (Kan Liang)
- Core code improvements. (Peter Zijlstra)
- Event filter, LBR and PEBS fixes. (Stephane Eranian)
- Enable cycles:pp on Intel Atom. (Stephane Eranian)
- Add cycles:ppp support for Skylake. (Andi Kleen)
- Various x86 NMI overhead optimizations. (Andi Kleen)
- Intel PT enhancements. (Takao Indoh)
- AMD cache events fix. (Vince Weaver)
Tons of tooling changes:
- Show random perf tool tips in the 'perf report' bottom line
(Namhyung Kim)
- perf report now defaults to --group if the perf.data file has
grouped events; try it with:
# perf record -e '{cycles,instructions}' -a sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 1.093 MB perf.data (1247 samples) ]
# perf report
# Samples: 1K of event 'anon group { cycles, instructions }'
# Event count (approx.): 1955219195
#
# Overhead Command Shared Object Symbol
2.86% 0.22% swapper [kernel.kallsyms] [k] intel_idle
1.05% 0.33% firefox libxul.so [.] js::SetObjectElement
1.05% 0.00% kworker/0:3 [kernel.kallsyms] [k] gen6_ring_get_seqno
0.88% 0.17% chrome chrome [.] 0x0000000000ee27ab
0.65% 0.86% firefox libxul.so [.] js::ValueToId<(js::AllowGC)1>
0.64% 0.23% JS Helper libxul.so [.] js::SplayTree<js::jit::LiveRange*, js::jit::LiveRange>::splay
0.62% 1.27% firefox libxul.so [.] js::GetIterator
0.61% 1.74% firefox libxul.so [.] js::NativeSetProperty
0.61% 0.31% firefox libxul.so [.] js::SetPropertyByDefining
- Introduce the 'perf stat record/report' workflow:
Generate perf.data files from 'perf stat', to tap into the
scripting capabilities perf has, instead of defining 'perf stat'-
specific scripting support to calculate event ratios, etc.
Simple example:
$ perf stat record -e cycles usleep 1
Performance counter stats for 'usleep 1':
1,134,996 cycles
0.000670644 seconds time elapsed
$ perf stat report
Performance counter stats for '/home/acme/bin/perf stat record -e cycles usleep 1':
1,134,996 cycles
0.000670644 seconds time elapsed
$
It generates PERF_RECORD_ userspace records to store the details:
$ perf report -D | grep PERF_RECORD
0xf0 [0x28]: PERF_RECORD_THREAD_MAP nr: 1 thread: 27637
0x118 [0x12]: PERF_RECORD_CPU_MAP nr: 1 cpu: 65535
0x12a [0x40]: PERF_RECORD_STAT_CONFIG
0x16a [0x30]: PERF_RECORD_STAT
-1 -1 0x19a [0x40]: PERF_RECORD_MMAP -1/0: [0xffffffff81000000(0x1f000000) @ 0xffffffff81000000]: x [kernel.kallsyms]_text
0x1da [0x18]: PERF_RECORD_STAT_ROUND
[acme@ssdandy linux]$
An effort was made to ensure that perf.data files generated like
this do not produce cryptic messages when processed by older tools.
The 'perf script' bits need rebasing, will go up later.
- Make command line options always available, even when they depend
on some feature being enabled, warning the user about use of such
options (Wang Nan)
- Support hw breakpoint events (mem:0xAddress) in the default output
mode in 'perf script' (Wang Nan)
- Fixes and improvements for annotating ARM binaries, supporting
ARM call and jump instructions; more work is needed to have
arch-specific stuff separated into tools/perf/arch/*/annotate/
(Russell King)
- Add initial 'perf config' command, for now just with a --list
option to list the contents of the configuration file in use and a
basic man page describing its format; commands for doing edits and
detailed documentation are being reviewed and proof-read. (Taeung
Song)
- Allow BPF scriptlets to specify arguments to be fetched using
DWARF info, using a prologue generated at compile/build time (He
Kuang, Wang Nan)
- Allow attaching BPF scriptlets to module symbols (Wang Nan)
- Allow attaching BPF scriptlets to userspace code using uprobe (Wang
Nan)
- BPF programs can now specify 'perf probe' tunables via their
section name, separating key=val values using semicolons (Wang Nan)
Testing some of these new BPF features:
Use case: get callchains when receiving SSL packets, filter them in
the kernel, at an arbitrary place.
# cat ssl.bpf.c
#define SEC(NAME) __attribute__((section(NAME), used))
struct pt_regs;
SEC("func=__inet_lookup_established hnum")
int func(struct pt_regs *ctx, int err, unsigned short port)
{
return err == 0 && port == 443;
}
char _license[] SEC("license") = "GPL";
int _version SEC("version") = LINUX_VERSION_CODE;
#
# perf record -a -g -e ssl.bpf.c
^C[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.787 MB perf.data (3 samples) ]
# perf script | head -30
swapper 0 [000] 58783.268118: perf_bpf_probe:func: (ffffffff816a0f60) hnum=0x1bb
8a0f61 __inet_lookup_established (/lib/modules/4.3.0+/build/vmlinux)
896def ip_rcv_finish (/lib/modules/4.3.0+/build/vmlinux)
8976c2 ip_rcv (/lib/modules/4.3.0+/build/vmlinux)
855eba __netif_receive_skb_core (/lib/modules/4.3.0+/build/vmlinux)
8565d8 __netif_receive_skb (/lib/modules/4.3.0+/build/vmlinux)
8572a8 process_backlog (/lib/modules/4.3.0+/build/vmlinux)
856b11 net_rx_action (/lib/modules/4.3.0+/build/vmlinux)
2a284b __do_softirq (/lib/modules/4.3.0+/build/vmlinux)
2a2ba3 irq_exit (/lib/modules/4.3.0+/build/vmlinux)
96b7a4 do_IRQ (/lib/modules/4.3.0+/build/vmlinux)
969807 ret_from_intr (/lib/modules/4.3.0+/build/vmlinux)
2dede5 cpu_startup_entry (/lib/modules/4.3.0+/build/vmlinux)
95d5bc rest_init (/lib/modules/4.3.0+/build/vmlinux)
1163ffa start_kernel ([kernel.vmlinux].init.text)
11634d7 x86_64_start_reservations ([kernel.vmlinux].init.text)
1163623 x86_64_start_kernel ([kernel.vmlinux].init.text)
qemu-system-x86 9178 [003] 58785.792417: perf_bpf_probe:func: (ffffffff816a0f60) hnum=0x1bb
8a0f61 __inet_lookup_established (/lib/modules/4.3.0+/build/vmlinux)
896def ip_rcv_finish (/lib/modules/4.3.0+/build/vmlinux)
8976c2 ip_rcv (/lib/modules/4.3.0+/build/vmlinux)
855eba __netif_receive_skb_core (/lib/modules/4.3.0+/build/vmlinux)
8565d8 __netif_receive_skb (/lib/modules/4.3.0+/build/vmlinux)
856660 netif_receive_skb_internal (/lib/modules/4.3.0+/build/vmlinux)
8566ec netif_receive_skb_sk (/lib/modules/4.3.0+/build/vmlinux)
430a br_handle_frame_finish ([bridge])
48bc br_handle_frame ([bridge])
855f44 __netif_receive_skb_core (/lib/modules/4.3.0+/build/vmlinux)
8565d8 __netif_receive_skb (/lib/modules/4.3.0+/build/vmlinux)
#
- Use 'perf probe' various options to list functions, see what
variables can be collected at any given point, experiment first
collecting without a filter, then filter, use it together with
'perf trace', 'perf top', with or without callchains, if it
explodes, please tell us!
- Introduce a new callchain mode: "folded", which lists one-line
representations of all callchains for a given histogram entry,
facilitating 'perf report' output processing by other tools, such
as Brendan Gregg's flamegraph tools (Namhyung Kim)
E.g:
# perf report | grep -v ^# | head
18.37% 0.00% swapper [kernel.kallsyms] [k] cpu_startup_entry
|
---cpu_startup_entry
|
|--12.07%--start_secondary
|
--6.30%--rest_init
start_kernel
x86_64_start_reservations
x86_64_start_kernel
#
Becomes, in "folded" mode:
# perf report -g folded | grep -v ^# | head -5
18.37% 0.00% swapper [kernel.kallsyms] [k] cpu_startup_entry
12.07% cpu_startup_entry;start_secondary
6.30% cpu_startup_entry;rest_init;start_kernel;x86_64_start_reservations;x86_64_start_kernel
16.90% 0.00% swapper [kernel.kallsyms] [k] call_cpuidle
11.23% call_cpuidle;cpu_startup_entry;start_secondary
5.67% call_cpuidle;cpu_startup_entry;rest_init;start_kernel;x86_64_start_reservations;x86_64_start_kernel
16.90% 0.00% swapper [kernel.kallsyms] [k] cpuidle_enter
11.23% cpuidle_enter;call_cpuidle;cpu_startup_entry;start_secondary
5.67% cpuidle_enter;call_cpuidle;cpu_startup_entry;rest_init;start_kernel;x86_64_start_reservations;x86_64_start_kernel
15.12% 0.00% swapper [kernel.kallsyms] [k] cpuidle_enter_state
#
The user can also select one of "count", "period" or "percent" as
the first column.
... and lots of infrastructure enhancements, plus fixes and other
changes, features I failed to list - see the shortlog and the git log
for details"
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (271 commits)
perf evlist: Add --trace-fields option to show trace fields
perf record: Store data mmaps for dwarf unwind
perf libdw: Check for mmaps also in MAP__VARIABLE tree
perf unwind: Check for mmaps also in MAP__VARIABLE tree
perf unwind: Use find_map function in access_dso_mem
perf evlist: Remove perf_evlist__(enable|disable)_event functions
perf evlist: Make perf_evlist__open() open evsels with their cpus and threads (like perf record does)
perf report: Show random usage tip on the help line
perf hists: Export a couple of hist functions
perf diff: Use perf_hpp__register_sort_field interface
perf tools: Add overhead/overhead_children keys defaults via string
perf tools: Remove list entry from struct sort_entry
perf tools: Include all tools/lib directory for tags/cscope/TAGS targets
perf script: Align event name properly
perf tools: Add missing headers in perf's MANIFEST
perf tools: Do not show trace command if it's not compiled in
perf report: Change default to use event group view
perf top: Decay periods in callchains
tools lib: Move bitmap.[ch] from tools/perf/ to tools/{lib,include}/
tools lib: Sync tools/lib/find_bit.c with the kernel
...
#ifndef _LINUX_TRACEPOINT_H
#define _LINUX_TRACEPOINT_H

/*
 * Kernel Tracepoint API.
 *
 * See Documentation/trace/tracepoints.txt.
 *
 * Copyright (C) 2008-2014 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 *
 * Heavily inspired from the Linux Kernel Markers.
 *
 * This file is released under the GPLv2.
 * See the file COPYING for more details.
 */

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/rcupdate.h>
#include <linux/tracepoint-defs.h>

struct module;
struct tracepoint;
struct notifier_block;

struct trace_enum_map {
	const char	*system;
	const char	*enum_string;
	unsigned long	enum_value;
};

#define TRACEPOINT_DEFAULT_PRIO	10

extern int
tracepoint_probe_register(struct tracepoint *tp, void *probe, void *data);
extern int
tracepoint_probe_register_prio(struct tracepoint *tp, void *probe, void *data,
			       int prio);
extern int
tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data);
extern void
for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv),
		void *priv);

#ifdef CONFIG_MODULES
struct tp_module {
	struct list_head list;
	struct module *mod;
};

bool trace_module_has_bad_taint(struct module *mod);
extern int register_tracepoint_module_notifier(struct notifier_block *nb);
extern int unregister_tracepoint_module_notifier(struct notifier_block *nb);
#else
static inline bool trace_module_has_bad_taint(struct module *mod)
{
	return false;
}
static inline
int register_tracepoint_module_notifier(struct notifier_block *nb)
{
	return 0;
}
static inline
int unregister_tracepoint_module_notifier(struct notifier_block *nb)
{
	return 0;
}
#endif /* CONFIG_MODULES */

/*
 * tracepoint_synchronize_unregister must be called between the last tracepoint
 * probe unregistration and the end of module exit to make sure there is no
 * caller executing a probe when it is freed.
 */
static inline void tracepoint_synchronize_unregister(void)
{
	synchronize_sched();
}

#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
extern void syscall_regfunc(void);
extern void syscall_unregfunc(void);
#endif /* CONFIG_HAVE_SYSCALL_TRACEPOINTS */

#define PARAMS(args...) args

#define TRACE_DEFINE_ENUM(x)

#endif /* _LINUX_TRACEPOINT_H */
/*
 * Note: we keep the TRACE_EVENT and DECLARE_TRACE outside the include
 * file ifdef protection.
 *  This is due to the way trace events work. If a file includes two
 *  trace event headers under one "CREATE_TRACE_POINTS" the first include
 *  will override the TRACE_EVENT and break the second include.
 */

#ifndef DECLARE_TRACE

#define TP_PROTO(args...)	args
#define TP_ARGS(args...)	args
#define TP_CONDITION(args...)	args

/*
 * Individual subsystems may have a separate configuration to
 * enable their tracepoints. By default, this file will create
 * the tracepoints if CONFIG_TRACEPOINTS is defined. If a subsystem
 * wants to be able to disable its tracepoints from being created
 * it can define NOTRACE before including the tracepoint headers.
 */
#if defined(CONFIG_TRACEPOINTS) && !defined(NOTRACE)
#define TRACEPOINTS_ENABLED
#endif

#ifdef TRACEPOINTS_ENABLED

/*
 * it_func[0] is never NULL because there is at least one element in the array
 * when the array itself is non-NULL.
 *
 * Note, the proto and args passed in include "__data" as the first parameter.
 * The reason for this is to handle the "void" prototype. If a tracepoint
 * has a "void" prototype, then it is invalid to declare a function
 * as "(void *, void)". DECLARE_TRACE_NOARGS() will pass in just
 * "void *data", whereas DECLARE_TRACE() will pass in "void *data, proto".
 */
#define __DO_TRACE(tp, proto, args, cond, prercu, postrcu)		\
	do {								\
		struct tracepoint_func *it_func_ptr;			\
		void *it_func;						\
		void *__data;						\
									\
		if (!(cond))						\
			return;						\
		prercu;							\
		rcu_read_lock_sched_notrace();				\
		it_func_ptr = rcu_dereference_sched((tp)->funcs);	\
		if (it_func_ptr) {					\
			do {						\
				it_func = (it_func_ptr)->func;		\
				__data = (it_func_ptr)->data;		\
				((void(*)(proto))(it_func))(args);	\
			} while ((++it_func_ptr)->func);		\
		}							\
		rcu_read_unlock_sched_notrace();			\
		postrcu;						\
	} while (0)
#ifndef MODULE
#define __DECLARE_TRACE_RCU(name, proto, args, cond, data_proto, data_args) \
	static inline void trace_##name##_rcuidle(proto)		\
	{								\
		if (static_key_false(&__tracepoint_##name.key))		\
			__DO_TRACE(&__tracepoint_##name,		\
				TP_PROTO(data_proto),			\
				TP_ARGS(data_args),			\
				TP_CONDITION(cond),			\
				rcu_irq_enter_irqson(),			\
				rcu_irq_exit_irqson());			\
	}
#else
#define __DECLARE_TRACE_RCU(name, proto, args, cond, data_proto, data_args)
#endif

/*
 * Make sure the alignment of the structure in the __tracepoints section will
 * not add unwanted padding between the beginning of the section and the
 * structure. Force alignment to the same alignment as the section start.
 *
 * When lockdep is enabled, we make sure to always do the RCU portions of
 * the tracepoint code, regardless of whether tracing is on. However,
 * don't check if the condition is false, due to interaction with idle
 * instrumentation. This lets us find RCU issues triggered with tracepoints
 * even when this tracepoint is off. This code has no purpose other than
 * poking RCU a bit.
 */
#define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
	extern struct tracepoint __tracepoint_##name;			\
	static inline void trace_##name(proto)				\
	{								\
		if (static_key_false(&__tracepoint_##name.key))		\
			__DO_TRACE(&__tracepoint_##name,		\
				TP_PROTO(data_proto),			\
				TP_ARGS(data_args),			\
				TP_CONDITION(cond),,);			\
		if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) {		\
			rcu_read_lock_sched_notrace();			\
			rcu_dereference_sched(__tracepoint_##name.funcs);\
			rcu_read_unlock_sched_notrace();		\
		}							\
	}								\
	__DECLARE_TRACE_RCU(name, PARAMS(proto), PARAMS(args),		\
		PARAMS(cond), PARAMS(data_proto), PARAMS(data_args))	\
	static inline int						\
	register_trace_##name(void (*probe)(data_proto), void *data)	\
	{								\
		return tracepoint_probe_register(&__tracepoint_##name,	\
						(void *)probe, data);	\
	}								\
	static inline int						\
	register_trace_prio_##name(void (*probe)(data_proto), void *data,\
				   int prio)				\
	{								\
		return tracepoint_probe_register_prio(&__tracepoint_##name, \
					      (void *)probe, data, prio); \
	}								\
	static inline int						\
	unregister_trace_##name(void (*probe)(data_proto), void *data)	\
	{								\
		return tracepoint_probe_unregister(&__tracepoint_##name,\
						(void *)probe, data);	\
	}								\
	static inline void						\
	check_trace_callback_type_##name(void (*cb)(data_proto))	\
	{								\
	}								\
	static inline bool						\
	trace_##name##_enabled(void)					\
	{								\
		return static_key_false(&__tracepoint_##name.key);	\
	}

/*
 * We have no guarantee that gcc and the linker won't up-align the tracepoint
 * structures, so we create an array of pointers that will be used for iteration
 * on the tracepoints.
 */
#define DEFINE_TRACE_FN(name, reg, unreg)				 \
	static const char __tpstrtab_##name[]				 \
	__attribute__((section("__tracepoints_strings"))) = #name;	 \
	struct tracepoint __tracepoint_##name				 \
	__attribute__((section("__tracepoints"))) =			 \
		{ __tpstrtab_##name, STATIC_KEY_INIT_FALSE, reg, unreg, NULL };\
	static struct tracepoint * const __tracepoint_ptr_##name __used	 \
	__attribute__((section("__tracepoints_ptrs"))) =		 \
		&__tracepoint_##name;

#define DEFINE_TRACE(name)						\
	DEFINE_TRACE_FN(name, NULL, NULL);

#define EXPORT_TRACEPOINT_SYMBOL_GPL(name)				\
	EXPORT_SYMBOL_GPL(__tracepoint_##name)
#define EXPORT_TRACEPOINT_SYMBOL(name)					\
	EXPORT_SYMBOL(__tracepoint_##name)
#else /* !TRACEPOINTS_ENABLED */
#define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
	static inline void trace_##name(proto)				\
	{ }								\
	static inline void trace_##name##_rcuidle(proto)		\
	{ }								\
	static inline int						\
	register_trace_##name(void (*probe)(data_proto),		\
			      void *data)				\
	{								\
		return -ENOSYS;						\
	}								\
	static inline int						\
	unregister_trace_##name(void (*probe)(data_proto),		\
				void *data)				\
	{								\
		return -ENOSYS;						\
	}								\
	static inline void check_trace_callback_type_##name(void (*cb)(data_proto)) \
	{								\
	}								\
	static inline bool						\
	trace_##name##_enabled(void)					\
	{								\
		return false;						\
	}

#define DEFINE_TRACE_FN(name, reg, unreg)
#define DEFINE_TRACE(name)
#define EXPORT_TRACEPOINT_SYMBOL_GPL(name)
#define EXPORT_TRACEPOINT_SYMBOL(name)

#endif /* TRACEPOINTS_ENABLED */
#ifdef CONFIG_TRACING
/**
 * tracepoint_string - register constant persistent string to trace system
 * @str - a constant persistent string that will be referenced in tracepoints
 *
 * If constant strings are being used in tracepoints, it is faster and
 * more efficient to just save the pointer to the string and reference
 * that with a printf "%s" instead of saving the string in the ring buffer
 * and wasting space and time.
 *
 * The problem with the above approach is that userspace tools that read
 * the binary output of the trace buffers do not have access to the string.
 * Instead they just show the address of the string which is not very
 * useful to users.
 *
 * With tracepoint_string(), the string will be registered to the tracing
 * system and exported to userspace via the debugfs/tracing/printk_formats
 * file that maps the string address to the string text. This way userspace
 * tools that read the binary buffers have a way to map the pointers to
 * the ASCII strings they represent.
 *
 * The @str used must be a constant string and persistent as it would not
 * make sense to show a string that no longer exists. But it is still fine
 * to be used with modules, because when modules are unloaded, if they
 * had tracepoints, the ring buffers are cleared too. As long as the string
 * does not change during the life of the module, it is fine to use
 * tracepoint_string() within a module.
 */
#define tracepoint_string(str)						\
	({								\
		static const char *___tp_str __tracepoint_string = str; \
		___tp_str;						\
	})
#define __tracepoint_string	__attribute__((section("__tracepoint_str")))
#else
/*
 * tracepoint_string() is used to save the string address for userspace
 * tracing tools. When tracing isn't configured, there's no need to save
 * anything.
 */
# define tracepoint_string(str) str
# define __tracepoint_string
#endif
/*
 * The need for the DECLARE_TRACE_NOARGS() is to handle the prototype
 * (void). "void" is a special value in a function prototype and can
 * not be combined with other arguments. Since the DECLARE_TRACE()
 * macro adds a data element at the beginning of the prototype,
 * we need a way to differentiate "(void *data, proto)" from
 * "(void *data, void)". The second prototype is invalid.
 *
 * DECLARE_TRACE_NOARGS() passes "void" as the tracepoint prototype
 * and "void *__data" as the callback prototype.
 *
 * DECLARE_TRACE() passes "proto" as the tracepoint prototype and
 * "void *__data, proto" as the callback prototype.
 */
#define DECLARE_TRACE_NOARGS(name)					\
	__DECLARE_TRACE(name, void, , 1, void *__data, __data)

#define DECLARE_TRACE(name, proto, args)				\
	__DECLARE_TRACE(name, PARAMS(proto), PARAMS(args), 1,		\
			PARAMS(void *__data, proto),			\
			PARAMS(__data, args))

#define DECLARE_TRACE_CONDITION(name, proto, args, cond)		\
	__DECLARE_TRACE(name, PARAMS(proto), PARAMS(args), PARAMS(cond), \
			PARAMS(void *__data, proto),			\
			PARAMS(__data, args))

#define TRACE_EVENT_FLAGS(event, flag)

#define TRACE_EVENT_PERF_PERM(event, expr...)

#endif /* DECLARE_TRACE */
#ifndef TRACE_EVENT
/*
 * For use with the TRACE_EVENT macro:
 *
 * We define a tracepoint, its arguments, its printk format
 * and its 'fast binary record' layout.
 *
 * Firstly, name your tracepoint via TRACE_EVENT(name : the
 * 'subsystem_event' notation is fine.
 *
 * Think about this whole construct as the
 * 'trace_sched_switch() function' from now on.
 *
 *
 *  TRACE_EVENT(sched_switch,
 *
 *	*
 *	* A function has a regular function arguments
 *	* prototype, declare it via TP_PROTO():
 *	*
 *
 *	TP_PROTO(struct rq *rq, struct task_struct *prev,
 *		 struct task_struct *next),
 *
 *	*
 *	* Define the call signature of the 'function'.
 *	* (Design sidenote: we use this instead of a
 *	*  TP_PROTO1/TP_PROTO2/TP_PROTO3 ugliness.)
 *	*
 *
 *	TP_ARGS(rq, prev, next),
 *
 *	*
 *	* Fast binary tracing: define the trace record via
 *	* TP_STRUCT__entry(). You can think about it like a
 *	* regular C structure local variable definition.
 *	*
 *	* This is how the trace record is structured and will
 *	* be saved into the ring buffer. These are the fields
 *	* that will be exposed to user-space in
 *	* /sys/kernel/debug/tracing/events/<*>/format.
 *	*
 *	* The declared 'local variable' is called '__entry'
 *	*
 *	* __field(pid_t, prev_pid) is equivalent to a standard declaration:
 *	*
 *	*	pid_t	prev_pid;
 *	*
 *	* __array(char, prev_comm, TASK_COMM_LEN) is equivalent to:
 *	*
 *	*	char	prev_comm[TASK_COMM_LEN];
 *	*
 *
 *	TP_STRUCT__entry(
 *		__array(	char,	prev_comm,	TASK_COMM_LEN	)
 *		__field(	pid_t,	prev_pid			)
 *		__field(	int,	prev_prio			)
 *		__array(	char,	next_comm,	TASK_COMM_LEN	)
 *		__field(	pid_t,	next_pid			)
 *		__field(	int,	next_prio			)
 *	),
 *
 *	*
 *	* Assign the entry into the trace record, by embedding
 *	* a full C statement block into TP_fast_assign(). You
 *	* can refer to the trace record as '__entry' -
 *	* otherwise you can put arbitrary C code in here.
 *	*
 *	* Note: this C code will execute every time a trace event
 *	* happens, on an active tracepoint.
 *	*
 *
 *	TP_fast_assign(
 *		memcpy(__entry->next_comm, next->comm, TASK_COMM_LEN);
 *		__entry->prev_pid	= prev->pid;
 *		__entry->prev_prio	= prev->prio;
 *		memcpy(__entry->prev_comm, prev->comm, TASK_COMM_LEN);
 *		__entry->next_pid	= next->pid;
 *		__entry->next_prio	= next->prio;
 *	),
 *
 *	*
 *	* Formatted output of a trace record via TP_printk().
 *	* This is how the tracepoint will appear under ftrace
 *	* plugins that make use of this tracepoint.
 *	*
 *	* (raw-binary tracing won't actually perform this step.)
 *	*
 *
 *	TP_printk("task %s:%d [%d] ==> %s:%d [%d]",
 *		__entry->prev_comm, __entry->prev_pid, __entry->prev_prio,
 *		__entry->next_comm, __entry->next_pid, __entry->next_prio),
 *
 * );
 *
 * This macro construct is thus used for the regular printk format
 * tracing setup, it is used to construct a function pointer based
 * tracepoint callback (this is used by programmatic plugins and
 * can also be used by generic instrumentation like SystemTap), and
 * it is also used to expose a structured trace record in
 * /sys/kernel/debug/tracing/events/.
 *
 * A set of (un)registration functions can be passed to the variant
 * TRACE_EVENT_FN to perform any (un)registration work.
 */

#define DECLARE_EVENT_CLASS(name, proto, args, tstruct, assign, print)
#define DEFINE_EVENT(template, name, proto, args)		\
	DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))
#define DEFINE_EVENT_FN(template, name, proto, args, reg, unreg)\
	DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))
#define DEFINE_EVENT_PRINT(template, name, proto, args, print)	\
	DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))
#define DEFINE_EVENT_CONDITION(template, name, proto,		\
			       args, cond)			\
	DECLARE_TRACE_CONDITION(name, PARAMS(proto),		\
				PARAMS(args), PARAMS(cond))

#define TRACE_EVENT(name, proto, args, struct, assign, print)	\
	DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))
#define TRACE_EVENT_FN(name, proto, args, struct,		\
		       assign, print, reg, unreg)		\
	DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))
#define TRACE_EVENT_CONDITION(name, proto, args, cond,		\
			      struct, assign, print)		\
	DECLARE_TRACE_CONDITION(name, PARAMS(proto),		\
				PARAMS(args), PARAMS(cond))

#define TRACE_EVENT_FLAGS(event, flag)

#define TRACE_EVENT_PERF_PERM(event, expr...)

#endif /* ifdef TRACE_EVENT (see note above) */