Before this patch, looking at 'perf bench sched pipe' behavior via
'top' only told us that something related to perf was running:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
19934 mingo 20 0 54836 1296 952 R 18.6 0.0 0:00.56 perf
19935 mingo 20 0 54836 384 36 S 18.6 0.0 0:00.56 perf
After the patch it's clearly visible what's going on:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
19744 mingo 20 0 125m 3536 2644 R 68.2 0.0 0:01.12 sched-pipe
19745 mingo 20 0 125m 1172 276 R 68.2 0.0 0:01.12 sched-pipe
The benchmark-subsystem name is concatenated with the individual
testcase name.
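For illustration, here is a minimal sketch of how a workload can rename its
comm so that 'top' shows the concatenated name; it assumes the standard
prctl(2) PR_SET_NAME interface, and the helper name is made up for this
note, it is not the actual builtin-bench.c code:

#include <stdio.h>
#include <sys/prctl.h>

/*
 * Illustrative helper: build "<collection>-<benchmark>" and set it as the
 * thread comm.  PR_SET_NAME truncates to 15 characters plus the NUL
 * (TASK_COMM_LEN == 16), which is why short names like "sched-pipe" work.
 */
static void set_bench_comm(const char *collection, const char *bench)
{
        char comm[16];

        snprintf(comm, sizeof(comm), "%s-%s", collection, bench);
        prctl(PR_SET_NAME, comm, 0, 0, 0);
}

/* e.g. set_bench_comm("sched", "pipe") makes 'top' show "sched-pipe" */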
Unfortunately 'perf top' does not show the reconfigured name, possibly
because it caches ->comm[] values and does not recognize changes to
them?
Also clean up a few bits in builtin-bench.c while at it and reorganize
the code and the output strings to be consistent.
Use iterators to access the various arrays. Rename the 'suites' concept to
'benchmark collection' and 'bench_suite' to 'benchmark'/'bench'. The
many repetitions of 'suite' made the code harder to read and understand.
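As an illustration of the reorganized layout (the struct fields, table
contents and iterator names below are examples for this write-up, not
necessarily the exact builtin-bench.c definitions), the collections and
their benchmarks can be kept in NULL-terminated tables and walked with
small iterator macros:

#include <stdio.h>

struct bench {
        const char      *name;
        const char      *summary;
};

struct collection {
        const char      *name;
        const char      *summary;
        struct bench    *benchmarks;    /* terminated by a NULL name */
};

static struct bench sched_benchmarks[] = {
        { "messaging",  "Benchmark for scheduling and IPC"              },
        { "pipe",       "Benchmark for pipe() between two processes"    },
        { NULL,         NULL                                            },
};

static struct collection collections[] = {
        { "sched",      "Scheduler and IPC benchmarks", sched_benchmarks },
        { NULL,         NULL,                           NULL             },
};

#define for_each_collection(coll) \
        for (coll = collections; coll->name; coll++)

#define for_each_bench(coll, bench) \
        for (bench = coll->benchmarks; bench && bench->name; bench++)

int main(void)
{
        struct collection *coll;
        struct bench *bench;

        for_each_collection(coll) {
                printf("# List of available benchmarks for collection '%s':\n",
                       coll->name);
                for_each_bench(coll, bench)
                        printf("  %10s: %s\n", bench->name, bench->summary);
        }
        return 0;
}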
The new output is:
comet:~/tip/tools/perf> ./perf bench
Usage:
perf bench [<common options>] <collection> <benchmark> [<options>]
# List of all available benchmark collections:
sched: Scheduler and IPC benchmarks
mem: Memory access benchmarks
numa: NUMA scheduling and MM benchmarks
all: All benchmarks
comet:~/tip/tools/perf> ./perf bench sched
# List of available benchmarks for collection 'sched':
messaging: Benchmark for scheduling and IPC
pipe: Benchmark for pipe() between two processes
all: Test all scheduler benchmarks
comet:~/tip/tools/perf> ./perf bench mem
# List of available benchmarks for collection 'mem':
memcpy: Benchmark for memcpy()
memset: Benchmark for memset() tests
all: Test all memory benchmarks
comet:~/tip/tools/perf> ./perf bench numa
# List of available benchmarks for collection 'numa':
mem: Benchmark for NUMA workloads
all: Test all NUMA benchmarks
Individual benchmark modules were not touched.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Hitoshi Mitake <h.mitake@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20131023123756.GA17871@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
At this point, gcc's -mfentry (mcount at function entry) option fuzzes
the debuginfo variable locations by skipping the mcount instruction
offset (on x86, this is a 5-byte call instruction).
This makes the variable search fail at the entry of functions which
are mcount'ed.
e.g.)
Available variables at vfs_read
@<vfs_read+0>
(No matched variables)
To solve this issue, this patch adds an additional location search at
the function entry point, which tries to find the earliest address at
which the variable's location becomes valid.
Note that this only works for function parameters (formal parameters),
because local variables should not exist at the function entry
address (they are not initialized yet).
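A sketch of the idea only, over a hypothetical already-decoded form of the
DWARF location list (struct loc_range and the helper below are illustrative,
not libdw types or the actual probe-finder.c code): among the address ranges
where the parameter's location is valid, pick the earliest one that is still
usable at or after the function entry:

#include <stdbool.h>
#include <stdint.h>

struct loc_range {
        uint64_t start;         /* first address where the location holds */
        uint64_t end;           /* one past the last such address */
};

/*
 * Returns true and sets *addr to the earliest address at or after 'entry'
 * where the variable has a valid location, false if no such range exists.
 */
static bool find_earliest_location(const struct loc_range *ranges, int nranges,
                                   uint64_t entry, uint64_t *addr)
{
        uint64_t best = UINT64_MAX;
        int i;

        for (i = 0; i < nranges; i++) {
                if (ranges[i].end <= entry)
                        continue;       /* range ends before the probe point */
                if (ranges[i].start < best)
                        best = ranges[i].start;
        }

        if (best == UINT64_MAX)
                return false;

        /* Either valid right at the entry, or a few bytes later. */
        *addr = best > entry ? best : entry;
        return true;
}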
With this patch, perf probe shows the correct parameters where possible:
# perf probe --vars vfs_read
Available variables at vfs_read
@<vfs_read+0>
char* buf
loff_t* pos
size_t count
struct file* file
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20131011071025.15557.13275.stgit@udc4-manage.rcp.hitachi.co.jp
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
* Convert callchain children list to rbtree, greatly reducing the time
taken for callchain processing, from Namhyung Kim.
* Add --max-stack option to limit callchain stack scan in 'top' and 'report',
improving callchain processing when reducing the stack depth is an option,
from Waiman Long.
* Compare dso's also when comparing symbols, to avoid grouping together
symbols with the same name but on different DSOs, fix from Namhyung Kim.
* 'perf trace' can now use a 'perf probe' wannabe tracepoint to hook into
the userspace -> kernel pathname copy so that it can map fds to pathnames
without reading /proc/pid/fd/ symlinks. From Arnaldo Carvalho de Melo.
* 'perf trace' now emits hints as to why tracing is not possible, helping the
user to set up the system to allow tracing in the desired permission
granularity, telling if the problem is due to debugfs not being mounted,
insufficient permissions for !root, the /proc/sys/kernel/perf_event_paranoid
value, etc. From Arnaldo Carvalho de Melo.
* Add missing 'mmap2' in evsel debug print, from Adrian Hunter.
* Add missing decrement in id sample parsing, not a fix per se, just to
avoid a problem when somebody adds another field, from Adrian Hunter.
* Improve write_output error message in 'perf record', from Adrian Hunter.
* Add missing sample flush for piped events, fix from Adrian Hunter.
* Add missing members to perf_event__attr_swap(), fix from Adrian Hunter.
* Assorted fixes for the 32-bit build, from Adrian Hunter.
* Print addr by default for BTS in 'perf script', from Adrian Hunter.
* Separating data file properties from session, code reorganization from
Jiri Olsa.
* Show error in 'perf list' if tracepoints not available, from Pekka Enberg.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
When callgraph data is included in the perf data file, it may take a
long time to scan all that data and merge it together, especially if
the stored callchains are long and the perf data file itself is large,
like a gigabyte or so.
The callchain stack is currently limited to PERF_MAX_STACK_DEPTH (127).
This is a large value. Usually the callgraph data that developers are
most interested in is in the first few levels; the rest is usually not
looked at.
This patch adds a new --max-stack option to perf-report to limit the
depth of the callchain stack data to look at, reducing the time it takes
for perf-report to finish its processing. It trades the presence of
trailing stack information for faster processing.
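The effect can be sketched as follows; struct ip_callchain mirrors the
sampled callchain layout (an entry count followed by the ips), while the
helper name is just for this illustration:

#include <stdint.h>

/* Mirrors the sampled callchain: number of entries followed by the ips. */
struct ip_callchain {
        uint64_t nr;
        uint64_t ips[];
};

/*
 * With --max-stack, only this many of the sampled entries are resolved
 * and merged into the histogram; deeper frames are simply skipped.
 */
static uint64_t entries_to_resolve(const struct ip_callchain *chain, int max_stack)
{
        return chain->nr < (uint64_t)max_stack ? chain->nr : (uint64_t)max_stack;
}

Dropping the trailing frames is also why the output data size in the table
below shrinks as the limit gets smaller.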
The following table shows the elapsed time of doing perf-report on a
perf.data file of size 985,531,828 bytes.
--max-stack Elapsed Time Output data size
----------- ------------ ----------------
not set 88.0s 124,422,651
64 87.5s 116,303,213
32 87.2s 112,023,804
16 86.6s 94,326,380
8 59.9s 33,697,248
4 40.7s 10,116,637
-g none 27.1s 2,555,810
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1382107129-2010-4-git-send-email-Waiman.Long@hp.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Linus reported that sometimes 'perf report -s symbol' exits without any
message on TUI. David and Jiri found that it's because it failed to add
a hist entry due to an invalid symbol length.
It turns out that sorting by symbol (address) was broken since it only
compared symbol addresses. The symbol address is a relative address
within a dso, thus just checking the address can result in merging
unrelated symbols together. Fix it by checking the dso before comparing
the symbol addresses.
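A sketch of the fixed comparison (types and names here are simplified
stand-ins for the hist entry sort code, not the exact sort.c change): group
by DSO first, and only fall back to the relative symbol start addresses when
both entries come from the same DSO:

#include <stdint.h>
#include <string.h>

struct dso      { const char *name; };
struct symbol   { uint64_t start; const char *name; };

struct sym_entry {
        struct dso      *dso;
        struct symbol   *sym;
};

static int64_t sym_entry__cmp(const struct sym_entry *left,
                              const struct sym_entry *right)
{
        /* Different DSOs: keep them apart, never merge across DSOs. */
        if (left->dso != right->dso)
                return strcmp(left->dso->name, right->dso->name);

        /* Same DSO: the relative symbol addresses are comparable. */
        return (int64_t)(right->sym->start - left->sym->start);
}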
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1381802517-18812-1-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The current collapse stage has a scalability problem which can be reproduced
easily with a parallel kernel build.
This is because it needs to traverse every child of the callchains
linearly during the collapse/merge stage.
Converting it to an rbtree reduced the overhead significantly.
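A sketch of the insertion path after the conversion (the node layout and
helper name are illustrative, and it assumes the tools/ copy of the kernel
rbtree API from <linux/rbtree.h>): instead of walking a sibling list
linearly to find where a child belongs, descend a tree keyed on the child's
first address.

#include <linux/rbtree.h>
#include <linux/types.h>

/* Illustrative node: each child is keyed by the first address of its chain. */
struct chain_node {
        u64             start;
        struct rb_node  rb_node_in;     /* link into the parent's tree */
        struct rb_root  rb_root_in;     /* this node's own children */
};

static void chain_add_child(struct chain_node *parent, struct chain_node *child)
{
        struct rb_node **p = &parent->rb_root_in.rb_node;
        struct rb_node *rb_parent = NULL;
        struct chain_node *cur;

        /* O(log n) descent instead of a linear walk over all siblings. */
        while (*p) {
                rb_parent = *p;
                cur = rb_entry(rb_parent, struct chain_node, rb_node_in);
                if (child->start < cur->start)
                        p = &rb_parent->rb_left;
                else
                        p = &rb_parent->rb_right;
        }
        rb_link_node(&child->rb_node_in, rb_parent, p);
        rb_insert_color(&child->rb_node_in, &parent->rb_root_in);
}

During collapse the lookup of a matching child then costs O(log n) per node
instead of O(n), which is where the improvement below comes from.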
On my 400MB perf.data file, which was recorded during a 'make -j32' kernel build:
$ time perf --no-pager report --stdio > /dev/null
before:
real 6m22.073s
user 6m18.683s
sys 0m0.706s
after:
real 0m20.780s
user 0m19.962s
sys 0m0.689s
During perf report processing, the overhead of append_chain_children went
down from 96.69% to 18.16%:
- 18.16% perf perf [.] append_chain_children
- append_chain_children
- 77.48% append_chain_children
+ 69.79% merge_chain_branch
- 22.96% append_chain_children
+ 67.44% merge_chain_branch
+ 30.15% append_chain_children
+ 2.41% callchain_append
+ 7.25% callchain_append
+ 12.26% callchain_append
+ 10.22% merge_chain_branch
+ 11.58% perf perf [.] dso__find_symbol
+ 8.02% perf perf [.] sort__comm_cmp
+ 5.48% perf libc-2.17.so [.] malloc_consolidate
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1381468543-25334-2-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Initially it tries to find a probe:vfs_getname probe that should be set up
with:
perf probe 'vfs_getname=getname_flags:65 pathname=result->name:string'
or with slight changes to cope with code flux in the getname_flags code.
In the future, if a "vfs:getname" tracepoint becomes available, then it
will be preferred.
Setting this probe up is not strictly required: the more expensive method
of reading the /proc/pid/fd/ symlink will be used when the fd->path array
entry was not populated by a previous vfs_getname + open syscall return
sequence.
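A sketch of that fallback only (the helper name is made up for this
illustration, not the actual 'perf trace' code): when the fd->path table has
no entry for a file descriptor, the pathname is recovered by reading the
/proc/<pid>/fd/<fd> symlink:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static ssize_t fd__read_path(pid_t pid, int fd, char *buf, size_t bufsize)
{
        char link[64];
        ssize_t n;

        snprintf(link, sizeof(link), "/proc/%d/fd/%d", (int)pid, fd);

        n = readlink(link, buf, bufsize - 1);
        if (n < 0)
                return n;       /* e.g. the fd was already closed */

        buf[n] = '\0';          /* readlink() does not NUL-terminate */
        return n;
}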
As with any other 'perf probe' probe, the setup needs to be done just once;
the probe will be left inactive, waiting for users, be it 'perf trace' or
any other tool.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-ujg8se8glq5izmu8cdkq15po@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
There have been reports of high NMI handler overhead, highlighted by
kernel messages such as:
[ 3697.380195] perf samples too long (10009 > 10000), lowering kernel.perf_event_max_sample_rate to 13000
[ 3697.389509] INFO: NMI handler (perf_event_nmi_handler) took too long to run: 9.331 msecs
Don Zickus analyzed the source of the overhead and reported:
> While there are a few places that are causing latencies, for now I focused on
> the longest one first. It seems to be 'copy_from_user_nmi'
>
> intel_pmu_handle_irq ->
> intel_pmu_drain_pebs_nhm ->
> __intel_pmu_drain_pebs_nhm ->
> __intel_pmu_pebs_event ->
> intel_pmu_pebs_fixup_ip ->
> copy_from_user_nmi
>
> In intel_pmu_pebs_fixup_ip(), if the while-loop goes over 50, the sum of
> all the copy_from_user_nmi latencies seems to go over 1,000,000 cycles
> (there are some cases where only 10 iterations are needed to go that high
> too, but in general over 50 or so). At this point copy_from_user_nmi
> seems to account for over 90% of the nmi latency.
The solution to that is to avoid having to call copy_from_user_nmi() for
every instruction.
Since we already limit the max basic block size, we can easily
pre-allocate a piece of memory to copy the entire thing into in one
go.
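A sketch of that change in kernel style (the buffer handling and helper name
are illustrative, not the exact intel_pmu_pebs_fixup_ip() code; the kernel's
copy_from_user_nmi() returns the number of bytes it could not copy):

/*
 * Copy the whole basic block [to, ip) from user space once, into a buffer
 * that was pre-allocated outside NMI context, so the instruction decode
 * loop can read from the local copy instead of calling
 * copy_from_user_nmi() once per instruction.
 */
static int pebs_copy_block(void *buf, unsigned long to, unsigned long ip,
                           unsigned char **kaddr)
{
        unsigned long size = ip - to;   /* bounded by the max basic block size */

        if (copy_from_user_nmi(buf, (void __user *)to, size))
                return 0;       /* partial copy: caller falls back to the old path */

        *kaddr = buf;           /* the decode loop now walks this local copy */
        return 1;
}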
Don reported this test result:
> Your patch made a huge difference in improvement. The
> copy_from_user_nmi() no longer hits the million of cycles. I still
> have a batch of 100,000-300,000 cycles. My longest NMI paths used
> to be dominated by copy_from_user_nmi, now it is not (I have to dig
> up the new hot path).
Reported-and-tested-by: Don Zickus <dzickus@redhat.com>
Cc: jmario@redhat.com
Cc: acme@infradead.org
Cc: dave.hansen@linux.intel.com
Cc: eranian@google.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131016105755.GX10651@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
* kcore annotation improvements, including build-id cache support,
multi map 'call' instruction navigation fixes, kcore address
validation, objdump workarounds. From Adrian Hunter.
* 'trace' beautifiers for lots of syscall arguments, from Arnaldo Carvalho de Melo.
* More compact 'trace' output by suppressing zeroed args, from Arnaldo Carvalho de Melo.
* Show thread COMM by default in 'trace', from Arnaldo Carvalho de Melo.
* Show path associated with fd in live sessions, using a 'vfs_getname'
'perf probe' created dynamic tracepoint or by looking at /proc/pid/fd, from Arnaldo Carvalho de Melo.
* Memory and mmap leak fixes from Chenggang Qin.
* Add option to show full timestamp in 'trace', from David Ahern.
* Add 'record' command in 'trace', to record raw_syscalls:*, from David Ahern.
* Add summary option to dump syscall statistics in 'trace', from David Ahern.
* Fix comm resolution in 'trace' when reading events from file, from David Ahern.
* Improved messages when doing profiling in all or a subset of CPUs
using a workload as the session delimiter, as in:
'perf stat --cpu 0,2 sleep 10s'
from Arnaldo Carvalho de Melo.
* Add units to nanosec-based counters in 'perf stat', from David Ahern.
* Assorted build fixes from David Ahern and Jiri Olsa.
* 'perf lock' fixes and cleanups, from Davidlohr Bueso.
* Memory leak fixes in 'perf test', from Felipe Pena.
* Build system super speedups, from Ingo Molnar.
* Fix mmap_read event overflow, from Jiri Olsa.
* Code cleanups from Jiri Olsa.
* Allow specifying B/K/M/G unit to the --mmap-pages arguments, from Jiri Olsa.
* Move the GTK support into a separate libperf-gtk.so DSO that is
only loaded when --gtk is specified, from Namhyung Kim.
* Fixes for some memory leaks, from Namhyung Kim.
* Fix srcline sort key behavior, from Namhyung Kim.
* Fix failing assertions in numa bench, from Petr Holasek.
* perf bash completion fixes and improvements from Ramkumar Ramachandra.
* Improve error messages in 'trace', providing hints about system configuration
steps needed for using it, from Ramkumar Ramachandra.
* Remove bogus info when using 'perf stat' -e cycles/instructions, from
Ramkumar Ramachandra.
* Support for Openembedded/Yocto -dbg packages, from Ricardo Ribalda Delgado.
* Implement addr2line directly using libbfd, from Roberto Vitillo.
* Add new option --ignore-vmlinux for perf top, from Willy Tarreau.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kcore can be used to view the running kernel object code. However,
kcore changes as modules are loaded and unloaded, and when the kernel
decides to modify its own code. Consequently it is useful to create a
copy of kcore at a particular time. Unlike vmlinux, kcore is not unique
for a given build-id. In addition, the kallsyms and modules files
are also needed. The tool therefore creates a directory:
~/.debug/[kernel.kcore]/<build-id>/<YYYYmmddHHMMSShh>
which contains: kcore, kallsyms and modules.
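For illustration, the cache directory name can be put together along these
lines (the helper below is a sketch for this note, not the actual build-id
cache code; the trailing two digits are assumed here to be hundredths of a
second):

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

static int kcore_cache_dir(char *buf, size_t sz, const char *debugdir,
                           const char *sbuild_id)
{
        char ts[32];
        struct timeval tv;
        struct tm tm;

        if (gettimeofday(&tv, NULL) || !localtime_r(&tv.tv_sec, &tm))
                return -1;

        /* The <YYYYmmddHHMMSS> part of the directory name */
        if (!strftime(ts, sizeof(ts), "%Y%m%d%H%M%S", &tm))
                return -1;

        /* e.g. ~/.debug/[kernel.kcore]/<build-id>/2013102314350142 */
        return snprintf(buf, sz, "%s/[kernel.kcore]/%s/%s%02u",
                        debugdir, sbuild_id, ts,
                        (unsigned int)(tv.tv_usec / 10000)) < (int)sz ? 0 : -1;
}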
Note that the copied kcore contains only code sections. See the
kcore_copy() function for how that is determined.
The tool will not make additional copies of kcore if there is already
one with the same modules at the same addresses.
Currently, perf tools will not look for kcore in the cache. That is
addressed in another patch.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/525BF849.5030405@intel.com
[ renamed 'index' to 'idx' to avoid shadowing string.h symbol in f12,
use at least one member initializer when initializing a struct to
zeros, also to fix the build on f12 ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>