Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf updates from Ingo Molnar:
 "Kernel side changes:

   - Improved kprobes robustness

   - Intel PEBS support for PT hardware tracing

   - Other Intel PT improvements: high order pages memory footprint
     reduction and various related cleanups

   - Misc cleanups

  The perf tooling side has been very busy in this cycle, with over 300
  commits. This is an incomplete high-level summary of the many
  improvements done by over 30 developers:

   - Lots of updates to the following tools:

      'perf c2c'
      'perf config'
      'perf record'
      'perf report'
      'perf script'
      'perf test'
      'perf top'
      'perf trace'

   - Updates to libperf and libtraceevent, and a consolidation of the
     proliferation of x86 instruction decoder libraries.

   - Vendor event updates for Intel and PowerPC CPUs,

   - Updates to hardware tracing tooling for ARM and Intel CPUs,

   - ... and lots of other changes and cleanups - see the shortlog and
     Git log for details"

* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (322 commits)
  kprobes: Prohibit probing on BUG() and WARN() address
  perf/x86: Make more stuff static
  x86, perf: Fix the dependency of the x86 insn decoder selftest
  objtool: Ignore intentional differences for the x86 insn decoder
  objtool: Update sync-check.sh from perf's check-headers.sh
  perf build: Ignore intentional differences for the x86 insn decoder
  perf intel-pt: Use shared x86 insn decoder
  perf intel-pt: Remove inat.c from build dependency list
  perf: Update .gitignore file
  objtool: Move x86 insn decoder to a common location
  perf metricgroup: Support multiple events for metricgroup
  perf metricgroup: Scale the metric result
  perf pmu: Change convert_scale from static to global
  perf symbols: Move mem_info and branch_info out of symbol.h
  perf auxtrace: Uninline functions that touch perf_session
  perf tools: Remove needless evlist.h include directives
  perf tools: Remove needless evlist.h include directives
  perf tools: Remove needless thread_map.h include directives
  perf tools: Remove needless thread.h include directives
  perf tools: Remove needless map.h include directives
  ...
Committed by Linus Torvalds, 2019-09-16 17:06:21 -07:00.
441 changed files with 13386 additions and 8868 deletions


@@ -34,3 +34,6 @@ arch/*/include/generated/
trace/beauty/generated/
pmu-events/pmu-events.c
pmu-events/jevents
feature/
fixdep
libtraceevent-dynamic-list


@@ -919,3 +919,18 @@ amended to take the number of elements as a parameter.
Note there is currently no advantage to using Intel PT instead of LBR, but
that may change in the future if greater use is made of the data.
PEBS via Intel PT
=================
Some hardware has the feature to redirect PEBS records to the Intel PT trace.
Recording is selected by using the aux-output config term e.g.
perf record -c 10000 -e '{intel_pt/branch=0/,cycles/aux-output/ppp}' uname
Note that currently, software only supports redirecting at most one PEBS event.
To display PEBS events from the Intel PT trace, use the itrace 'o' option e.g.
perf script --itrace=oe
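A minimal end-to-end sketch of the workflow described above, reusing the two commands from this section ('uname' is just an illustrative target program):

   # Record: redirect the PEBS 'cycles' event into the Intel PT trace
   # via the aux-output config term.
   perf record -c 10000 -e '{intel_pt/branch=0/,cycles/aux-output/ppp}' uname

   # Decode: 'o' synthesizes the aux-output events, 'e' adds decoder error events.
   perf script --itrace=oe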


@@ -5,6 +5,8 @@
x synthesize transactions events
w synthesize ptwrite events
p synthesize power events
o synthesize other events recorded due to the use
of aux-output (refer to perf record)
e synthesize error events
d create a debug log
g synthesize a call chain (use with i or x)


@@ -40,6 +40,10 @@ The '$HOME/.perfconfig' file is used to store a per-user configuration.
The file '$(sysconfdir)/perfconfig' can be used to
store a system-wide default configuration.
One can disable reading config files by setting the PERF_CONFIG environment
variable to /dev/null, or provide an alternate config file by setting that
variable.
When reading or writing, the values are read from the system and user
configuration files by default, and options '--system' and '--user'
can be used to tell the command to read from or write to only that location.
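A short sketch of the PERF_CONFIG behaviour described above (the alternate config path is purely illustrative):

   # Ignore all config files for a single invocation.
   PERF_CONFIG=/dev/null perf report

   # Read settings from an alternate config file instead of ~/.perfconfig.
   PERF_CONFIG=$HOME/.perfconfig-test perf record -a -- sleep 1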


@@ -60,6 +60,8 @@ OPTIONS
- 'name' : User defined event name. Single quotes (') may be used to
escape symbols in the name from parsing by shell and tool
like this: name=\'CPU_CLK_UNHALTED.THREAD:cmask=0x1\'.
- 'aux-output': Generate AUX records instead of events. This requires
that an AUX area event is also provided.
See the linkperf:perf-list[1] man page for more parameters.
@@ -422,9 +424,14 @@ CLOCK_BOOTTIME, CLOCK_REALTIME and CLOCK_TAI.
-S::
--snapshot::
Select AUX area tracing Snapshot Mode. This option is valid only with an
AUX area tracing event. Optionally the number of bytes to capture per
snapshot can be specified. In Snapshot Mode, trace data is captured only when
signal SIGUSR2 is received.
AUX area tracing event. Optionally, certain snapshot capturing parameters
can be specified in a string that follows this option:
'e': take one last snapshot on exit; guarantees that there is at least one
snapshot in the output file;
<size>: if the PMU supports this, specify the desired snapshot size.
In Snapshot Mode trace data is captured only when signal SIGUSR2 is received
and on exit if the above 'e' option is given.
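A hedged sketch of Snapshot Mode with the new 'e' parameter (the intel_pt event and target program are illustrative):

   # Record an AUX area event in snapshot mode; 'e' adds one last snapshot on exit.
   perf record -e intel_pt// --snapshot=e -- ./workload &

   # While it runs, SIGUSR2 captures a snapshot of the most recent trace data.
   kill -USR2 $!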
--proc-map-timeout::
When processing pre-existing threads /proc/XXX/mmap, it may take a long time,


@@ -438,6 +438,23 @@ OPTIONS
perf report --time 0%-10%,30%-40%
--switch-on EVENT_NAME::
Only consider events after this event is found.
This may be interesting to measure a workload only after some initialization
phase is over, i.e. insert a perf probe at that point and then use this
option with that probe.
--switch-off EVENT_NAME::
Stop considering events after this event is found.
--show-on-off-events::
Show the --switch-on/off events too. This has no effect in 'perf report' now
but probably we'll make the default not to show the switch-on/off events
on the --group mode and if there is only one event besides the off/on ones,
go straight to the histogram browser, just like 'perf report' with no events
explicitly specified does.
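A hedged sketch of the flow suggested above, using a hypothetical user probe that marks the end of an application's init phase (binary and probe names are made up for illustration):

   # Add a probe where initialization ends, record, then report only the
   # samples that arrive after that probe fires.
   perf probe -x ./myapp --add init_done
   perf record -e cycles,probe_myapp:init_done -- ./myapp
   perf report --switch-on=probe_myapp:init_done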
--itrace::
Options for decoding instruction tracing data. The options are:


@@ -417,6 +417,15 @@ include::itrace.txt[]
For itrace only show specified functions and their callees for
itrace. Multiple functions can be separated by comma.
--switch-on EVENT_NAME::
Only consider events after this event is found.
--switch-off EVENT_NAME::
Stop considering events after this event is found.
--show-on-off-events::
Show the --switch-on/off events too.
SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-script-perl[1],


@@ -266,6 +266,44 @@ Default is to monitor all CPUS.
Record events of type PERF_RECORD_NAMESPACES and display it with the
'cgroup_id' sort key.
--switch-on EVENT_NAME::
Only consider events after this event is found.
E.g.:
Find out where broadcast packets are handled
perf probe -L icmp_rcv
Insert a probe there:
perf probe icmp_rcv:59
Start perf top and ask it to only consider the cycles events when a
broadcast packet arrives. This will show a menu with two entries and
will start counting when a broadcast packet arrives:
perf top -e cycles,probe:icmp_rcv --switch-on=probe:icmp_rcv
Alternatively one can ask for --group and then two overhead columns
will appear, the first for cycles and the second for the switch-on event.
perf top --group -e cycles,probe:icmp_rcv --switch-on=probe:icmp_rcv
This may be interesting to measure a workload only after some initialization
phase is over, i.e. insert a perf probe at that point and use the above
examples replacing probe:icmp_rcv with the just-after-init probe.
--switch-off EVENT_NAME::
Stop considering events after this event is found.
--show-on-off-events::
Show the --switch-on/off events too. This has no effect in 'perf top' now
but probably we'll make the default not to show the switch-on/off events
on the --group mode and if there is only one event besides the off/on ones,
go straight to the histogram browser, just like 'perf top' with no events
explicitly specified does.
INTERACTIVE PROMPTING KEYS
--------------------------


@@ -176,6 +176,15 @@ the thread executes on the designated CPUs. Default is to monitor all CPUs.
only at exit time or when a syscall is interrupted, i.e. in those cases this
option is equivalent to the number of lines printed.
--switch-on EVENT_NAME::
Only consider events after this event is found.
--switch-off EVENT_NAME::
Stop considering events after this event is found.
--show-on-off-events::
Show the --switch-on/off events too.
--max-stack::
Set the stack depth limit when parsing the callchain, anything
beyond the specified depth will be ignored. Note that at this point


@@ -298,16 +298,21 @@ Physical memory map and its node assignments.
The format of data in MEM_TOPOLOGY is as follows:
0 - version | for future changes
8 - block_size_bytes | /sys/devices/system/memory/block_size_bytes
16 - count | number of nodes
u64 version; // Currently 1
u64 block_size_bytes; // /sys/devices/system/memory/block_size_bytes
u64 count; // number of nodes
For each node we store map of physical indexes:
32 - node id | node index
40 - size | size of bitmap
48 - bitmap | bitmap of memory indexes that belongs to node
| /sys/devices/system/node/node<NODE>/memory<INDEX>
struct memory_node {
u64 node_id; // node index
u64 size; // size of bitmap
struct bitmap {
/* size of bitmap again */
u64 bitmapsize;
/* bitmap of memory indexes that belongs to node */
/* /sys/devices/system/node/node<NODE>/memory<INDEX> */
u64 entries[(bitmapsize/64)+1];
}
}[count];
The MEM_TOPOLOGY can be displayed with the following command:


@@ -281,11 +281,12 @@ ifeq ($(DEBUG),0)
endif
endif
INC_FLAGS += -I$(src-perf)/lib/include
INC_FLAGS += -I$(src-perf)/util/include
INC_FLAGS += -I$(src-perf)/arch/$(SRCARCH)/include
INC_FLAGS += -I$(srctree)/tools/include/uapi
INC_FLAGS += -I$(srctree)/tools/include/
INC_FLAGS += -I$(srctree)/tools/arch/$(SRCARCH)/include/uapi
INC_FLAGS += -I$(srctree)/tools/include/uapi
INC_FLAGS += -I$(srctree)/tools/arch/$(SRCARCH)/include/
INC_FLAGS += -I$(srctree)/tools/arch/$(SRCARCH)/
@@ -827,6 +828,17 @@ ifndef NO_LIBZSTD
endif
endif
ifndef NO_LIBCAP
ifeq ($(feature-libcap), 1)
CFLAGS += -DHAVE_LIBCAP_SUPPORT
EXTLIBS += -lcap
$(call detected,CONFIG_LIBCAP)
else
msg := $(warning No libcap found, disables capability support, please install libcap-devel/libcap-dev);
NO_LIBCAP := 1
endif
endif
ifndef NO_BACKTRACE
ifeq ($(feature-backtrace), 1)
CFLAGS += -DHAVE_BACKTRACE_SUPPORT


@@ -88,6 +88,8 @@ include ../scripts/utilities.mak
#
# Define NO_LIBBPF if you do not want BPF support
#
# Define NO_LIBCAP if you do not want process capabilities considered by perf
#
# Define NO_SDT if you do not want to define SDT event in perf tools,
# note that it doesn't disable SDT scanning support.
#
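A small usage sketch for the new NO_LIBCAP knob documented above, assuming a build from the kernel source tree:

   # Build perf without capability support even if libcap-devel is installed.
   make -C tools/perf NO_LIBCAP=1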
@@ -224,6 +226,7 @@ LIB_DIR = $(srctree)/tools/lib/api/
TRACE_EVENT_DIR = $(srctree)/tools/lib/traceevent/
BPF_DIR = $(srctree)/tools/lib/bpf/
SUBCMD_DIR = $(srctree)/tools/lib/subcmd/
LIBPERF_DIR = $(srctree)/tools/perf/lib/
# Set FEATURE_TESTS to 'all' so all possible feature checkers are executed.
# Without this setting the output feature dump file misses some features, for
@@ -272,6 +275,7 @@ ifneq ($(OUTPUT),)
TE_PATH=$(OUTPUT)
BPF_PATH=$(OUTPUT)
SUBCMD_PATH=$(OUTPUT)
LIBPERF_PATH=$(OUTPUT)
ifneq ($(subdir),)
API_PATH=$(OUTPUT)/../lib/api/
else
@@ -282,6 +286,7 @@ else
API_PATH=$(LIB_DIR)
BPF_PATH=$(BPF_DIR)
SUBCMD_PATH=$(SUBCMD_DIR)
LIBPERF_PATH=$(LIBPERF_DIR)
endif
LIBTRACEEVENT = $(TE_PATH)libtraceevent.a
@@ -303,6 +308,9 @@ LIBBPF = $(BPF_PATH)libbpf.a
LIBSUBCMD = $(SUBCMD_PATH)libsubcmd.a
LIBPERF = $(LIBPERF_PATH)libperf.a
export LIBPERF
# python extension build directories
PYTHON_EXTBUILD := $(OUTPUT)python_ext_build/
PYTHON_EXTBUILD_LIB := $(PYTHON_EXTBUILD)lib/
@@ -348,9 +356,7 @@ endif
export PERL_PATH
LIBPERF_A=$(OUTPUT)libperf.a
PERFLIBS = $(LIBAPI) $(LIBTRACEEVENT) $(LIBSUBCMD)
PERFLIBS = $(LIBAPI) $(LIBTRACEEVENT) $(LIBSUBCMD) $(LIBPERF)
ifndef NO_LIBBPF
PERFLIBS += $(LIBBPF)
endif
@@ -583,8 +589,6 @@ JEVENTS_IN := $(OUTPUT)pmu-events/jevents-in.o
PMU_EVENTS_IN := $(OUTPUT)pmu-events/pmu-events-in.o
LIBPERF_IN := $(OUTPUT)libperf-in.o
export JEVENTS
build := -f $(srctree)/tools/build/Makefile.build dir=. obj
@@ -601,12 +605,9 @@ $(JEVENTS): $(JEVENTS_IN)
$(PMU_EVENTS_IN): $(JEVENTS) FORCE
$(Q)$(MAKE) -f $(srctree)/tools/build/Makefile.build dir=pmu-events obj=pmu-events
$(LIBPERF_IN): prepare FORCE
$(Q)$(MAKE) $(build)=libperf
$(OUTPUT)perf: $(PERFLIBS) $(PERF_IN) $(PMU_EVENTS_IN) $(LIBPERF_IN) $(LIBTRACEEVENT_DYNAMIC_LIST)
$(OUTPUT)perf: $(PERFLIBS) $(PERF_IN) $(PMU_EVENTS_IN) $(LIBTRACEEVENT_DYNAMIC_LIST)
$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $(LIBTRACEEVENT_DYNAMIC_LIST_LDFLAGS) \
$(PERF_IN) $(PMU_EVENTS_IN) $(LIBPERF_IN) $(LIBS) -o $@
$(PERF_IN) $(PMU_EVENTS_IN) $(LIBS) -o $@
$(GTK_IN): FORCE
$(Q)$(MAKE) $(build)=gtk
@@ -727,9 +728,6 @@ endif
$(patsubst perf-%,%.o,$(PROGRAMS)): $(wildcard */*.h)
$(LIBPERF_A): $(LIBPERF_IN)
$(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $(LIBPERF_IN) $(LIB_OBJS)
LIBTRACEEVENT_FLAGS += plugin_dir=$(plugindir_SQ) 'EXTRA_CFLAGS=$(EXTRA_CFLAGS)' 'LDFLAGS=$(LDFLAGS)'
$(LIBTRACEEVENT): FORCE
@@ -762,6 +760,13 @@ $(LIBBPF)-clean:
$(call QUIET_CLEAN, libbpf)
$(Q)$(MAKE) -C $(BPF_DIR) O=$(OUTPUT) clean >/dev/null
$(LIBPERF): FORCE
$(Q)$(MAKE) -C $(LIBPERF_DIR) O=$(OUTPUT) $(OUTPUT)libperf.a
$(LIBPERF)-clean:
$(call QUIET_CLEAN, libperf)
$(Q)$(MAKE) -C $(LIBPERF_DIR) O=$(OUTPUT) clean >/dev/null
$(LIBSUBCMD): FORCE
$(Q)$(MAKE) -C $(SUBCMD_DIR) O=$(OUTPUT) $(OUTPUT)libsubcmd.a
@@ -948,7 +953,7 @@ config-clean:
python-clean:
$(python-clean)
clean:: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean config-clean fixdep-clean python-clean
clean:: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBPERF)-clean config-clean fixdep-clean python-clean
$(call QUIET_CLEAN, core-objs) $(RM) $(LIBPERF_A) $(OUTPUT)perf-archive $(OUTPUT)perf-with-kcore $(LANG_BINDINGS)
$(Q)find $(if $(OUTPUT),$(OUTPUT),.) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete
$(Q)$(RM) $(OUTPUT).config-detected
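The hunks above wire the new libperf static library into the perf build and clean targets. A hedged sketch of building it on its own, assuming LIBPERF_DIR (tools/perf/lib, as set earlier) ships its own Makefile:

   # Build only libperf.a from its directory.
   make -C tools/perf/lib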


@@ -3,6 +3,7 @@
#include <linux/zalloc.h>
#include <sys/types.h>
#include <regex.h>
#include <stdlib.h>
struct arm_annotate {
regex_t call_insn,


@@ -9,6 +9,7 @@
#include <linux/zalloc.h>
#include "../../util/auxtrace.h"
#include "../../util/debug.h"
#include "../../util/evlist.h"
#include "../../util/pmu.h"
#include "cs-etm.h"
@@ -50,10 +51,10 @@ static struct perf_pmu **find_all_arm_spe_pmus(int *nr_spes, int *err)
}
struct auxtrace_record
*auxtrace_record__init(struct perf_evlist *evlist, int *err)
*auxtrace_record__init(struct evlist *evlist, int *err)
{
struct perf_pmu *cs_etm_pmu;
struct perf_evsel *evsel;
struct evsel *evsel;
bool found_etm = false;
bool found_spe = false;
static struct perf_pmu **arm_spe_pmus = NULL;
@@ -70,14 +71,14 @@ struct auxtrace_record
evlist__for_each_entry(evlist, evsel) {
if (cs_etm_pmu &&
evsel->attr.type == cs_etm_pmu->type)
evsel->core.attr.type == cs_etm_pmu->type)
found_etm = true;
if (!nr_spes)
continue;
for (i = 0; i < nr_spes; i++) {
if (evsel->attr.type == arm_spe_pmus[i]->type) {
if (evsel->core.attr.type == arm_spe_pmus[i]->type) {
found_spe = true;
break;
}


@@ -11,19 +11,22 @@
#include <linux/coresight-pmu.h>
#include <linux/kernel.h>
#include <linux/log2.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/zalloc.h>
#include "cs-etm.h"
#include "../../perf.h"
#include "../../util/debug.h"
#include "../../util/record.h"
#include "../../util/auxtrace.h"
#include "../../util/cpumap.h"
#include "../../util/event.h"
#include "../../util/evlist.h"
#include "../../util/evsel.h"
#include "../../util/pmu.h"
#include "../../util/thread_map.h"
#include "../../util/cs-etm.h"
#include "../../util/util.h"
#include "../../util/session.h"
#include <errno.h>
#include <stdlib.h>
@@ -32,7 +35,7 @@
struct cs_etm_recording {
struct auxtrace_record itr;
struct perf_pmu *cs_etm_pmu;
struct perf_evlist *evlist;
struct evlist *evlist;
int wrapped_cnt;
bool *wrapped;
bool snapshot_mode;
@@ -55,7 +58,7 @@ static const char *metadata_etmv4_ro[CS_ETMV4_PRIV_MAX] = {
static bool cs_etm_is_etmv4(struct auxtrace_record *itr, int cpu);
static int cs_etm_set_context_id(struct auxtrace_record *itr,
struct perf_evsel *evsel, int cpu)
struct evsel *evsel, int cpu)
{
struct cs_etm_recording *ptr;
struct perf_pmu *cs_etm_pmu;
@@ -95,7 +98,7 @@ static int cs_etm_set_context_id(struct auxtrace_record *itr,
}
/* All good, let the kernel know */
evsel->attr.config |= (1 << ETM_OPT_CTXTID);
evsel->core.attr.config |= (1 << ETM_OPT_CTXTID);
err = 0;
out:
@@ -104,7 +107,7 @@ out:
}
static int cs_etm_set_timestamp(struct auxtrace_record *itr,
struct perf_evsel *evsel, int cpu)
struct evsel *evsel, int cpu)
{
struct cs_etm_recording *ptr;
struct perf_pmu *cs_etm_pmu;
@@ -144,7 +147,7 @@ static int cs_etm_set_timestamp(struct auxtrace_record *itr,
}
/* All good, let the kernel know */
evsel->attr.config |= (1 << ETM_OPT_TS);
evsel->core.attr.config |= (1 << ETM_OPT_TS);
err = 0;
out:
@@ -152,11 +155,11 @@ out:
}
static int cs_etm_set_option(struct auxtrace_record *itr,
struct perf_evsel *evsel, u32 option)
struct evsel *evsel, u32 option)
{
int i, err = -EINVAL;
struct cpu_map *event_cpus = evsel->evlist->cpus;
struct cpu_map *online_cpus = cpu_map__new(NULL);
struct perf_cpu_map *event_cpus = evsel->evlist->core.cpus;
struct perf_cpu_map *online_cpus = perf_cpu_map__new(NULL);
/* Set option of each CPU we have */
for (i = 0; i < cpu__max_cpu(); i++) {
@@ -181,7 +184,7 @@ static int cs_etm_set_option(struct auxtrace_record *itr,
err = 0;
out:
cpu_map__put(online_cpus);
perf_cpu_map__put(online_cpus);
return err;
}
@@ -208,14 +211,14 @@ static int cs_etm_parse_snapshot_options(struct auxtrace_record *itr,
}
static int cs_etm_set_sink_attr(struct perf_pmu *pmu,
struct perf_evsel *evsel)
struct evsel *evsel)
{
char msg[BUFSIZ], path[PATH_MAX], *sink;
struct perf_evsel_config_term *term;
int ret = -EINVAL;
u32 hash;
if (evsel->attr.config2 & GENMASK(31, 0))
if (evsel->core.attr.config2 & GENMASK(31, 0))
return 0;
list_for_each_entry(term, &evsel->config_terms, list) {
@@ -233,7 +236,7 @@ static int cs_etm_set_sink_attr(struct perf_pmu *pmu,
return ret;
}
evsel->attr.config2 |= hash;
evsel->core.attr.config2 |= hash;
return 0;
}
@@ -245,16 +248,16 @@ static int cs_etm_set_sink_attr(struct perf_pmu *pmu,
}
static int cs_etm_recording_options(struct auxtrace_record *itr,
struct perf_evlist *evlist,
struct evlist *evlist,
struct record_opts *opts)
{
int ret;
struct cs_etm_recording *ptr =
container_of(itr, struct cs_etm_recording, itr);
struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
struct perf_evsel *evsel, *cs_etm_evsel = NULL;
struct cpu_map *cpus = evlist->cpus;
bool privileged = (geteuid() == 0 || perf_event_paranoid() < 0);
struct evsel *evsel, *cs_etm_evsel = NULL;
struct perf_cpu_map *cpus = evlist->core.cpus;
bool privileged = perf_event_paranoid_check(-1);
int err = 0;
ptr->evlist = evlist;
@@ -264,14 +267,14 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
opts->record_switch_events = true;
evlist__for_each_entry(evlist, evsel) {
if (evsel->attr.type == cs_etm_pmu->type) {
if (evsel->core.attr.type == cs_etm_pmu->type) {
if (cs_etm_evsel) {
pr_err("There may be only one %s event\n",
CORESIGHT_ETM_PMU_NAME);
return -EINVAL;
}
evsel->attr.freq = 0;
evsel->attr.sample_period = 1;
evsel->core.attr.freq = 0;
evsel->core.attr.sample_period = 1;
cs_etm_evsel = evsel;
opts->full_auxtrace = true;
}
@@ -396,7 +399,7 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
* AUX event. We also need the contextID in order to be notified
* when a context switch happened.
*/
if (!cpu_map__empty(cpus)) {
if (!perf_cpu_map__empty(cpus)) {
perf_evsel__set_sample_bit(cs_etm_evsel, CPU);
err = cs_etm_set_option(itr, cs_etm_evsel,
@@ -407,7 +410,7 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
/* Add dummy event to keep tracking */
if (opts->full_auxtrace) {
struct perf_evsel *tracking_evsel;
struct evsel *tracking_evsel;
err = parse_events(evlist, "dummy:u", NULL);
if (err)
@@ -416,11 +419,11 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
tracking_evsel = perf_evlist__last(evlist);
perf_evlist__set_tracking_event(evlist, tracking_evsel);
tracking_evsel->attr.freq = 0;
tracking_evsel->attr.sample_period = 1;
tracking_evsel->core.attr.freq = 0;
tracking_evsel->core.attr.sample_period = 1;
/* In per-cpu case, always need the time of mmap events etc */
if (!cpu_map__empty(cpus))
if (!perf_cpu_map__empty(cpus))
perf_evsel__set_sample_bit(tracking_evsel, TIME);
}
@@ -434,11 +437,11 @@ static u64 cs_etm_get_config(struct auxtrace_record *itr)
struct cs_etm_recording *ptr =
container_of(itr, struct cs_etm_recording, itr);
struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
struct perf_evlist *evlist = ptr->evlist;
struct perf_evsel *evsel;
struct evlist *evlist = ptr->evlist;
struct evsel *evsel;
evlist__for_each_entry(evlist, evsel) {
if (evsel->attr.type == cs_etm_pmu->type) {
if (evsel->core.attr.type == cs_etm_pmu->type) {
/*
* Variable perf_event_attr::config is assigned to
* ETMv3/PTM. The bit fields have been made to match
@@ -447,7 +450,7 @@ static u64 cs_etm_get_config(struct auxtrace_record *itr)
* drivers/hwtracing/coresight/coresight-perf.c for
* details.
*/
config = evsel->attr.config;
config = evsel->core.attr.config;
break;
}
}
@@ -485,15 +488,15 @@ static u64 cs_etmv4_get_config(struct auxtrace_record *itr)
static size_t
cs_etm_info_priv_size(struct auxtrace_record *itr __maybe_unused,
struct perf_evlist *evlist __maybe_unused)
struct evlist *evlist __maybe_unused)
{
int i;
int etmv3 = 0, etmv4 = 0;
struct cpu_map *event_cpus = evlist->cpus;
struct cpu_map *online_cpus = cpu_map__new(NULL);
struct perf_cpu_map *event_cpus = evlist->core.cpus;
struct perf_cpu_map *online_cpus = perf_cpu_map__new(NULL);
/* cpu map is not empty, we have specific CPUs to work with */
if (!cpu_map__empty(event_cpus)) {
if (!perf_cpu_map__empty(event_cpus)) {
for (i = 0; i < cpu__max_cpu(); i++) {
if (!cpu_map__has(event_cpus, i) ||
!cpu_map__has(online_cpus, i))
@@ -517,7 +520,7 @@ cs_etm_info_priv_size(struct auxtrace_record *itr __maybe_unused,
}
}
cpu_map__put(online_cpus);
perf_cpu_map__put(online_cpus);
return (CS_ETM_HEADER_SIZE +
(etmv4 * CS_ETMV4_PRIV_SIZE) +
@@ -564,7 +567,7 @@ static int cs_etm_get_ro(struct perf_pmu *pmu, int cpu, const char *path)
static void cs_etm_get_metadata(int cpu, u32 *offset,
struct auxtrace_record *itr,
struct auxtrace_info_event *info)
struct perf_record_auxtrace_info *info)
{
u32 increment;
u64 magic;
@@ -629,15 +632,15 @@ static void cs_etm_get_metadata(int cpu, u32 *offset,
static int cs_etm_info_fill(struct auxtrace_record *itr,
struct perf_session *session,
struct auxtrace_info_event *info,
struct perf_record_auxtrace_info *info,
size_t priv_size)
{
int i;
u32 offset;
u64 nr_cpu, type;
struct cpu_map *cpu_map;
struct cpu_map *event_cpus = session->evlist->cpus;
struct cpu_map *online_cpus = cpu_map__new(NULL);
struct perf_cpu_map *cpu_map;
struct perf_cpu_map *event_cpus = session->evlist->core.cpus;
struct perf_cpu_map *online_cpus = perf_cpu_map__new(NULL);
struct cs_etm_recording *ptr =
container_of(itr, struct cs_etm_recording, itr);
struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
@@ -649,11 +652,11 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
return -EINVAL;
/* If the cpu_map is empty all online CPUs are involved */
if (cpu_map__empty(event_cpus)) {
if (perf_cpu_map__empty(event_cpus)) {
cpu_map = online_cpus;
} else {
/* Make sure all specified CPUs are online */
for (i = 0; i < cpu_map__nr(event_cpus); i++) {
for (i = 0; i < perf_cpu_map__nr(event_cpus); i++) {
if (cpu_map__has(event_cpus, i) &&
!cpu_map__has(online_cpus, i))
return -EINVAL;
@@ -662,7 +665,7 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
cpu_map = event_cpus;
}
nr_cpu = cpu_map__nr(cpu_map);
nr_cpu = perf_cpu_map__nr(cpu_map);
/* Get PMU type as dynamically assigned by the core */
type = cs_etm_pmu->type;
@@ -679,7 +682,7 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
if (cpu_map__has(cpu_map, i))
cs_etm_get_metadata(i, &offset, itr, info);
cpu_map__put(online_cpus);
perf_cpu_map__put(online_cpus);
return 0;
}
@@ -817,11 +820,11 @@ static int cs_etm_snapshot_start(struct auxtrace_record *itr)
{
struct cs_etm_recording *ptr =
container_of(itr, struct cs_etm_recording, itr);
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(ptr->evlist, evsel) {
if (evsel->attr.type == ptr->cs_etm_pmu->type)
return perf_evsel__disable(evsel);
if (evsel->core.attr.type == ptr->cs_etm_pmu->type)
return evsel__disable(evsel);
}
return -EINVAL;
}
@@ -830,11 +833,11 @@ static int cs_etm_snapshot_finish(struct auxtrace_record *itr)
{
struct cs_etm_recording *ptr =
container_of(itr, struct cs_etm_recording, itr);
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(ptr->evlist, evsel) {
if (evsel->attr.type == ptr->cs_etm_pmu->type)
return perf_evsel__enable(evsel);
if (evsel->core.attr.type == ptr->cs_etm_pmu->type)
return evsel__enable(evsel);
}
return -EINVAL;
}
@@ -858,10 +861,10 @@ static int cs_etm_read_finish(struct auxtrace_record *itr, int idx)
{
struct cs_etm_recording *ptr =
container_of(itr, struct cs_etm_recording, itr);
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(ptr->evlist, evsel) {
if (evsel->attr.type == ptr->cs_etm_pmu->type)
if (evsel->core.attr.type == ptr->cs_etm_pmu->type)
return perf_evlist__enable_event_idx(ptr->evlist,
evsel, idx);
}


@@ -2,6 +2,7 @@
#include <linux/compiler.h>
#include <sys/types.h>
#include <regex.h>
#include <stdlib.h>
struct arm64_annotate {
regex_t call_insn,


@@ -12,6 +12,7 @@
#include <time.h>
#include "../../util/cpumap.h"
#include "../../util/event.h"
#include "../../util/evsel.h"
#include "../../util/evlist.h"
#include "../../util/session.h"
@@ -19,6 +20,7 @@
#include "../../util/pmu.h"
#include "../../util/debug.h"
#include "../../util/auxtrace.h"
#include "../../util/record.h"
#include "../../util/arm-spe.h"
#define KiB(x) ((x) * 1024)
@@ -27,19 +29,19 @@
struct arm_spe_recording {
struct auxtrace_record itr;
struct perf_pmu *arm_spe_pmu;
struct perf_evlist *evlist;
struct evlist *evlist;
};
static size_t
arm_spe_info_priv_size(struct auxtrace_record *itr __maybe_unused,
struct perf_evlist *evlist __maybe_unused)
struct evlist *evlist __maybe_unused)
{
return ARM_SPE_AUXTRACE_PRIV_SIZE;
}
static int arm_spe_info_fill(struct auxtrace_record *itr,
struct perf_session *session,
struct auxtrace_info_event *auxtrace_info,
struct perf_record_auxtrace_info *auxtrace_info,
size_t priv_size)
{
struct arm_spe_recording *sper =
@@ -59,27 +61,27 @@ static int arm_spe_info_fill(struct auxtrace_record *itr,
}
static int arm_spe_recording_options(struct auxtrace_record *itr,
struct perf_evlist *evlist,
struct evlist *evlist,
struct record_opts *opts)
{
struct arm_spe_recording *sper =
container_of(itr, struct arm_spe_recording, itr);
struct perf_pmu *arm_spe_pmu = sper->arm_spe_pmu;
struct perf_evsel *evsel, *arm_spe_evsel = NULL;
bool privileged = geteuid() == 0 || perf_event_paranoid() < 0;
struct perf_evsel *tracking_evsel;
struct evsel *evsel, *arm_spe_evsel = NULL;
bool privileged = perf_event_paranoid_check(-1);
struct evsel *tracking_evsel;
int err;
sper->evlist = evlist;
evlist__for_each_entry(evlist, evsel) {
if (evsel->attr.type == arm_spe_pmu->type) {
if (evsel->core.attr.type == arm_spe_pmu->type) {
if (arm_spe_evsel) {
pr_err("There may be only one " ARM_SPE_PMU_NAME "x event\n");
return -EINVAL;
}
evsel->attr.freq = 0;
evsel->attr.sample_period = 1;
evsel->core.attr.freq = 0;
evsel->core.attr.sample_period = 1;
arm_spe_evsel = evsel;
opts->full_auxtrace = true;
}
@@ -130,8 +132,8 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
tracking_evsel = perf_evlist__last(evlist);
perf_evlist__set_tracking_event(evlist, tracking_evsel);
tracking_evsel->attr.freq = 0;
tracking_evsel->attr.sample_period = 1;
tracking_evsel->core.attr.freq = 0;
tracking_evsel->core.attr.sample_period = 1;
perf_evsel__set_sample_bit(tracking_evsel, TIME);
perf_evsel__set_sample_bit(tracking_evsel, CPU);
perf_evsel__reset_sample_bit(tracking_evsel, BRANCH_STACK);
@@ -160,10 +162,10 @@ static int arm_spe_read_finish(struct auxtrace_record *itr, int idx)
{
struct arm_spe_recording *sper =
container_of(itr, struct arm_spe_recording, itr);
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(sper->evlist, evsel) {
if (evsel->attr.type == sper->arm_spe_pmu->type)
if (evsel->core.attr.type == sper->arm_spe_pmu->type)
return perf_evlist__enable_event_idx(sper->evlist,
evsel, idx);
}


@@ -1,6 +1,7 @@
#include <stdio.h>
#include <stdlib.h>
#include <api/fs/fs.h>
#include "debug.h"
#include "header.h"
#define MIDR "/regs/identification/midr_el1"
@@ -16,7 +17,7 @@ char *get_cpuid_str(struct perf_pmu *pmu)
const char *sysfs = sysfs__mountpoint();
int cpu;
u64 midr = 0;
struct cpu_map *cpus;
struct perf_cpu_map *cpus;
FILE *file;
if (!sysfs || !pmu || !pmu->cpus)
@@ -27,7 +28,7 @@ char *get_cpuid_str(struct perf_pmu *pmu)
return NULL;
/* read midr from list of cpus mapped to this pmu */
cpus = cpu_map__get(pmu->cpus);
cpus = perf_cpu_map__get(pmu->cpus);
for (cpu = 0; cpu < cpus->nr; cpu++) {
scnprintf(path, PATH_MAX, "%s/devices/system/cpu/cpu%d"MIDR,
sysfs, cpus->map[cpu]);
@@ -60,6 +61,6 @@ char *get_cpuid_str(struct perf_pmu *pmu)
buf = NULL;
}
cpu_map__put(cpus);
perf_cpu_map__put(cpus);
return buf;
}


@@ -4,11 +4,9 @@
* Copyright (C) 2015 Naveen N. Rao, IBM Corporation
*/
#include "debug.h"
#include "symbol.h"
#include "map.h"
#include "probe-event.h"
#include "probe-file.h"
#include "symbol.h" // for the elf__needs_adjust_symbols() prototype
#include <stdbool.h>
#include <gelf.h>
#ifdef HAVE_LIBELF_SUPPORT
bool elf__needs_adjust_symbols(GElf_Ehdr ehdr)


@@ -1,6 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include "common.h"
#include "../util/env.h"
#include "../util/debug.h"


@@ -2,7 +2,9 @@
#ifndef ARCH_PERF_COMMON_H
#define ARCH_PERF_COMMON_H
#include "../util/env.h"
#include <stdbool.h>
struct perf_env;
int perf_env__lookup_objdump(struct perf_env *env, const char **path);
bool perf_env__single_address_space(struct perf_env *env);


@@ -20,7 +20,9 @@
10 common unlink sys_unlink
11 nospu execve sys_execve compat_sys_execve
12 common chdir sys_chdir
13 common time sys_time compat_sys_time
13 32 time sys_time32
13 64 time sys_time
13 spu time sys_time
14 common mknod sys_mknod
15 common chmod sys_chmod
16 common lchown sys_lchown
@@ -36,14 +38,17 @@
22 spu umount sys_ni_syscall
23 common setuid sys_setuid
24 common getuid sys_getuid
25 common stime sys_stime compat_sys_stime
25 32 stime sys_stime32
25 64 stime sys_stime
25 spu stime sys_stime
26 nospu ptrace sys_ptrace compat_sys_ptrace
27 common alarm sys_alarm
28 32 oldfstat sys_fstat sys_ni_syscall
28 64 oldfstat sys_ni_syscall
28 spu oldfstat sys_ni_syscall
29 nospu pause sys_pause
30 nospu utime sys_utime compat_sys_utime
30 32 utime sys_utime32
30 64 utime sys_utime
31 common stty sys_ni_syscall
32 common gtty sys_ni_syscall
33 common access sys_access
@@ -157,7 +162,9 @@
121 common setdomainname sys_setdomainname
122 common uname sys_newuname
123 common modify_ldt sys_ni_syscall
124 common adjtimex sys_adjtimex compat_sys_adjtimex
124 32 adjtimex sys_adjtimex_time32
124 64 adjtimex sys_adjtimex
124 spu adjtimex sys_adjtimex
125 common mprotect sys_mprotect
126 32 sigprocmask sys_sigprocmask compat_sys_sigprocmask
126 64 sigprocmask sys_ni_syscall
@@ -198,8 +205,12 @@
158 common sched_yield sys_sched_yield
159 common sched_get_priority_max sys_sched_get_priority_max
160 common sched_get_priority_min sys_sched_get_priority_min
161 common sched_rr_get_interval sys_sched_rr_get_interval compat_sys_sched_rr_get_interval
162 common nanosleep sys_nanosleep compat_sys_nanosleep
161 32 sched_rr_get_interval sys_sched_rr_get_interval_time32
161 64 sched_rr_get_interval sys_sched_rr_get_interval
161 spu sched_rr_get_interval sys_sched_rr_get_interval
162 32 nanosleep sys_nanosleep_time32
162 64 nanosleep sys_nanosleep
162 spu nanosleep sys_nanosleep
163 common mremap sys_mremap
164 common setresuid sys_setresuid
165 common getresuid sys_getresuid
@@ -213,7 +224,8 @@
173 nospu rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction
174 nospu rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask
175 nospu rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending
176 nospu rt_sigtimedwait sys_rt_sigtimedwait compat_sys_rt_sigtimedwait
176 32 rt_sigtimedwait sys_rt_sigtimedwait_time32 compat_sys_rt_sigtimedwait_time32
176 64 rt_sigtimedwait sys_rt_sigtimedwait
177 nospu rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo
178 nospu rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend
179 common pread64 sys_pread64 compat_sys_pread64
@@ -260,7 +272,9 @@
218 common removexattr sys_removexattr
219 common lremovexattr sys_lremovexattr
220 common fremovexattr sys_fremovexattr
221 common futex sys_futex compat_sys_futex
221 32 futex sys_futex_time32
221 64 futex sys_futex
221 spu futex sys_futex
222 common sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity
223 common sched_getaffinity sys_sched_getaffinity compat_sys_sched_getaffinity
# 224 unused
@@ -268,7 +282,9 @@
226 32 sendfile64 sys_sendfile64 compat_sys_sendfile64
227 common io_setup sys_io_setup compat_sys_io_setup
228 common io_destroy sys_io_destroy
229 common io_getevents sys_io_getevents compat_sys_io_getevents
229 32 io_getevents sys_io_getevents_time32
229 64 io_getevents sys_io_getevents
229 spu io_getevents sys_io_getevents
230 common io_submit sys_io_submit compat_sys_io_submit
231 common io_cancel sys_io_cancel
232 nospu set_tid_address sys_set_tid_address
@@ -280,19 +296,33 @@
238 common epoll_wait sys_epoll_wait
239 common remap_file_pages sys_remap_file_pages
240 common timer_create sys_timer_create compat_sys_timer_create
241 common timer_settime sys_timer_settime compat_sys_timer_settime
242 common timer_gettime sys_timer_gettime compat_sys_timer_gettime
241 32 timer_settime sys_timer_settime32
241 64 timer_settime sys_timer_settime
241 spu timer_settime sys_timer_settime
242 32 timer_gettime sys_timer_gettime32
242 64 timer_gettime sys_timer_gettime
242 spu timer_gettime sys_timer_gettime
243 common timer_getoverrun sys_timer_getoverrun
244 common timer_delete sys_timer_delete
245 common clock_settime sys_clock_settime compat_sys_clock_settime
246 common clock_gettime sys_clock_gettime compat_sys_clock_gettime
247 common clock_getres sys_clock_getres compat_sys_clock_getres
248 common clock_nanosleep sys_clock_nanosleep compat_sys_clock_nanosleep
245 32 clock_settime sys_clock_settime32
245 64 clock_settime sys_clock_settime
245 spu clock_settime sys_clock_settime
246 32 clock_gettime sys_clock_gettime32
246 64 clock_gettime sys_clock_gettime
246 spu clock_gettime sys_clock_gettime
247 32 clock_getres sys_clock_getres_time32
247 64 clock_getres sys_clock_getres
247 spu clock_getres sys_clock_getres
248 32 clock_nanosleep sys_clock_nanosleep_time32
248 64 clock_nanosleep sys_clock_nanosleep
248 spu clock_nanosleep sys_clock_nanosleep
249 32 swapcontext ppc_swapcontext ppc32_swapcontext
249 64 swapcontext ppc64_swapcontext
249 spu swapcontext sys_ni_syscall
250 common tgkill sys_tgkill
251 common utimes sys_utimes compat_sys_utimes
251 32 utimes sys_utimes_time32
251 64 utimes sys_utimes
251 spu utimes sys_utimes
252 common statfs64 sys_statfs64 compat_sys_statfs64
253 common fstatfs64 sys_fstatfs64 compat_sys_fstatfs64
254 32 fadvise64_64 ppc_fadvise64_64
@@ -308,8 +338,10 @@
261 nospu set_mempolicy sys_set_mempolicy compat_sys_set_mempolicy
262 nospu mq_open sys_mq_open compat_sys_mq_open
263 nospu mq_unlink sys_mq_unlink
264 nospu mq_timedsend sys_mq_timedsend compat_sys_mq_timedsend
265 nospu mq_timedreceive sys_mq_timedreceive compat_sys_mq_timedreceive
264 32 mq_timedsend sys_mq_timedsend_time32
264 64 mq_timedsend sys_mq_timedsend
265 32 mq_timedreceive sys_mq_timedreceive_time32
265 64 mq_timedreceive sys_mq_timedreceive
266 nospu mq_notify sys_mq_notify compat_sys_mq_notify
267 nospu mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr
268 nospu kexec_load sys_kexec_load compat_sys_kexec_load
@@ -324,8 +356,10 @@
277 nospu inotify_rm_watch sys_inotify_rm_watch
278 nospu spu_run sys_spu_run
279 nospu spu_create sys_spu_create
280 nospu pselect6 sys_pselect6 compat_sys_pselect6
281 nospu ppoll sys_ppoll compat_sys_ppoll
280 32 pselect6 sys_pselect6_time32 compat_sys_pselect6_time32
280 64 pselect6 sys_pselect6
281 32 ppoll sys_ppoll_time32 compat_sys_ppoll_time32
281 64 ppoll sys_ppoll
282 common unshare sys_unshare
283 common splice sys_splice
284 common tee sys_tee
@@ -334,7 +368,9 @@
287 common mkdirat sys_mkdirat
288 common mknodat sys_mknodat
289 common fchownat sys_fchownat
290 common futimesat sys_futimesat compat_sys_futimesat
290 32 futimesat sys_futimesat_time32
290 64 futimesat sys_futimesat
290 spu utimesat sys_futimesat
291 32 fstatat64 sys_fstatat64
291 64 newfstatat sys_newfstatat
291 spu newfstatat sys_newfstatat
@@ -350,15 +386,21 @@
301 common move_pages sys_move_pages compat_sys_move_pages
302 common getcpu sys_getcpu
303 nospu epoll_pwait sys_epoll_pwait compat_sys_epoll_pwait
304 common utimensat sys_utimensat compat_sys_utimensat
304 32 utimensat sys_utimensat_time32
304 64 utimensat sys_utimensat
304 spu utimensat sys_utimensat
305 common signalfd sys_signalfd compat_sys_signalfd
306 common timerfd_create sys_timerfd_create
307 common eventfd sys_eventfd
308 common sync_file_range2 sys_sync_file_range2 compat_sys_sync_file_range2
309 nospu fallocate sys_fallocate compat_sys_fallocate
310 nospu subpage_prot sys_subpage_prot
311 common timerfd_settime sys_timerfd_settime compat_sys_timerfd_settime
312 common timerfd_gettime sys_timerfd_gettime compat_sys_timerfd_gettime
311 32 timerfd_settime sys_timerfd_settime32
311 64 timerfd_settime sys_timerfd_settime
311 spu timerfd_settime sys_timerfd_settime
312 32 timerfd_gettime sys_timerfd_gettime32
312 64 timerfd_gettime sys_timerfd_gettime
312 spu timerfd_gettime sys_timerfd_gettime
313 common signalfd4 sys_signalfd4 compat_sys_signalfd4
314 common eventfd2 sys_eventfd2
315 common epoll_create1 sys_epoll_create1
@@ -389,11 +431,15 @@
340 common getsockopt sys_getsockopt compat_sys_getsockopt
341 common sendmsg sys_sendmsg compat_sys_sendmsg
342 common recvmsg sys_recvmsg compat_sys_recvmsg
343 common recvmmsg sys_recvmmsg compat_sys_recvmmsg
343 32 recvmmsg sys_recvmmsg_time32 compat_sys_recvmmsg_time32
343 64 recvmmsg sys_recvmmsg
343 spu recvmmsg sys_recvmmsg
344 common accept4 sys_accept4
345 common name_to_handle_at sys_name_to_handle_at
346 common open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at
347 common clock_adjtime sys_clock_adjtime compat_sys_clock_adjtime
347 32 clock_adjtime sys_clock_adjtime32
347 64 clock_adjtime sys_clock_adjtime
347 spu clock_adjtime sys_clock_adjtime
348 common syncfs sys_syncfs
349 common sendmmsg sys_sendmmsg compat_sys_sendmmsg
350 common setns sys_setns
@@ -414,6 +460,7 @@
363 spu switch_endian sys_ni_syscall
364 common userfaultfd sys_userfaultfd
365 common membarrier sys_membarrier
# 366-377 originally left for IPC, now unused
378 nospu mlock2 sys_mlock2
379 nospu copy_file_range sys_copy_file_range
380 common preadv2 sys_preadv2 compat_sys_preadv2
@@ -424,4 +471,49 @@
385 nospu pkey_free sys_pkey_free
386 nospu pkey_mprotect sys_pkey_mprotect
387 nospu rseq sys_rseq
388 nospu io_pgetevents sys_io_pgetevents compat_sys_io_pgetevents
388 32 io_pgetevents sys_io_pgetevents_time32 compat_sys_io_pgetevents
388 64 io_pgetevents sys_io_pgetevents
# room for arch specific syscalls
392 64 semtimedop sys_semtimedop
393 common semget sys_semget
394 common semctl sys_semctl compat_sys_semctl
395 common shmget sys_shmget
396 common shmctl sys_shmctl compat_sys_shmctl
397 common shmat sys_shmat compat_sys_shmat
398 common shmdt sys_shmdt
399 common msgget sys_msgget
400 common msgsnd sys_msgsnd compat_sys_msgsnd
401 common msgrcv sys_msgrcv compat_sys_msgrcv
402 common msgctl sys_msgctl compat_sys_msgctl
403 32 clock_gettime64 sys_clock_gettime sys_clock_gettime
404 32 clock_settime64 sys_clock_settime sys_clock_settime
405 32 clock_adjtime64 sys_clock_adjtime sys_clock_adjtime
406 32 clock_getres_time64 sys_clock_getres sys_clock_getres
407 32 clock_nanosleep_time64 sys_clock_nanosleep sys_clock_nanosleep
408 32 timer_gettime64 sys_timer_gettime sys_timer_gettime
409 32 timer_settime64 sys_timer_settime sys_timer_settime
410 32 timerfd_gettime64 sys_timerfd_gettime sys_timerfd_gettime
411 32 timerfd_settime64 sys_timerfd_settime sys_timerfd_settime
412 32 utimensat_time64 sys_utimensat sys_utimensat
413 32 pselect6_time64 sys_pselect6 compat_sys_pselect6_time64
414 32 ppoll_time64 sys_ppoll compat_sys_ppoll_time64
416 32 io_pgetevents_time64 sys_io_pgetevents sys_io_pgetevents
417 32 recvmmsg_time64 sys_recvmmsg compat_sys_recvmmsg_time64
418 32 mq_timedsend_time64 sys_mq_timedsend sys_mq_timedsend
419 32 mq_timedreceive_time64 sys_mq_timedreceive sys_mq_timedreceive
420 32 semtimedop_time64 sys_semtimedop sys_semtimedop
421 32 rt_sigtimedwait_time64 sys_rt_sigtimedwait compat_sys_rt_sigtimedwait_time64
422 32 futex_time64 sys_futex sys_futex
423 32 sched_rr_get_interval_time64 sys_sched_rr_get_interval sys_sched_rr_get_interval
424 common pidfd_send_signal sys_pidfd_send_signal
425 common io_uring_setup sys_io_uring_setup
426 common io_uring_enter sys_io_uring_enter
427 common io_uring_register sys_io_uring_register
428 common open_tree sys_open_tree
429 common move_mount sys_move_mount
430 common fsopen sys_fsopen
431 common fsconfig sys_fsconfig
432 common fsmount sys_fsmount
433 common fspick sys_fspick
434 common pidfd_open sys_pidfd_open
435 nospu clone3 ppc_clone3


@@ -32,7 +32,7 @@ const char *ppc_book3s_hv_kvm_tp[] = {
const char *kvm_events_tp[NR_TPS + 1];
const char *kvm_exit_reason;
static void hcall_event_get_key(struct perf_evsel *evsel,
static void hcall_event_get_key(struct evsel *evsel,
struct perf_sample *sample,
struct event_key *key)
{
@@ -55,14 +55,14 @@ static const char *get_hcall_exit_reason(u64 exit_code)
return "UNKNOWN";
}
static bool hcall_event_end(struct perf_evsel *evsel,
static bool hcall_event_end(struct evsel *evsel,
struct perf_sample *sample __maybe_unused,
struct event_key *key __maybe_unused)
{
return (!strcmp(evsel->name, kvm_events_tp[3]));
}
static bool hcall_event_begin(struct perf_evsel *evsel,
static bool hcall_event_begin(struct evsel *evsel,
struct perf_sample *sample, struct event_key *key)
{
if (!strcmp(evsel->name, kvm_events_tp[2])) {
@@ -106,7 +106,7 @@ const char * const kvm_skip_events[] = {
};
static int is_tracepoint_available(const char *str, struct perf_evlist *evlist)
static int is_tracepoint_available(const char *str, struct evlist *evlist)
{
struct parse_events_error err;
int ret;
@@ -119,7 +119,7 @@ static int is_tracepoint_available(const char *str, struct perf_evlist *evlist)
}
static int ppc__setup_book3s_hv(struct perf_kvm_stat *kvm,
struct perf_evlist *evlist)
struct evlist *evlist)
{
const char **events_ptr;
int i, nr_tp = 0, err = -1;
@@ -146,7 +146,7 @@ static int ppc__setup_book3s_hv(struct perf_kvm_stat *kvm,
/* Wrapper to setup kvm tracepoints */
static int ppc__setup_kvm_tp(struct perf_kvm_stat *kvm)
{
struct perf_evlist *evlist = perf_evlist__new();
struct evlist *evlist = evlist__new();
if (evlist == NULL)
return -ENOMEM;


@@ -1,4 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
#include "map_symbol.h"
#include "mem-events.h"
/* PowerPC does not support 'ldlat' parameter. */


@@ -4,7 +4,6 @@
#include <regex.h>
#include <linux/zalloc.h>
#include "../../perf.h"
#include "../../util/perf_regs.h"
#include "../../util/debug.h"


@@ -5,6 +5,7 @@
*/
#include "debug.h"
#include "dso.h"
#include "symbol.h"
#include "map.h"
#include "probe-event.h"


@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include <elfutils/libdwfl.h>
#include <linux/kernel.h>
#include "../../util/unwind-libdw.h"
#include "../../util/perf_regs.h"
#include "../../util/event.h"


@@ -8,6 +8,7 @@
#include "../../util/evlist.h"
#include "../../util/auxtrace.h"
#include "../../util/evsel.h"
#include "../../util/record.h"
#define PERF_EVENT_CPUM_SF 0xB0000 /* Event: Basic-sampling */
#define PERF_EVENT_CPUM_SF_DIAG 0xBD000 /* Event: Combined-sampling */
@@ -20,7 +21,7 @@ static void cpumsf_free(struct auxtrace_record *itr)
}
static size_t cpumsf_info_priv_size(struct auxtrace_record *itr __maybe_unused,
struct perf_evlist *evlist __maybe_unused)
struct evlist *evlist __maybe_unused)
{
return 0;
}
@@ -28,7 +29,7 @@ static size_t cpumsf_info_priv_size(struct auxtrace_record *itr __maybe_unused,
static int
cpumsf_info_fill(struct auxtrace_record *itr __maybe_unused,
struct perf_session *session __maybe_unused,
struct auxtrace_info_event *auxtrace_info __maybe_unused,
struct perf_record_auxtrace_info *auxtrace_info __maybe_unused,
size_t priv_size __maybe_unused)
{
auxtrace_info->type = PERF_AUXTRACE_S390_CPUMSF;
@@ -43,7 +44,7 @@ cpumsf_reference(struct auxtrace_record *itr __maybe_unused)
static int
cpumsf_recording_options(struct auxtrace_record *ar __maybe_unused,
struct perf_evlist *evlist __maybe_unused,
struct evlist *evlist __maybe_unused,
struct record_opts *opts)
{
unsigned int factor = 1;
@@ -82,19 +83,19 @@ cpumsf_parse_snapshot_options(struct auxtrace_record *itr __maybe_unused,
* auxtrace_record__init is called when perf record
* check if the event really need auxtrace
*/
struct auxtrace_record *auxtrace_record__init(struct perf_evlist *evlist,
struct auxtrace_record *auxtrace_record__init(struct evlist *evlist,
int *err)
{
struct auxtrace_record *aux;
struct perf_evsel *pos;
struct evsel *pos;
int diagnose = 0;
*err = 0;
if (evlist->nr_entries == 0)
if (evlist->core.nr_entries == 0)
return NULL;
evlist__for_each_entry(evlist, pos) {
if (pos->attr.config == PERF_EVENT_CPUM_SF_DIAG) {
if (pos->core.attr.config == PERF_EVENT_CPUM_SF_DIAG) {
diagnose = 1;
break;
}


@@ -7,6 +7,7 @@
*/
#include <errno.h>
#include <string.h>
#include "../../util/kvm-stat.h"
#include "../../util/evsel.h"
#include <asm/sie.h>
@@ -23,7 +24,7 @@ const char *kvm_exit_reason = "icptcode";
const char *kvm_entry_trace = "kvm:kvm_s390_sie_enter";
const char *kvm_exit_trace = "kvm:kvm_s390_sie_exit";
static void event_icpt_insn_get_key(struct perf_evsel *evsel,
static void event_icpt_insn_get_key(struct evsel *evsel,
struct perf_sample *sample,
struct event_key *key)
{
@@ -34,7 +35,7 @@ static void event_icpt_insn_get_key(struct perf_evsel *evsel,
key->exit_reasons = sie_icpt_insn_codes;
}
static void event_sigp_get_key(struct perf_evsel *evsel,
static void event_sigp_get_key(struct evsel *evsel,
struct perf_sample *sample,
struct event_key *key)
{
@@ -42,7 +43,7 @@ static void event_sigp_get_key(struct perf_evsel *evsel,
key->exit_reasons = sie_sigp_order_codes;
}
static void event_diag_get_key(struct perf_evsel *evsel,
static void event_diag_get_key(struct evsel *evsel,
struct perf_sample *sample,
struct event_key *key)
{
@@ -50,7 +51,7 @@ static void event_diag_get_key(struct perf_evsel *evsel,
key->exit_reasons = sie_diagnose_codes;
}
static void event_icpt_prog_get_key(struct perf_evsel *evsel,
static void event_icpt_prog_get_key(struct evsel *evsel,
struct perf_sample *sample,
struct event_key *key)
{


@@ -7,6 +7,7 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ptrace.h>
#include <asm/ptrace.h>
#include <errno.h>


@@ -1,11 +1,12 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/types.h>
#include "../../../../arch/x86/include/asm/insn.h"
#include <string.h>
#include "debug.h"
#include "tests/tests.h"
#include "arch-tests.h"
#include "intel-pt-decoder/insn.h"
#include "intel-pt-decoder/intel-pt-insn-decoder.h"
struct test_data {


@@ -1,6 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
#include "tests/tests.h"
#include "perf.h"
#include "cloexec.h"
#include "debug.h"
#include "evlist.h"
@@ -40,8 +39,8 @@ static pid_t spawn(void)
*/
int test__intel_cqm_count_nmi_context(struct test *test __maybe_unused, int subtest __maybe_unused)
{
struct perf_evlist *evlist = NULL;
struct perf_evsel *evsel = NULL;
struct evlist *evlist = NULL;
struct evsel *evsel = NULL;
struct perf_event_attr pe;
int i, fd[2], flag, ret;
size_t mmap_len;
@@ -51,7 +50,7 @@ int test__intel_cqm_count_nmi_context(struct test *test __maybe_unused, int subt
flag = perf_event_open_cloexec_flag();
evlist = perf_evlist__new();
evlist = evlist__new();
if (!evlist) {
pr_debug("perf_evlist__new failed\n");
return TEST_FAIL;
@@ -124,6 +123,6 @@ int test__intel_cqm_count_nmi_context(struct test *test __maybe_unused, int subt
kill(pid, SIGKILL);
wait(NULL);
out:
perf_evlist__delete(evlist);
evlist__delete(evlist);
return err;
}


@@ -1,16 +1,22 @@
// SPDX-License-Identifier: GPL-2.0
#include <errno.h>
#include <inttypes.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/types.h>
#include <sys/prctl.h>
#include <perf/cpumap.h>
#include <perf/evlist.h>
#include "debug.h"
#include "parse-events.h"
#include "evlist.h"
#include "evsel.h"
#include "thread_map.h"
#include "cpumap.h"
#include "record.h"
#include "tsc.h"
#include "tests/tests.h"
@@ -49,10 +55,10 @@ int test__perf_time_to_tsc(struct test *test __maybe_unused, int subtest __maybe
},
.sample_time = true,
};
struct thread_map *threads = NULL;
struct cpu_map *cpus = NULL;
struct perf_evlist *evlist = NULL;
struct perf_evsel *evsel = NULL;
struct perf_thread_map *threads = NULL;
struct perf_cpu_map *cpus = NULL;
struct evlist *evlist = NULL;
struct evsel *evsel = NULL;
int err = -1, ret, i;
const char *comm1, *comm2;
struct perf_tsc_conversion tc;
@@ -65,13 +71,13 @@ int test__perf_time_to_tsc(struct test *test __maybe_unused, int subtest __maybe
threads = thread_map__new(-1, getpid(), UINT_MAX);
CHECK_NOT_NULL__(threads);
cpus = cpu_map__new(NULL);
cpus = perf_cpu_map__new(NULL);
CHECK_NOT_NULL__(cpus);
evlist = perf_evlist__new();
evlist = evlist__new();
CHECK_NOT_NULL__(evlist);
perf_evlist__set_maps(evlist, cpus, threads);
perf_evlist__set_maps(&evlist->core, cpus, threads);
CHECK__(parse_events(evlist, "cycles:u", NULL));
@@ -79,11 +85,11 @@ int test__perf_time_to_tsc(struct test *test __maybe_unused, int subtest __maybe
evsel = perf_evlist__first(evlist);
evsel->attr.comm = 1;
evsel->attr.disabled = 1;
evsel->attr.enable_on_exec = 0;
evsel->core.attr.comm = 1;
evsel->core.attr.disabled = 1;
evsel->core.attr.enable_on_exec = 0;
CHECK__(perf_evlist__open(evlist));
CHECK__(evlist__open(evlist));
CHECK__(perf_evlist__mmap(evlist, UINT_MAX));
@@ -97,7 +103,7 @@ int test__perf_time_to_tsc(struct test *test __maybe_unused, int subtest __maybe
goto out_err;
}
perf_evlist__enable(evlist);
evlist__enable(evlist);
comm1 = "Test COMM 1";
CHECK__(prctl(PR_SET_NAME, (unsigned long)comm1, 0, 0, 0));
@@ -107,7 +113,7 @@ int test__perf_time_to_tsc(struct test *test __maybe_unused, int subtest __maybe
comm2 = "Test COMM 2";
CHECK__(prctl(PR_SET_NAME, (unsigned long)comm2, 0, 0, 0));
perf_evlist__disable(evlist);
evlist__disable(evlist);
for (i = 0; i < evlist->nr_mmaps; i++) {
md = &evlist->mmap[i];
@@ -163,6 +169,6 @@ next_event:
err = 0;
out_err:
perf_evlist__delete(evlist);
evlist__delete(evlist);
return err;
}


@@ -6,11 +6,13 @@
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <linux/string.h>
#include <linux/types.h>
#include "perf.h"
#include "perf-sys.h"
#include "debug.h"
#include "tests/tests.h"
#include "cloexec.h"
#include "event.h"
#include "util.h"
#include "arch-tests.h"


@@ -1,7 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include "perf.h"
#include "../../../../arch/x86/include/asm/insn.h"
#include "archinsn.h"
#include "util/intel-pt-decoder/insn.h"
#include "machine.h"
#include "thread.h"
#include "symbol.h"


@@ -16,12 +16,12 @@
#include "../../util/evlist.h"
static
struct auxtrace_record *auxtrace_record__init_intel(struct perf_evlist *evlist,
struct auxtrace_record *auxtrace_record__init_intel(struct evlist *evlist,
int *err)
{
struct perf_pmu *intel_pt_pmu;
struct perf_pmu *intel_bts_pmu;
struct perf_evsel *evsel;
struct evsel *evsel;
bool found_pt = false;
bool found_bts = false;
@@ -29,9 +29,9 @@ struct auxtrace_record *auxtrace_record__init_intel(struct perf_evlist *evlist,
intel_bts_pmu = perf_pmu__find(INTEL_BTS_PMU_NAME);
evlist__for_each_entry(evlist, evsel) {
if (intel_pt_pmu && evsel->attr.type == intel_pt_pmu->type)
if (intel_pt_pmu && evsel->core.attr.type == intel_pt_pmu->type)
found_pt = true;
if (intel_bts_pmu && evsel->attr.type == intel_bts_pmu->type)
if (intel_bts_pmu && evsel->core.attr.type == intel_bts_pmu->type)
found_bts = true;
}
@@ -50,7 +50,7 @@ struct auxtrace_record *auxtrace_record__init_intel(struct perf_evlist *evlist,
return NULL;
}
struct auxtrace_record *auxtrace_record__init(struct perf_evlist *evlist,
struct auxtrace_record *auxtrace_record__init(struct evlist *evlist,
int *err)
{
char buffer[64];


@@ -6,6 +6,7 @@
#include <string.h>
#include <regex.h>
#include "../../util/debug.h"
#include "../../util/header.h"
static inline void


@@ -12,14 +12,17 @@
#include <linux/zalloc.h>
#include "../../util/cpumap.h"
#include "../../util/event.h"
#include "../../util/evsel.h"
#include "../../util/evlist.h"
#include "../../util/session.h"
#include "../../util/pmu.h"
#include "../../util/debug.h"
#include "../../util/record.h"
#include "../../util/tsc.h"
#include "../../util/auxtrace.h"
#include "../../util/intel-bts.h"
#include "../../util/util.h"
#define KiB(x) ((x) * 1024)
#define MiB(x) ((x) * 1024 * 1024)
@@ -35,7 +38,7 @@ struct intel_bts_snapshot_ref {
struct intel_bts_recording {
struct auxtrace_record itr;
struct perf_pmu *intel_bts_pmu;
struct perf_evlist *evlist;
struct evlist *evlist;
bool snapshot_mode;
size_t snapshot_size;
int snapshot_ref_cnt;
@@ -50,14 +53,14 @@ struct branch {
static size_t
intel_bts_info_priv_size(struct auxtrace_record *itr __maybe_unused,
struct perf_evlist *evlist __maybe_unused)
struct evlist *evlist __maybe_unused)
{
return INTEL_BTS_AUXTRACE_PRIV_SIZE;
}
static int intel_bts_info_fill(struct auxtrace_record *itr,
struct perf_session *session,
struct auxtrace_info_event *auxtrace_info,
struct perf_record_auxtrace_info *auxtrace_info,
size_t priv_size)
{
struct intel_bts_recording *btsr =
@@ -99,27 +102,27 @@ static int intel_bts_info_fill(struct auxtrace_record *itr,
}
static int intel_bts_recording_options(struct auxtrace_record *itr,
struct perf_evlist *evlist,
struct evlist *evlist,
struct record_opts *opts)
{
struct intel_bts_recording *btsr =
container_of(itr, struct intel_bts_recording, itr);
struct perf_pmu *intel_bts_pmu = btsr->intel_bts_pmu;
struct perf_evsel *evsel, *intel_bts_evsel = NULL;
const struct cpu_map *cpus = evlist->cpus;
bool privileged = geteuid() == 0 || perf_event_paranoid() < 0;
struct evsel *evsel, *intel_bts_evsel = NULL;
const struct perf_cpu_map *cpus = evlist->core.cpus;
bool privileged = perf_event_paranoid_check(-1);
btsr->evlist = evlist;
btsr->snapshot_mode = opts->auxtrace_snapshot_mode;
evlist__for_each_entry(evlist, evsel) {
if (evsel->attr.type == intel_bts_pmu->type) {
if (evsel->core.attr.type == intel_bts_pmu->type) {
if (intel_bts_evsel) {
pr_err("There may be only one " INTEL_BTS_PMU_NAME " event\n");
return -EINVAL;
}
evsel->attr.freq = 0;
evsel->attr.sample_period = 1;
evsel->core.attr.freq = 0;
evsel->core.attr.sample_period = 1;
intel_bts_evsel = evsel;
opts->full_auxtrace = true;
}
@@ -133,7 +136,7 @@ static int intel_bts_recording_options(struct auxtrace_record *itr,
if (!opts->full_auxtrace)
return 0;
if (opts->full_auxtrace && !cpu_map__empty(cpus)) {
if (opts->full_auxtrace && !perf_cpu_map__empty(cpus)) {
pr_err(INTEL_BTS_PMU_NAME " does not support per-cpu recording\n");
return -EINVAL;
}
@@ -214,13 +217,13 @@ static int intel_bts_recording_options(struct auxtrace_record *itr,
* In the case of per-cpu mmaps, we need the CPU on the
* AUX event.
*/
if (!cpu_map__empty(cpus))
if (!perf_cpu_map__empty(cpus))
perf_evsel__set_sample_bit(intel_bts_evsel, CPU);
}
/* Add dummy event to keep tracking */
if (opts->full_auxtrace) {
struct perf_evsel *tracking_evsel;
struct evsel *tracking_evsel;
int err;
err = parse_events(evlist, "dummy:u", NULL);
@@ -231,8 +234,8 @@ static int intel_bts_recording_options(struct auxtrace_record *itr,
perf_evlist__set_tracking_event(evlist, tracking_evsel);
tracking_evsel->attr.freq = 0;
tracking_evsel->attr.sample_period = 1;
tracking_evsel->core.attr.freq = 0;
tracking_evsel->core.attr.sample_period = 1;
}
return 0;
@@ -313,11 +316,11 @@ static int intel_bts_snapshot_start(struct auxtrace_record *itr)
{
struct intel_bts_recording *btsr =
container_of(itr, struct intel_bts_recording, itr);
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(btsr->evlist, evsel) {
if (evsel->attr.type == btsr->intel_bts_pmu->type)
return perf_evsel__disable(evsel);
if (evsel->core.attr.type == btsr->intel_bts_pmu->type)
return evsel__disable(evsel);
}
return -EINVAL;
}
@@ -326,11 +329,11 @@ static int intel_bts_snapshot_finish(struct auxtrace_record *itr)
{
struct intel_bts_recording *btsr =
container_of(itr, struct intel_bts_recording, itr);
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(btsr->evlist, evsel) {
if (evsel->attr.type == btsr->intel_bts_pmu->type)
return perf_evsel__enable(evsel);
if (evsel->core.attr.type == btsr->intel_bts_pmu->type)
return evsel__enable(evsel);
}
return -EINVAL;
}
@@ -408,10 +411,10 @@ static int intel_bts_read_finish(struct auxtrace_record *itr, int idx)
{
struct intel_bts_recording *btsr =
container_of(itr, struct intel_bts_recording, itr);
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(btsr->evlist, evsel) {
if (evsel->attr.type == btsr->intel_bts_pmu->type)
if (evsel->core.attr.type == btsr->intel_bts_pmu->type)
return perf_evlist__enable_event_idx(btsr->evlist,
evsel, idx);
}
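
A recurring pattern in the hunks above (and in most of the files below) is the switch from the old struct perf_evsel/struct perf_evlist to the new tool-level struct evsel/struct evlist, which embed the libperf base objects as a "core" member, so attr, cpus and threads are now reached through it. A minimal, self-contained sketch of that layering; the field lists are illustrative, not the real definitions:

/* Sketch only: simplified stand-ins for the libperf base and the tool wrapper. */
#include <linux/perf_event.h>

struct perf_evsel_sketch {              /* plays the role of the libperf base */
	struct perf_event_attr attr;
};

struct evsel_sketch {                   /* plays the role of perf's struct evsel */
	struct perf_evsel_sketch core;  /* hence evsel->core.attr.freq = 0 above */
	const char *name;               /* tool-only state stays in the wrapper */
};

This is why every evsel->attr.* access in the diff becomes evsel->core.attr.*, and evlist->cpus becomes evlist->core.cpus.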

View File

@@ -13,7 +13,6 @@
#include <linux/zalloc.h>
#include <cpuid.h>
#include "../../perf.h"
#include "../../util/session.h"
#include "../../util/event.h"
#include "../../util/evlist.h"
@@ -24,7 +23,10 @@
#include "../../util/pmu.h"
#include "../../util/debug.h"
#include "../../util/auxtrace.h"
#include "../../util/record.h"
#include "../../util/target.h"
#include "../../util/tsc.h"
#include "../../util/util.h"
#include "../../util/intel-pt.h"
#define KiB(x) ((x) * 1024)
@@ -44,7 +46,7 @@ struct intel_pt_recording {
struct auxtrace_record itr;
struct perf_pmu *intel_pt_pmu;
int have_sched_switch;
struct perf_evlist *evlist;
struct evlist *evlist;
bool snapshot_mode;
bool snapshot_init_done;
size_t snapshot_size;
@@ -110,9 +112,9 @@ static u64 intel_pt_masked_bits(u64 mask, u64 bits)
}
static int intel_pt_read_config(struct perf_pmu *intel_pt_pmu, const char *str,
struct perf_evlist *evlist, u64 *res)
struct evlist *evlist, u64 *res)
{
struct perf_evsel *evsel;
struct evsel *evsel;
u64 mask;
*res = 0;
@@ -122,8 +124,8 @@ static int intel_pt_read_config(struct perf_pmu *intel_pt_pmu, const char *str,
return -EINVAL;
evlist__for_each_entry(evlist, evsel) {
if (evsel->attr.type == intel_pt_pmu->type) {
*res = intel_pt_masked_bits(mask, evsel->attr.config);
if (evsel->core.attr.type == intel_pt_pmu->type) {
*res = intel_pt_masked_bits(mask, evsel->core.attr.config);
return 0;
}
}
@@ -132,7 +134,7 @@ static int intel_pt_read_config(struct perf_pmu *intel_pt_pmu, const char *str,
}
static size_t intel_pt_psb_period(struct perf_pmu *intel_pt_pmu,
struct perf_evlist *evlist)
struct evlist *evlist)
{
u64 val;
int err, topa_multiple_entries;
@@ -268,13 +270,13 @@ intel_pt_pmu_default_config(struct perf_pmu *intel_pt_pmu)
return attr;
}
static const char *intel_pt_find_filter(struct perf_evlist *evlist,
static const char *intel_pt_find_filter(struct evlist *evlist,
struct perf_pmu *intel_pt_pmu)
{
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(evlist, evsel) {
if (evsel->attr.type == intel_pt_pmu->type)
if (evsel->core.attr.type == intel_pt_pmu->type)
return evsel->filter;
}
@@ -289,7 +291,7 @@ static size_t intel_pt_filter_bytes(const char *filter)
}
static size_t
intel_pt_info_priv_size(struct auxtrace_record *itr, struct perf_evlist *evlist)
intel_pt_info_priv_size(struct auxtrace_record *itr, struct evlist *evlist)
{
struct intel_pt_recording *ptr =
container_of(itr, struct intel_pt_recording, itr);
@@ -312,7 +314,7 @@ static void intel_pt_tsc_ctc_ratio(u32 *n, u32 *d)
static int intel_pt_info_fill(struct auxtrace_record *itr,
struct perf_session *session,
struct auxtrace_info_event *auxtrace_info,
struct perf_record_auxtrace_info *auxtrace_info,
size_t priv_size)
{
struct intel_pt_recording *ptr =
@@ -326,7 +328,7 @@ static int intel_pt_info_fill(struct auxtrace_record *itr,
unsigned long max_non_turbo_ratio;
size_t filter_str_len;
const char *filter;
u64 *info;
__u64 *info;
int err;
if (priv_size != ptr->priv_size)
@@ -365,7 +367,7 @@ static int intel_pt_info_fill(struct auxtrace_record *itr,
ui__warning("Intel Processor Trace: TSC not available\n");
}
per_cpu_mmaps = !cpu_map__empty(session->evlist->cpus);
per_cpu_mmaps = !perf_cpu_map__empty(session->evlist->core.cpus);
auxtrace_info->type = PERF_AUXTRACE_INTEL_PT;
auxtrace_info->priv[INTEL_PT_PMU_TYPE] = intel_pt_pmu->type;
@@ -398,10 +400,10 @@ static int intel_pt_info_fill(struct auxtrace_record *itr,
return 0;
}
static int intel_pt_track_switches(struct perf_evlist *evlist)
static int intel_pt_track_switches(struct evlist *evlist)
{
const char *sched_switch = "sched:sched_switch";
struct perf_evsel *evsel;
struct evsel *evsel;
int err;
if (!perf_evlist__can_select_event(evlist, sched_switch))
@@ -513,7 +515,7 @@ out_err:
}
static int intel_pt_validate_config(struct perf_pmu *intel_pt_pmu,
struct perf_evsel *evsel)
struct evsel *evsel)
{
int err;
char c;
@@ -526,39 +528,59 @@ static int intel_pt_validate_config(struct perf_pmu *intel_pt_pmu,
* sets pt=0, which avoids senseless kernel errors.
*/
if (perf_pmu__scan_file(intel_pt_pmu, "format/pt", "%c", &c) == 1 &&
!(evsel->attr.config & 1)) {
!(evsel->core.attr.config & 1)) {
pr_warning("pt=0 doesn't make sense, forcing pt=1\n");
evsel->attr.config |= 1;
evsel->core.attr.config |= 1;
}
err = intel_pt_val_config_term(intel_pt_pmu, "caps/cycle_thresholds",
"cyc_thresh", "caps/psb_cyc",
evsel->attr.config);
evsel->core.attr.config);
if (err)
return err;
err = intel_pt_val_config_term(intel_pt_pmu, "caps/mtc_periods",
"mtc_period", "caps/mtc",
evsel->attr.config);
evsel->core.attr.config);
if (err)
return err;
return intel_pt_val_config_term(intel_pt_pmu, "caps/psb_periods",
"psb_period", "caps/psb_cyc",
evsel->attr.config);
evsel->core.attr.config);
}
/*
* Currently, there is not enough information to disambiguate different PEBS
* events, so only allow one.
*/
static bool intel_pt_too_many_aux_output(struct evlist *evlist)
{
struct evsel *evsel;
int aux_output_cnt = 0;
evlist__for_each_entry(evlist, evsel)
aux_output_cnt += !!evsel->core.attr.aux_output;
if (aux_output_cnt > 1) {
pr_err(INTEL_PT_PMU_NAME " supports at most one event with aux-output\n");
return true;
}
return false;
}
static int intel_pt_recording_options(struct auxtrace_record *itr,
struct perf_evlist *evlist,
struct evlist *evlist,
struct record_opts *opts)
{
struct intel_pt_recording *ptr =
container_of(itr, struct intel_pt_recording, itr);
struct perf_pmu *intel_pt_pmu = ptr->intel_pt_pmu;
bool have_timing_info, need_immediate = false;
struct perf_evsel *evsel, *intel_pt_evsel = NULL;
const struct cpu_map *cpus = evlist->cpus;
bool privileged = geteuid() == 0 || perf_event_paranoid() < 0;
struct evsel *evsel, *intel_pt_evsel = NULL;
const struct perf_cpu_map *cpus = evlist->core.cpus;
bool privileged = perf_event_paranoid_check(-1);
u64 tsc_bit;
int err;
@@ -566,13 +588,13 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
ptr->snapshot_mode = opts->auxtrace_snapshot_mode;
evlist__for_each_entry(evlist, evsel) {
if (evsel->attr.type == intel_pt_pmu->type) {
if (evsel->core.attr.type == intel_pt_pmu->type) {
if (intel_pt_evsel) {
pr_err("There may be only one " INTEL_PT_PMU_NAME " event\n");
return -EINVAL;
}
evsel->attr.freq = 0;
evsel->attr.sample_period = 1;
evsel->core.attr.freq = 0;
evsel->core.attr.sample_period = 1;
intel_pt_evsel = evsel;
opts->full_auxtrace = true;
}
@@ -588,6 +610,9 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
return -EINVAL;
}
if (intel_pt_too_many_aux_output(evlist))
return -EINVAL;
if (!opts->full_auxtrace)
return 0;
@@ -670,7 +695,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
intel_pt_parse_terms(&intel_pt_pmu->format, "tsc", &tsc_bit);
if (opts->full_auxtrace && (intel_pt_evsel->attr.config & tsc_bit))
if (opts->full_auxtrace && (intel_pt_evsel->core.attr.config & tsc_bit))
have_timing_info = true;
else
have_timing_info = false;
@@ -679,13 +704,13 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
* Per-cpu recording needs sched_switch events to distinguish different
* threads.
*/
if (have_timing_info && !cpu_map__empty(cpus)) {
if (have_timing_info && !perf_cpu_map__empty(cpus)) {
if (perf_can_record_switch_events()) {
bool cpu_wide = !target__none(&opts->target) &&
!target__has_task(&opts->target);
if (!cpu_wide && perf_can_record_cpu_wide()) {
struct perf_evsel *switch_evsel;
struct evsel *switch_evsel;
err = parse_events(evlist, "dummy:u", NULL);
if (err)
@@ -693,9 +718,9 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
switch_evsel = perf_evlist__last(evlist);
switch_evsel->attr.freq = 0;
switch_evsel->attr.sample_period = 1;
switch_evsel->attr.context_switch = 1;
switch_evsel->core.attr.freq = 0;
switch_evsel->core.attr.sample_period = 1;
switch_evsel->core.attr.context_switch = 1;
switch_evsel->system_wide = true;
switch_evsel->no_aux_samples = true;
@@ -737,13 +762,13 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
* In the case of per-cpu mmaps, we need the CPU on the
* AUX event.
*/
if (!cpu_map__empty(cpus))
if (!perf_cpu_map__empty(cpus))
perf_evsel__set_sample_bit(intel_pt_evsel, CPU);
}
/* Add dummy event to keep tracking */
if (opts->full_auxtrace) {
struct perf_evsel *tracking_evsel;
struct evsel *tracking_evsel;
err = parse_events(evlist, "dummy:u", NULL);
if (err)
@@ -753,15 +778,15 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
perf_evlist__set_tracking_event(evlist, tracking_evsel);
tracking_evsel->attr.freq = 0;
tracking_evsel->attr.sample_period = 1;
tracking_evsel->core.attr.freq = 0;
tracking_evsel->core.attr.sample_period = 1;
tracking_evsel->no_aux_samples = true;
if (need_immediate)
tracking_evsel->immediate = true;
/* In per-cpu case, always need the time of mmap events etc */
if (!cpu_map__empty(cpus)) {
if (!perf_cpu_map__empty(cpus)) {
perf_evsel__set_sample_bit(tracking_evsel, TIME);
/* And the CPU for switch events */
perf_evsel__set_sample_bit(tracking_evsel, CPU);
@@ -773,7 +798,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
* Warn the user when we do not have enough information to decode i.e.
* per-cpu with no sched_switch (except workload-only).
*/
if (!ptr->have_sched_switch && !cpu_map__empty(cpus) &&
if (!ptr->have_sched_switch && !perf_cpu_map__empty(cpus) &&
!target__none(&opts->target))
ui__warning("Intel Processor Trace decoding will not be possible except for kernel tracing!\n");
@@ -784,11 +809,11 @@ static int intel_pt_snapshot_start(struct auxtrace_record *itr)
{
struct intel_pt_recording *ptr =
container_of(itr, struct intel_pt_recording, itr);
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(ptr->evlist, evsel) {
if (evsel->attr.type == ptr->intel_pt_pmu->type)
return perf_evsel__disable(evsel);
if (evsel->core.attr.type == ptr->intel_pt_pmu->type)
return evsel__disable(evsel);
}
return -EINVAL;
}
@@ -797,11 +822,11 @@ static int intel_pt_snapshot_finish(struct auxtrace_record *itr)
{
struct intel_pt_recording *ptr =
container_of(itr, struct intel_pt_recording, itr);
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(ptr->evlist, evsel) {
if (evsel->attr.type == ptr->intel_pt_pmu->type)
return perf_evsel__enable(evsel);
if (evsel->core.attr.type == ptr->intel_pt_pmu->type)
return evsel__enable(evsel);
}
return -EINVAL;
}
@@ -1070,10 +1095,10 @@ static int intel_pt_read_finish(struct auxtrace_record *itr, int idx)
{
struct intel_pt_recording *ptr =
container_of(itr, struct intel_pt_recording, itr);
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(ptr->evlist, evsel) {
if (evsel->attr.type == ptr->intel_pt_pmu->type)
if (evsel->core.attr.type == ptr->intel_pt_pmu->type)
return perf_evlist__enable_event_idx(ptr->evlist, evsel,
idx);
}
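
The BTS and PT recording paths above also replace the open-coded "geteuid() == 0 || perf_event_paranoid() < 0" test with perf_event_paranoid_check(-1). As a rough, hedged sketch (not perf's util code), such a check amounts to reading the sysctl and comparing it to the requested maximum, with root always allowed; when built with libcap the in-tree helper can additionally accept CAP_SYS_ADMIN:

#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>

/* Hedged sketch: allow the operation when running as root or when
 * /proc/sys/kernel/perf_event_paranoid is at or below max. */
static bool paranoid_allows(int max)
{
	int level = 2;  /* assume the restrictive default if the file is unreadable */
	FILE *f = fopen("/proc/sys/kernel/perf_event_paranoid", "r");

	if (f) {
		if (fscanf(f, "%d", &level) != 1)
			level = 2;
		fclose(f);
	}
	return geteuid() == 0 || level <= max;
}

int main(void)
{
	printf("privileged enough: %s\n", paranoid_allows(-1) ? "yes" : "no");
	return 0;
}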

View File

@@ -1,7 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
#include <errno.h>
#include "../../util/kvm-stat.h"
#include "../../util/evsel.h"
#include <string.h>
#include "../../../util/kvm-stat.h"
#include "../../../util/evsel.h"
#include <asm/svm.h>
#include <asm/vmx.h>
#include <asm/kvm.h>
@@ -27,7 +28,7 @@ const char *kvm_exit_trace = "kvm:kvm_exit";
* the time of MMIO write: kvm_mmio(KVM_TRACE_MMIO_WRITE...) -> kvm_entry
* the time of MMIO read: kvm_exit -> kvm_mmio(KVM_TRACE_MMIO_READ...).
*/
static void mmio_event_get_key(struct perf_evsel *evsel, struct perf_sample *sample,
static void mmio_event_get_key(struct evsel *evsel, struct perf_sample *sample,
struct event_key *key)
{
key->key = perf_evsel__intval(evsel, sample, "gpa");
@@ -38,7 +39,7 @@ static void mmio_event_get_key(struct perf_evsel *evsel, struct perf_sample *sam
#define KVM_TRACE_MMIO_READ 1
#define KVM_TRACE_MMIO_WRITE 2
static bool mmio_event_begin(struct perf_evsel *evsel,
static bool mmio_event_begin(struct evsel *evsel,
struct perf_sample *sample, struct event_key *key)
{
/* MMIO read begin event in kernel. */
@@ -55,7 +56,7 @@ static bool mmio_event_begin(struct perf_evsel *evsel,
return false;
}
static bool mmio_event_end(struct perf_evsel *evsel, struct perf_sample *sample,
static bool mmio_event_end(struct evsel *evsel, struct perf_sample *sample,
struct event_key *key)
{
/* MMIO write end event in kernel. */
@@ -89,7 +90,7 @@ static struct kvm_events_ops mmio_events = {
};
/* The time of emulation pio access is from kvm_pio to kvm_entry. */
static void ioport_event_get_key(struct perf_evsel *evsel,
static void ioport_event_get_key(struct evsel *evsel,
struct perf_sample *sample,
struct event_key *key)
{
@@ -97,7 +98,7 @@ static void ioport_event_get_key(struct perf_evsel *evsel,
key->info = perf_evsel__intval(evsel, sample, "rw");
}
static bool ioport_event_begin(struct perf_evsel *evsel,
static bool ioport_event_begin(struct evsel *evsel,
struct perf_sample *sample,
struct event_key *key)
{
@@ -109,7 +110,7 @@ static bool ioport_event_begin(struct perf_evsel *evsel,
return false;
}
static bool ioport_event_end(struct perf_evsel *evsel,
static bool ioport_event_end(struct evsel *evsel,
struct perf_sample *sample __maybe_unused,
struct event_key *key __maybe_unused)
{

View File

@@ -2,11 +2,13 @@
#include <errno.h>
#include <string.h>
#include <regex.h>
#include <linux/kernel.h>
#include <linux/zalloc.h>
#include "../../perf.h"
#include "../../perf-sys.h"
#include "../../util/perf_regs.h"
#include "../../util/debug.h"
#include "../../util/event.h"
const struct sample_reg sample_reg_masks[] = {
SMPL_REG(AX, PERF_REG_X86_AX),

View File

@@ -5,10 +5,10 @@
#include <linux/stddef.h>
#include <linux/perf_event.h>
#include "../../perf.h"
#include <linux/types.h>
#include "../../util/debug.h"
#include "../../util/tsc.h"
#include <asm/barrier.h>
#include "../../../util/debug.h"
#include "../../../util/tsc.h"
int perf_read_tsc_conversion(const struct perf_event_mmap_page *pc,
struct perf_tsc_conversion *tc)
@@ -57,7 +57,7 @@ int perf_event__synth_time_conv(const struct perf_event_mmap_page *pc,
.time_conv = {
.header = {
.type = PERF_RECORD_TIME_CONV,
.size = sizeof(struct time_conv_event),
.size = sizeof(struct perf_record_time_conv),
},
},
};

View File

@@ -14,12 +14,14 @@
#include <inttypes.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <linux/compiler.h>
#include <linux/kernel.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <perf/cpumap.h>
#include "../util/stat.h"
#include <subcmd/parse-options.h>
@@ -219,7 +221,7 @@ static void init_fdmaps(struct worker *w, int pct)
}
}
static int do_threads(struct worker *worker, struct cpu_map *cpu)
static int do_threads(struct worker *worker, struct perf_cpu_map *cpu)
{
pthread_attr_t thread_attr, *attrp = NULL;
cpu_set_t cpuset;
@@ -301,7 +303,7 @@ int bench_epoll_ctl(int argc, const char **argv)
int j, ret = 0;
struct sigaction act;
struct worker *worker = NULL;
struct cpu_map *cpu;
struct perf_cpu_map *cpu;
struct rlimit rl, prevrl;
unsigned int i;
@@ -315,7 +317,7 @@ int bench_epoll_ctl(int argc, const char **argv)
act.sa_sigaction = toggle_done;
sigaction(SIGINT, &act, NULL);
cpu = cpu_map__new(NULL);
cpu = perf_cpu_map__new(NULL);
if (!cpu)
goto errmem;

View File

@@ -63,6 +63,7 @@
/* For the CLR_() macros */
#include <string.h>
#include <pthread.h>
#include <unistd.h>
#include <errno.h>
#include <inttypes.h>
@@ -75,6 +76,7 @@
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <sys/types.h>
#include <perf/cpumap.h>
#include "../util/stat.h"
#include <subcmd/parse-options.h>
@@ -288,7 +290,7 @@ static void print_summary(void)
(int) runtime.tv_sec);
}
static int do_threads(struct worker *worker, struct cpu_map *cpu)
static int do_threads(struct worker *worker, struct perf_cpu_map *cpu)
{
pthread_attr_t thread_attr, *attrp = NULL;
cpu_set_t cpuset;
@@ -415,7 +417,7 @@ int bench_epoll_wait(int argc, const char **argv)
struct sigaction act;
unsigned int i;
struct worker *worker = NULL;
struct cpu_map *cpu;
struct perf_cpu_map *cpu;
pthread_t wthread;
struct rlimit rl, prevrl;
@@ -429,7 +431,7 @@ int bench_epoll_wait(int argc, const char **argv)
act.sa_sigaction = toggle_done;
sigaction(SIGINT, &act, NULL);
cpu = cpu_map__new(NULL);
cpu = perf_cpu_map__new(NULL);
if (!cpu)
goto errmem;

View File

@@ -20,6 +20,7 @@
#include <linux/kernel.h>
#include <linux/zalloc.h>
#include <sys/time.h>
#include <perf/cpumap.h>
#include "../util/stat.h"
#include <subcmd/parse-options.h>
@@ -124,7 +125,7 @@ int bench_futex_hash(int argc, const char **argv)
unsigned int i;
pthread_attr_t thread_attr;
struct worker *worker = NULL;
struct cpu_map *cpu;
struct perf_cpu_map *cpu;
argc = parse_options(argc, argv, options, bench_futex_hash_usage, 0);
if (argc) {
@@ -132,7 +133,7 @@ int bench_futex_hash(int argc, const char **argv)
exit(EXIT_FAILURE);
}
cpu = cpu_map__new(NULL);
cpu = perf_cpu_map__new(NULL);
if (!cpu)
goto errmem;

View File

@@ -14,6 +14,7 @@
#include <linux/kernel.h>
#include <linux/zalloc.h>
#include <errno.h>
#include <perf/cpumap.h>
#include "bench.h"
#include "futex.h"
#include "cpumap.h"
@@ -116,7 +117,7 @@ static void *workerfn(void *arg)
}
static void create_threads(struct worker *w, pthread_attr_t thread_attr,
struct cpu_map *cpu)
struct perf_cpu_map *cpu)
{
cpu_set_t cpuset;
unsigned int i;
@@ -150,13 +151,13 @@ int bench_futex_lock_pi(int argc, const char **argv)
unsigned int i;
struct sigaction act;
pthread_attr_t thread_attr;
struct cpu_map *cpu;
struct perf_cpu_map *cpu;
argc = parse_options(argc, argv, options, bench_futex_lock_pi_usage, 0);
if (argc)
goto err;
cpu = cpu_map__new(NULL);
cpu = perf_cpu_map__new(NULL);
if (!cpu)
err(EXIT_FAILURE, "calloc");

View File

@@ -20,6 +20,7 @@
#include <linux/kernel.h>
#include <linux/time64.h>
#include <errno.h>
#include <perf/cpumap.h>
#include "bench.h"
#include "futex.h"
#include "cpumap.h"
@@ -84,7 +85,7 @@ static void *workerfn(void *arg __maybe_unused)
}
static void block_threads(pthread_t *w,
pthread_attr_t thread_attr, struct cpu_map *cpu)
pthread_attr_t thread_attr, struct perf_cpu_map *cpu)
{
cpu_set_t cpuset;
unsigned int i;
@@ -117,13 +118,13 @@ int bench_futex_requeue(int argc, const char **argv)
unsigned int i, j;
struct sigaction act;
pthread_attr_t thread_attr;
struct cpu_map *cpu;
struct perf_cpu_map *cpu;
argc = parse_options(argc, argv, options, bench_futex_requeue_usage, 0);
if (argc)
goto err;
cpu = cpu_map__new(NULL);
cpu = perf_cpu_map__new(NULL);
if (!cpu)
err(EXIT_FAILURE, "cpu_map__new");

View File

@@ -138,7 +138,7 @@ static void *blocked_workerfn(void *arg __maybe_unused)
}
static void block_threads(pthread_t *w, pthread_attr_t thread_attr,
struct cpu_map *cpu)
struct perf_cpu_map *cpu)
{
cpu_set_t cpuset;
unsigned int i;
@@ -224,7 +224,7 @@ int bench_futex_wake_parallel(int argc, const char **argv)
struct sigaction act;
pthread_attr_t thread_attr;
struct thread_data *waking_worker;
struct cpu_map *cpu;
struct perf_cpu_map *cpu;
argc = parse_options(argc, argv, options,
bench_futex_wake_parallel_usage, 0);
@@ -237,7 +237,7 @@ int bench_futex_wake_parallel(int argc, const char **argv)
act.sa_sigaction = toggle_done;
sigaction(SIGINT, &act, NULL);
cpu = cpu_map__new(NULL);
cpu = perf_cpu_map__new(NULL);
if (!cpu)
err(EXIT_FAILURE, "calloc");

View File

@@ -20,6 +20,7 @@
#include <linux/kernel.h>
#include <linux/time64.h>
#include <errno.h>
#include <perf/cpumap.h>
#include "bench.h"
#include "futex.h"
#include "cpumap.h"
@@ -90,7 +91,7 @@ static void print_summary(void)
}
static void block_threads(pthread_t *w,
pthread_attr_t thread_attr, struct cpu_map *cpu)
pthread_attr_t thread_attr, struct perf_cpu_map *cpu)
{
cpu_set_t cpuset;
unsigned int i;
@@ -123,7 +124,7 @@ int bench_futex_wake(int argc, const char **argv)
unsigned int i, j;
struct sigaction act;
pthread_attr_t thread_attr;
struct cpu_map *cpu;
struct perf_cpu_map *cpu;
argc = parse_options(argc, argv, options, bench_futex_wake_usage, 0);
if (argc) {
@@ -131,7 +132,7 @@ int bench_futex_wake(int argc, const char **argv)
exit(EXIT_FAILURE);
}
cpu = cpu_map__new(NULL);
cpu = perf_cpu_map__new(NULL);
if (!cpu)
err(EXIT_FAILURE, "calloc");
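
The bench/epoll and bench/futex conversions above all follow the same mechanical pattern: struct cpu_map becomes the libperf struct perf_cpu_map, allocated with perf_cpu_map__new() and released with perf_cpu_map__put(). A minimal usage sketch, assuming the in-tree libperf header <perf/cpumap.h> and library are available to link against:

#include <stdio.h>
#include <perf/cpumap.h>

int main(void)
{
	/* NULL means "all online CPUs"; the map is reference counted */
	struct perf_cpu_map *cpus = perf_cpu_map__new(NULL);

	if (!cpus) {
		fprintf(stderr, "failed to build cpu map\n");
		return 1;
	}
	/* ... pin worker threads, size per-CPU arrays, etc. using the map ... */
	perf_cpu_map__put(cpus);        /* drops the reference taken by __new() */
	return 0;
}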

View File

@@ -8,7 +8,7 @@
*/
#include "debug.h"
#include "../perf.h"
#include "../perf-sys.h"
#include <subcmd/parse-options.h>
#include "../util/header.h"
#include "../util/cloexec.h"
@@ -20,6 +20,7 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <errno.h>
#include <linux/time64.h>

View File

@@ -9,7 +9,6 @@
/* For the CLR_() macros */
#include <pthread.h>
#include "../perf.h"
#include "../builtin.h"
#include <subcmd/parse-options.h>
#include "../util/cloexec.h"

View File

@@ -10,7 +10,6 @@
*
*/
#include "../perf.h"
#include "../util/util.h"
#include <subcmd/parse-options.h>
#include "../builtin.h"

View File

@@ -9,7 +9,6 @@
* http://people.redhat.com/mingo/cfs-scheduler/tools/pipe-test-1m.c
* Ported to perf by Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
*/
#include "../perf.h"
#include "../util/util.h"
#include <subcmd/parse-options.h>
#include "../builtin.h"

View File

@@ -24,15 +24,17 @@
#include "util/event.h"
#include <subcmd/parse-options.h>
#include "util/parse-events.h"
#include "util/thread.h"
#include "util/sort.h"
#include "util/hist.h"
#include "util/dso.h"
#include "util/map.h"
#include "util/session.h"
#include "util/tool.h"
#include "util/data.h"
#include "arch/common.h"
#include "util/block-range.h"
#include "util/map_symbol.h"
#include "util/branch.h"
#include <dlfcn.h>
#include <errno.h>
@@ -156,7 +158,7 @@ static int hist_iter__branch_callback(struct hist_entry_iter *iter,
struct hist_entry *he = iter->he;
struct branch_info *bi;
struct perf_sample *sample = iter->sample;
struct perf_evsel *evsel = iter->evsel;
struct evsel *evsel = iter->evsel;
int err;
bi = he->branch_info;
@@ -171,7 +173,7 @@ out:
return err;
}
static int process_branch_callback(struct perf_evsel *evsel,
static int process_branch_callback(struct evsel *evsel,
struct perf_sample *sample,
struct addr_location *al __maybe_unused,
struct perf_annotate *ann,
@@ -208,7 +210,7 @@ static bool has_annotation(struct perf_annotate *ann)
return ui__has_annotation() || ann->use_stdio2;
}
static int perf_evsel__add_sample(struct perf_evsel *evsel,
static int perf_evsel__add_sample(struct evsel *evsel,
struct perf_sample *sample,
struct addr_location *al,
struct perf_annotate *ann,
@@ -257,7 +259,7 @@ static int perf_evsel__add_sample(struct perf_evsel *evsel,
static int process_sample_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct perf_annotate *ann = container_of(tool, struct perf_annotate, tool);
@@ -293,7 +295,7 @@ static int process_feature_event(struct perf_session *session,
}
static int hist_entry__tty_annotate(struct hist_entry *he,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_annotate *ann)
{
if (!ann->use_stdio2)
@@ -303,7 +305,7 @@ static int hist_entry__tty_annotate(struct hist_entry *he,
}
static void hists__find_annotations(struct hists *hists,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_annotate *ann)
{
struct rb_node *nd = rb_first_cached(&hists->entries), *next;
@@ -333,7 +335,7 @@ find_next:
if (use_browser == 2) {
int ret;
int (*annotate)(struct hist_entry *he,
struct perf_evsel *evsel,
struct evsel *evsel,
struct hist_browser_timer *hbt);
annotate = dlsym(perf_gtk_handle,
@@ -387,7 +389,7 @@ static int __cmd_annotate(struct perf_annotate *ann)
{
int ret;
struct perf_session *session = ann->session;
struct perf_evsel *pos;
struct evsel *pos;
u64 total_nr_samples;
if (ann->cpu_list) {

View File

@@ -16,7 +16,6 @@
* futex ... Futex performance
* epoll ... Event poll performance
*/
#include "perf.h"
#include <subcmd/parse-options.h>
#include "builtin.h"
#include "bench/bench.h"

View File

@@ -14,18 +14,20 @@
#include <errno.h>
#include <unistd.h>
#include "builtin.h"
#include "perf.h"
#include "namespaces.h"
#include "util/cache.h"
#include "util/debug.h"
#include "util/header.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include "util/strlist.h"
#include "util/build-id.h"
#include "util/session.h"
#include "util/dso.h"
#include "util/symbol.h"
#include "util/time-utils.h"
#include "util/util.h"
#include "util/probe-file.h"
#include <linux/string.h>
static int build_id_cache__kcore_buildid(const char *proc_dir, char *sbuildid)
{

View File

@@ -1,4 +1,3 @@
// SPDX-License-Identifier: GPL-2.0
/*
* builtin-buildid-list.c
*
@@ -11,8 +10,9 @@
#include "builtin.h"
#include "perf.h"
#include "util/build-id.h"
#include "util/cache.h"
#include "util/debug.h"
#include "util/dso.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include "util/session.h"
#include "util/symbol.h"

View File

@@ -20,12 +20,15 @@
#include <sys/param.h>
#include "debug.h"
#include "builtin.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include "map_symbol.h"
#include "mem-events.h"
#include "session.h"
#include "hist.h"
#include "sort.h"
#include "tool.h"
#include "cacheline.h"
#include "data.h"
#include "event.h"
#include "evlist.h"
@@ -34,6 +37,9 @@
#include "thread.h"
#include "mem2node.h"
#include "symbol.h"
#include "ui/ui.h"
#include "ui/progress.h"
#include "../perf.h"
struct c2c_hists {
struct hists hists;
@@ -248,7 +254,7 @@ static void compute_stats(struct c2c_hist_entry *c2c_he,
static int process_sample_event(struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct c2c_hists *c2c_hists = &c2c.hists;
@@ -1106,7 +1112,7 @@ node_entry(struct perf_hpp_fmt *fmt __maybe_unused, struct perf_hpp *hpp,
break;
case 1:
{
int num = bitmap_weight(c2c_he->cpuset, c2c.cpus_cnt);
int num = bitmap_weight(set, c2c.cpus_cnt);
struct c2c_stats *stats = &c2c_he->node_stats[node];
ret = scnprintf(hpp->buf, hpp->size, "%2d{%2d ", node, num);
@@ -2027,7 +2033,7 @@ static int setup_nodes(struct perf_session *session)
c2c.node_info = 2;
c2c.nodes_cnt = session->header.env.nr_numa_nodes;
c2c.cpus_cnt = session->header.env.nr_cpus_online;
c2c.cpus_cnt = session->header.env.nr_cpus_avail;
n = session->header.env.numa_nodes;
if (!n)
@@ -2049,7 +2055,7 @@ static int setup_nodes(struct perf_session *session)
c2c.cpu2node = cpu2node;
for (node = 0; node < c2c.nodes_cnt; node++) {
struct cpu_map *map = n[node].map;
struct perf_cpu_map *map = n[node].map;
unsigned long *set;
set = bitmap_alloc(c2c.cpus_cnt);
@@ -2059,7 +2065,7 @@ static int setup_nodes(struct perf_session *session)
nodes[node] = set;
/* empty node, skip */
if (cpu_map__empty(map))
if (perf_cpu_map__empty(map))
continue;
for (cpu = 0; cpu < map->nr; cpu++) {
@@ -2236,8 +2242,8 @@ static void print_pareto(FILE *out)
static void print_c2c_info(FILE *out, struct perf_session *session)
{
struct perf_evlist *evlist = session->evlist;
struct perf_evsel *evsel;
struct evlist *evlist = session->evlist;
struct evsel *evsel;
bool first = true;
fprintf(out, "=================================================\n");
@@ -2567,7 +2573,7 @@ parse_callchain_opt(const struct option *opt, const char *arg, int unset)
return parse_callchain_report_opt(arg);
}
static int setup_callchain(struct perf_evlist *evlist)
static int setup_callchain(struct evlist *evlist)
{
u64 sample_type = perf_evlist__combined_sample_type(evlist);
enum perf_call_graph_mode mode = CALLCHAIN_NONE;

View File

@@ -7,14 +7,13 @@
*/
#include "builtin.h"
#include "perf.h"
#include "util/cache.h"
#include <subcmd/parse-options.h>
#include "util/util.h"
#include "util/debug.h"
#include "util/config.h"
#include <linux/string.h>
#include <stdio.h>
#include <stdlib.h>
static bool use_system_config, use_user_config;

View File

@@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/compiler.h>
#include <stdio.h>
#include <string.h>
#include "builtin.h"
#include "perf.h"
#include "debug.h"

View File

@@ -6,6 +6,7 @@
* DSOs and symbol information, sort them and produce a diff.
*/
#include "builtin.h"
#include "perf.h"
#include "util/debug.h"
#include "util/event.h"
@@ -15,6 +16,7 @@
#include "util/session.h"
#include "util/tool.h"
#include "util/sort.h"
#include "util/srcline.h"
#include "util/symbol.h"
#include "util/data.h"
#include "util/config.h"
@@ -22,6 +24,8 @@
#include "util/annotate.h"
#include "util/map.h"
#include <linux/zalloc.h>
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include <errno.h>
#include <inttypes.h>
@@ -376,7 +380,7 @@ struct hist_entry_ops block_hist_ops = {
static int diff__process_sample_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct perf_diff *pdiff = container_of(tool, struct perf_diff, tool);
@@ -448,10 +452,10 @@ static struct perf_diff pdiff = {
},
};
static struct perf_evsel *evsel_match(struct perf_evsel *evsel,
struct perf_evlist *evlist)
static struct evsel *evsel_match(struct evsel *evsel,
struct evlist *evlist)
{
struct perf_evsel *e;
struct evsel *e;
evlist__for_each_entry(evlist, e) {
if (perf_evsel__match2(evsel, e))
@@ -461,9 +465,9 @@ static struct perf_evsel *evsel_match(struct perf_evsel *evsel,
return NULL;
}
static void perf_evlist__collapse_resort(struct perf_evlist *evlist)
static void perf_evlist__collapse_resort(struct evlist *evlist)
{
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(evlist, evsel) {
struct hists *hists = evsel__hists(evsel);
@@ -1009,8 +1013,8 @@ static void data__fprintf(void)
static void data_process(void)
{
struct perf_evlist *evlist_base = data__files[0].session->evlist;
struct perf_evsel *evsel_base;
struct evlist *evlist_base = data__files[0].session->evlist;
struct evsel *evsel_base;
bool first = true;
evlist__for_each_entry(evlist_base, evsel_base) {
@@ -1019,8 +1023,8 @@ static void data_process(void)
int i;
data__for_each_file_new(i, d) {
struct perf_evlist *evlist = d->session->evlist;
struct perf_evsel *evsel;
struct evlist *evlist = d->session->evlist;
struct evsel *evsel;
struct hists *hists;
evsel = evsel_match(evsel_base, evlist);

View File

@@ -21,7 +21,7 @@
static int __cmd_evlist(const char *file_name, struct perf_attr_details *details)
{
struct perf_session *session;
struct perf_evsel *pos;
struct evsel *pos;
struct perf_data data = {
.path = file_name,
.mode = PERF_DATA_MODE_READ,
@@ -36,7 +36,7 @@ static int __cmd_evlist(const char *file_name, struct perf_attr_details *details
evlist__for_each_entry(session->evlist, pos) {
perf_evsel__fprintf(pos, details, stdout);
if (pos->attr.type == PERF_TYPE_TRACEPOINT)
if (pos->core.attr.type == PERF_TYPE_TRACEPOINT)
has_tracepoint = true;
}

View File

@@ -6,28 +6,31 @@
*/
#include "builtin.h"
#include "perf.h"
#include <errno.h>
#include <unistd.h>
#include <signal.h>
#include <stdlib.h>
#include <fcntl.h>
#include <poll.h>
#include <linux/capability.h>
#include <linux/string.h>
#include "debug.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include <api/fs/tracing_path.h>
#include "evlist.h"
#include "target.h"
#include "cpumap.h"
#include "thread_map.h"
#include "util/cap.h"
#include "util/config.h"
#define DEFAULT_TRACER "function_graph"
struct perf_ftrace {
struct perf_evlist *evlist;
struct evlist *evlist;
struct target target;
const char *tracer;
struct list_head filters;
@@ -156,16 +159,16 @@ static int set_tracing_pid(struct perf_ftrace *ftrace)
if (target__has_cpu(&ftrace->target))
return 0;
for (i = 0; i < thread_map__nr(ftrace->evlist->threads); i++) {
for (i = 0; i < perf_thread_map__nr(ftrace->evlist->core.threads); i++) {
scnprintf(buf, sizeof(buf), "%d",
ftrace->evlist->threads->map[i]);
ftrace->evlist->core.threads->map[i]);
if (append_tracing_file("set_ftrace_pid", buf) < 0)
return -1;
}
return 0;
}
static int set_tracing_cpumask(struct cpu_map *cpumap)
static int set_tracing_cpumask(struct perf_cpu_map *cpumap)
{
char *cpumask;
size_t mask_size;
@@ -192,7 +195,7 @@ static int set_tracing_cpumask(struct cpu_map *cpumap)
static int set_tracing_cpu(struct perf_ftrace *ftrace)
{
struct cpu_map *cpumap = ftrace->evlist->cpus;
struct perf_cpu_map *cpumap = ftrace->evlist->core.cpus;
if (!target__has_cpu(&ftrace->target))
return 0;
@@ -202,11 +205,11 @@ static int set_tracing_cpu(struct perf_ftrace *ftrace)
static int reset_tracing_cpu(void)
{
struct cpu_map *cpumap = cpu_map__new(NULL);
struct perf_cpu_map *cpumap = perf_cpu_map__new(NULL);
int ret;
ret = set_tracing_cpumask(cpumap);
cpu_map__put(cpumap);
perf_cpu_map__put(cpumap);
return ret;
}
@@ -281,8 +284,14 @@ static int __cmd_ftrace(struct perf_ftrace *ftrace, int argc, const char **argv)
.events = POLLIN,
};
if (geteuid() != 0) {
pr_err("ftrace only works for root!\n");
if (!perf_cap__capable(CAP_SYS_ADMIN)) {
pr_err("ftrace only works for %s!\n",
#ifdef HAVE_LIBCAP_SUPPORT
"users with the SYS_ADMIN capability"
#else
"root"
#endif
);
return -1;
}
@@ -495,7 +504,7 @@ int cmd_ftrace(int argc, const char **argv)
goto out_delete_filters;
}
ftrace.evlist = perf_evlist__new();
ftrace.evlist = evlist__new();
if (ftrace.evlist == NULL) {
ret = -ENOMEM;
goto out_delete_filters;
@@ -508,7 +517,7 @@ int cmd_ftrace(int argc, const char **argv)
ret = __cmd_ftrace(&ftrace, argc, argv);
out_delete_evlist:
perf_evlist__delete(ftrace.evlist);
evlist__delete(ftrace.evlist);
out_delete_filters:
delete_filter_func(&ftrace.filters);
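
The privilege gate in __cmd_ftrace() above is relaxed from a hard root check to perf_cap__capable(CAP_SYS_ADMIN), with the error message falling back to "root" when perf is built without libcap. For illustration only, a hedged sketch of an equivalent effective-capability query using plain libcap; this is an assumption about the mechanism, not perf's util/cap.c:

#include <stdbool.h>
#include <sys/capability.h>     /* link with -lcap */

/* Hedged sketch: is CAP_SYS_ADMIN in this process's effective set? */
static bool have_sys_admin(void)
{
	cap_flag_value_t val = CAP_CLEAR;
	cap_t caps = cap_get_proc();
	bool ret = false;

	if (!caps)
		return false;
	if (cap_get_flag(caps, CAP_SYS_ADMIN, CAP_EFFECTIVE, &val) == 0)
		ret = (val == CAP_SET);
	cap_free(caps);
	return ret;
}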

View File

@@ -4,8 +4,9 @@
*
* Builtin help command
*/
#include "perf.h"
#include "util/cache.h"
#include "util/config.h"
#include "util/strbuf.h"
#include "builtin.h"
#include <subcmd/exec-cmd.h>
#include "common-cmds.h"
@@ -14,10 +15,12 @@
#include <subcmd/help.h>
#include "util/debug.h"
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/zalloc.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

View File

@@ -8,8 +8,8 @@
*/
#include "builtin.h"
#include "perf.h"
#include "util/color.h"
#include "util/dso.h"
#include "util/evlist.h"
#include "util/evsel.h"
#include "util/map.h"
@@ -96,7 +96,7 @@ static int perf_event__repipe_op2_synth(struct perf_session *session,
static int perf_event__repipe_attr(struct perf_tool *tool,
union perf_event *event,
struct perf_evlist **pevlist)
struct evlist **pevlist)
{
struct perf_inject *inject = container_of(tool, struct perf_inject,
tool);
@@ -215,13 +215,13 @@ static int perf_event__drop_aux(struct perf_tool *tool,
typedef int (*inject_handler)(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine);
static int perf_event__repipe_sample(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
if (evsel && evsel->handler) {
@@ -424,7 +424,7 @@ static int dso__inject_build_id(struct dso *dso, struct perf_tool *tool,
static int perf_event__inject_buildid(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel __maybe_unused,
struct evsel *evsel __maybe_unused,
struct machine *machine)
{
struct addr_location al;
@@ -465,7 +465,7 @@ repipe:
static int perf_inject__sched_process_exit(struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct perf_sample *sample,
struct perf_evsel *evsel __maybe_unused,
struct evsel *evsel __maybe_unused,
struct machine *machine __maybe_unused)
{
struct perf_inject *inject = container_of(tool, struct perf_inject, tool);
@@ -485,7 +485,7 @@ static int perf_inject__sched_process_exit(struct perf_tool *tool,
static int perf_inject__sched_switch(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct perf_inject *inject = container_of(tool, struct perf_inject, tool);
@@ -509,7 +509,7 @@ static int perf_inject__sched_switch(struct perf_tool *tool,
static int perf_inject__sched_stat(struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct event_entry *ent;
@@ -530,8 +530,8 @@ found:
sample_sw.period = sample->period;
sample_sw.time = sample->time;
perf_event__synthesize_sample(event_sw, evsel->attr.sample_type,
evsel->attr.read_format, &sample_sw);
perf_event__synthesize_sample(event_sw, evsel->core.attr.sample_type,
evsel->core.attr.read_format, &sample_sw);
build_id__mark_dso_hit(tool, event_sw, &sample_sw, evsel, machine);
return perf_event__repipe(tool, event_sw, &sample_sw, machine);
}
@@ -541,10 +541,10 @@ static void sig_handler(int sig __maybe_unused)
session_done = 1;
}
static int perf_evsel__check_stype(struct perf_evsel *evsel,
static int perf_evsel__check_stype(struct evsel *evsel,
u64 sample_type, const char *sample_msg)
{
struct perf_event_attr *attr = &evsel->attr;
struct perf_event_attr *attr = &evsel->core.attr;
const char *name = perf_evsel__name(evsel);
if (!(attr->sample_type & sample_type)) {
@@ -559,7 +559,7 @@ static int perf_evsel__check_stype(struct perf_evsel *evsel,
static int drop_sample(struct perf_tool *tool __maybe_unused,
union perf_event *event __maybe_unused,
struct perf_sample *sample __maybe_unused,
struct perf_evsel *evsel __maybe_unused,
struct evsel *evsel __maybe_unused,
struct machine *machine __maybe_unused)
{
return 0;
@@ -567,8 +567,8 @@ static int drop_sample(struct perf_tool *tool __maybe_unused,
static void strip_init(struct perf_inject *inject)
{
struct perf_evlist *evlist = inject->session->evlist;
struct perf_evsel *evsel;
struct evlist *evlist = inject->session->evlist;
struct evsel *evsel;
inject->tool.context_switch = perf_event__drop;
@@ -576,10 +576,10 @@ static void strip_init(struct perf_inject *inject)
evsel->handler = drop_sample;
}
static bool has_tracking(struct perf_evsel *evsel)
static bool has_tracking(struct evsel *evsel)
{
return evsel->attr.mmap || evsel->attr.mmap2 || evsel->attr.comm ||
evsel->attr.task;
return evsel->core.attr.mmap || evsel->core.attr.mmap2 || evsel->core.attr.comm ||
evsel->core.attr.task;
}
#define COMPAT_MASK (PERF_SAMPLE_ID | PERF_SAMPLE_TID | PERF_SAMPLE_TIME | \
@@ -590,10 +590,10 @@ static bool has_tracking(struct perf_evsel *evsel)
* their selected event to exist, except if there is only 1 selected event left
* and it has a compatible sample type.
*/
static bool ok_to_remove(struct perf_evlist *evlist,
struct perf_evsel *evsel_to_remove)
static bool ok_to_remove(struct evlist *evlist,
struct evsel *evsel_to_remove)
{
struct perf_evsel *evsel;
struct evsel *evsel;
int cnt = 0;
bool ok = false;
@@ -603,8 +603,8 @@ static bool ok_to_remove(struct perf_evlist *evlist,
evlist__for_each_entry(evlist, evsel) {
if (evsel->handler != drop_sample) {
cnt += 1;
if ((evsel->attr.sample_type & COMPAT_MASK) ==
(evsel_to_remove->attr.sample_type & COMPAT_MASK))
if ((evsel->core.attr.sample_type & COMPAT_MASK) ==
(evsel_to_remove->core.attr.sample_type & COMPAT_MASK))
ok = true;
}
}
@@ -614,16 +614,16 @@ static bool ok_to_remove(struct perf_evlist *evlist,
static void strip_fini(struct perf_inject *inject)
{
struct perf_evlist *evlist = inject->session->evlist;
struct perf_evsel *evsel, *tmp;
struct evlist *evlist = inject->session->evlist;
struct evsel *evsel, *tmp;
/* Remove non-synthesized evsels if possible */
evlist__for_each_entry_safe(evlist, tmp, evsel) {
if (evsel->handler == drop_sample &&
ok_to_remove(evlist, evsel)) {
pr_debug("Deleting %s\n", perf_evsel__name(evsel));
perf_evlist__remove(evlist, evsel);
perf_evsel__delete(evsel);
evlist__remove(evlist, evsel);
evsel__delete(evsel);
}
}
}
@@ -651,7 +651,7 @@ static int __cmd_inject(struct perf_inject *inject)
if (inject->build_ids) {
inject->tool.sample = perf_event__inject_buildid;
} else if (inject->sched_stat) {
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(session->evlist, evsel) {
const char *name = perf_evsel__name(evsel);
@@ -712,7 +712,7 @@ static int __cmd_inject(struct perf_inject *inject)
* remove the evsel.
*/
if (inject->itrace_synth_opts.set) {
struct perf_evsel *evsel;
struct evsel *evsel;
perf_header__clear_feat(&session->header,
HEADER_AUXTRACE);
@@ -724,8 +724,8 @@ static int __cmd_inject(struct perf_inject *inject)
if (evsel) {
pr_debug("Deleting %s\n",
perf_evsel__name(evsel));
perf_evlist__remove(session->evlist, evsel);
perf_evsel__delete(evsel);
evlist__remove(session->evlist, evsel);
evsel__delete(evsel);
}
if (inject->strip)
strip_fini(inject);

View File

@@ -11,6 +11,7 @@
#include <linux/compiler.h>
#include <subcmd/parse-options.h>
#include "debug.h"
#include "dso.h"
#include "machine.h"
#include "map.h"
#include "symbol.h"

View File

@@ -2,6 +2,7 @@
#include "builtin.h"
#include "perf.h"
#include "util/dso.h"
#include "util/evlist.h"
#include "util/evsel.h"
#include "util/config.h"
@@ -14,6 +15,7 @@
#include "util/callchain.h"
#include "util/time-utils.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include "util/trace-event.h"
#include "util/data.h"
@@ -166,7 +168,7 @@ static int insert_caller_stat(unsigned long call_site,
return 0;
}
static int perf_evsel__process_alloc_event(struct perf_evsel *evsel,
static int perf_evsel__process_alloc_event(struct evsel *evsel,
struct perf_sample *sample)
{
unsigned long ptr = perf_evsel__intval(evsel, sample, "ptr"),
@@ -185,7 +187,7 @@ static int perf_evsel__process_alloc_event(struct perf_evsel *evsel,
return 0;
}
static int perf_evsel__process_alloc_node_event(struct perf_evsel *evsel,
static int perf_evsel__process_alloc_node_event(struct evsel *evsel,
struct perf_sample *sample)
{
int ret = perf_evsel__process_alloc_event(evsel, sample);
@@ -229,7 +231,7 @@ static struct alloc_stat *search_alloc_stat(unsigned long ptr,
return NULL;
}
static int perf_evsel__process_free_event(struct perf_evsel *evsel,
static int perf_evsel__process_free_event(struct evsel *evsel,
struct perf_sample *sample)
{
unsigned long ptr = perf_evsel__intval(evsel, sample, "ptr");
@@ -381,7 +383,7 @@ static int build_alloc_func_list(void)
* Find first non-memory allocation function from callchain.
* The allocation functions are in the 'alloc_func_list'.
*/
static u64 find_callsite(struct perf_evsel *evsel, struct perf_sample *sample)
static u64 find_callsite(struct evsel *evsel, struct perf_sample *sample)
{
struct addr_location al;
struct machine *machine = &kmem_session->machines.host;
@@ -728,7 +730,7 @@ static char *compact_gfp_string(unsigned long gfp_flags)
return NULL;
}
static int parse_gfp_flags(struct perf_evsel *evsel, struct perf_sample *sample,
static int parse_gfp_flags(struct evsel *evsel, struct perf_sample *sample,
unsigned int gfp_flags)
{
struct tep_record record = {
@@ -749,7 +751,8 @@ static int parse_gfp_flags(struct perf_evsel *evsel, struct perf_sample *sample,
}
trace_seq_init(&seq);
tep_event_info(&seq, evsel->tp_format, &record);
tep_print_event(evsel->tp_format->tep,
&seq, &record, "%s", TEP_PRINT_INFO);
str = strtok_r(seq.buffer, " ", &pos);
while (str) {
@@ -779,7 +782,7 @@ static int parse_gfp_flags(struct perf_evsel *evsel, struct perf_sample *sample,
return 0;
}
static int perf_evsel__process_page_alloc_event(struct perf_evsel *evsel,
static int perf_evsel__process_page_alloc_event(struct evsel *evsel,
struct perf_sample *sample)
{
u64 page;
@@ -852,7 +855,7 @@ static int perf_evsel__process_page_alloc_event(struct perf_evsel *evsel,
return 0;
}
static int perf_evsel__process_page_free_event(struct perf_evsel *evsel,
static int perf_evsel__process_page_free_event(struct evsel *evsel,
struct perf_sample *sample)
{
u64 page;
@@ -930,13 +933,13 @@ static bool perf_kmem__skip_sample(struct perf_sample *sample)
return false;
}
typedef int (*tracepoint_handler)(struct perf_evsel *evsel,
typedef int (*tracepoint_handler)(struct evsel *evsel,
struct perf_sample *sample);
static int process_sample_event(struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
int err = 0;
@@ -1363,8 +1366,8 @@ static void sort_result(void)
static int __cmd_kmem(struct perf_session *session)
{
int err = -EINVAL;
struct perf_evsel *evsel;
const struct perf_evsel_str_handler kmem_tracepoints[] = {
struct evsel *evsel;
const struct evsel_str_handler kmem_tracepoints[] = {
/* slab allocator */
{ "kmem:kmalloc", perf_evsel__process_alloc_event, },
{ "kmem:kmem_cache_alloc", perf_evsel__process_alloc_event, },
@@ -1967,7 +1970,7 @@ int cmd_kmem(int argc, const char **argv)
}
if (kmem_page) {
struct perf_evsel *evsel;
struct evsel *evsel;
evsel = perf_evlist__find_tracepoint_by_name(session->evlist,
"kmem:mm_page_alloc");

View File

@@ -2,15 +2,16 @@
#include "builtin.h"
#include "perf.h"
#include "util/build-id.h"
#include "util/evsel.h"
#include "util/evlist.h"
#include "util/term.h"
#include "util/cache.h"
#include "util/symbol.h"
#include "util/thread.h"
#include "util/header.h"
#include "util/session.h"
#include "util/intlist.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include "util/trace-event.h"
#include "util/debug.h"
@@ -19,6 +20,7 @@
#include "util/top.h"
#include "util/data.h"
#include "util/ordered-events.h"
#include "ui/ui.h"
#include <sys/prctl.h>
#ifdef HAVE_TIMERFD_SUPPORT
@@ -30,6 +32,7 @@
#include <fcntl.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/time64.h>
#include <linux/zalloc.h>
#include <errno.h>
@@ -57,7 +60,7 @@ static const char *get_filename_for_perf_kvm(void)
#ifdef HAVE_KVM_STAT_SUPPORT
#include "util/kvm-stat.h"
void exit_event_get_key(struct perf_evsel *evsel,
void exit_event_get_key(struct evsel *evsel,
struct perf_sample *sample,
struct event_key *key)
{
@@ -65,12 +68,12 @@ void exit_event_get_key(struct perf_evsel *evsel,
key->key = perf_evsel__intval(evsel, sample, kvm_exit_reason);
}
bool kvm_exit_event(struct perf_evsel *evsel)
bool kvm_exit_event(struct evsel *evsel)
{
return !strcmp(evsel->name, kvm_exit_trace);
}
bool exit_event_begin(struct perf_evsel *evsel,
bool exit_event_begin(struct evsel *evsel,
struct perf_sample *sample, struct event_key *key)
{
if (kvm_exit_event(evsel)) {
@@ -81,12 +84,12 @@ bool exit_event_begin(struct perf_evsel *evsel,
return false;
}
bool kvm_entry_event(struct perf_evsel *evsel)
bool kvm_entry_event(struct evsel *evsel)
{
return !strcmp(evsel->name, kvm_entry_trace);
}
bool exit_event_end(struct perf_evsel *evsel,
bool exit_event_end(struct evsel *evsel,
struct perf_sample *sample __maybe_unused,
struct event_key *key __maybe_unused)
{
@@ -286,7 +289,7 @@ static bool update_kvm_event(struct kvm_event *event, int vcpu_id,
}
static bool is_child_event(struct perf_kvm_stat *kvm,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct event_key *key)
{
@@ -396,7 +399,7 @@ static bool handle_end_event(struct perf_kvm_stat *kvm,
static
struct vcpu_event_record *per_vcpu_record(struct thread *thread,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
/* Only kvm_entry records vcpu id. */
@@ -419,7 +422,7 @@ struct vcpu_event_record *per_vcpu_record(struct thread *thread,
static bool handle_kvm_event(struct perf_kvm_stat *kvm,
struct thread *thread,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
struct vcpu_event_record *vcpu_record;
@@ -672,7 +675,7 @@ static bool skip_sample(struct perf_kvm_stat *kvm,
static int process_sample_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
int err = 0;
@@ -743,7 +746,7 @@ static bool verify_vcpu(int vcpu)
static s64 perf_kvm__mmap_read_idx(struct perf_kvm_stat *kvm, int idx,
u64 *mmap_time)
{
struct perf_evlist *evlist = kvm->evlist;
struct evlist *evlist = kvm->evlist;
union perf_event *event;
struct perf_mmap *md;
u64 timestamp;
@@ -972,7 +975,7 @@ static int kvm_events_live_report(struct perf_kvm_stat *kvm)
goto out;
/* everything is good - enable the events and process */
perf_evlist__enable(kvm->evlist);
evlist__enable(kvm->evlist);
while (!done) {
struct fdarray *fda = &kvm->evlist->pollfd;
@@ -993,7 +996,7 @@ static int kvm_events_live_report(struct perf_kvm_stat *kvm)
err = fdarray__poll(fda, 100);
}
perf_evlist__disable(kvm->evlist);
evlist__disable(kvm->evlist);
if (err == 0) {
sort_result(kvm);
@@ -1011,8 +1014,8 @@ out:
static int kvm_live_open_events(struct perf_kvm_stat *kvm)
{
int err, rc = -1;
struct perf_evsel *pos;
struct perf_evlist *evlist = kvm->evlist;
struct evsel *pos;
struct evlist *evlist = kvm->evlist;
char sbuf[STRERR_BUFSIZE];
perf_evlist__config(evlist, &kvm->opts, NULL);
@@ -1022,7 +1025,7 @@ static int kvm_live_open_events(struct perf_kvm_stat *kvm)
* This command processes KVM tracepoints from host only
*/
evlist__for_each_entry(evlist, pos) {
struct perf_event_attr *attr = &pos->attr;
struct perf_event_attr *attr = &pos->core.attr;
/* make sure these *are* set */
perf_evsel__set_sample_bit(pos, TID);
@@ -1048,7 +1051,7 @@ static int kvm_live_open_events(struct perf_kvm_stat *kvm)
attr->disabled = 1;
}
err = perf_evlist__open(evlist);
err = evlist__open(evlist);
if (err < 0) {
printf("Couldn't create the events: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
@@ -1058,7 +1061,7 @@ static int kvm_live_open_events(struct perf_kvm_stat *kvm)
if (perf_evlist__mmap(evlist, kvm->opts.mmap_pages) < 0) {
ui__error("Failed to mmap the events: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
perf_evlist__close(evlist);
evlist__close(evlist);
goto out;
}
@@ -1283,14 +1286,14 @@ kvm_events_report(struct perf_kvm_stat *kvm, int argc, const char **argv)
}
#ifdef HAVE_TIMERFD_SUPPORT
static struct perf_evlist *kvm_live_event_list(void)
static struct evlist *kvm_live_event_list(void)
{
struct perf_evlist *evlist;
struct evlist *evlist;
char *tp, *name, *sys;
int err = -1;
const char * const *events_tp;
evlist = perf_evlist__new();
evlist = evlist__new();
if (evlist == NULL)
return NULL;
@@ -1325,7 +1328,7 @@ static struct perf_evlist *kvm_live_event_list(void)
out:
if (err) {
perf_evlist__delete(evlist);
evlist__delete(evlist);
evlist = NULL;
}
@@ -1450,7 +1453,7 @@ static int kvm_events_live(struct perf_kvm_stat *kvm,
perf_session__set_id_hdr_size(kvm->session);
ordered_events__set_copy_on_queue(&kvm->session->ordered_events, true);
machine__synthesize_threads(&kvm->session->machines.host, &kvm->opts.target,
kvm->evlist->threads, false, 1);
kvm->evlist->core.threads, false, 1);
err = kvm_live_open_events(kvm);
if (err)
goto out;
@@ -1460,7 +1463,7 @@ static int kvm_events_live(struct perf_kvm_stat *kvm,
out:
perf_session__delete(kvm->session);
kvm->session = NULL;
perf_evlist__delete(kvm->evlist);
evlist__delete(kvm->evlist);
return err;
}

View File

@@ -10,14 +10,13 @@
*/
#include "builtin.h"
#include "perf.h"
#include "util/parse-events.h"
#include "util/cache.h"
#include "util/pmu.h"
#include "util/debug.h"
#include "util/metricgroup.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include <stdio.h>
static bool desc_flag = true;
static bool details_flag;

View File

@@ -4,13 +4,13 @@
#include "builtin.h"
#include "perf.h"
#include "util/evlist.h"
#include "util/evlist.h" // for struct evsel_str_handler
#include "util/evsel.h"
#include "util/cache.h"
#include "util/symbol.h"
#include "util/thread.h"
#include "util/header.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include "util/trace-event.h"
@@ -347,16 +347,16 @@ alloc_failed:
}
struct trace_lock_handler {
int (*acquire_event)(struct perf_evsel *evsel,
int (*acquire_event)(struct evsel *evsel,
struct perf_sample *sample);
int (*acquired_event)(struct perf_evsel *evsel,
int (*acquired_event)(struct evsel *evsel,
struct perf_sample *sample);
int (*contended_event)(struct perf_evsel *evsel,
int (*contended_event)(struct evsel *evsel,
struct perf_sample *sample);
int (*release_event)(struct perf_evsel *evsel,
int (*release_event)(struct evsel *evsel,
struct perf_sample *sample);
};
@@ -396,7 +396,7 @@ enum acquire_flags {
READ_LOCK = 2,
};
static int report_lock_acquire_event(struct perf_evsel *evsel,
static int report_lock_acquire_event(struct evsel *evsel,
struct perf_sample *sample)
{
void *addr;
@@ -468,7 +468,7 @@ end:
return 0;
}
static int report_lock_acquired_event(struct perf_evsel *evsel,
static int report_lock_acquired_event(struct evsel *evsel,
struct perf_sample *sample)
{
void *addr;
@@ -531,7 +531,7 @@ end:
return 0;
}
static int report_lock_contended_event(struct perf_evsel *evsel,
static int report_lock_contended_event(struct evsel *evsel,
struct perf_sample *sample)
{
void *addr;
@@ -586,7 +586,7 @@ end:
return 0;
}
static int report_lock_release_event(struct perf_evsel *evsel,
static int report_lock_release_event(struct evsel *evsel,
struct perf_sample *sample)
{
void *addr;
@@ -656,7 +656,7 @@ static struct trace_lock_handler report_lock_ops = {
static struct trace_lock_handler *trace_handler;
static int perf_evsel__process_lock_acquire(struct perf_evsel *evsel,
static int perf_evsel__process_lock_acquire(struct evsel *evsel,
struct perf_sample *sample)
{
if (trace_handler->acquire_event)
@@ -664,7 +664,7 @@ static int perf_evsel__process_lock_acquire(struct perf_evsel *evsel,
return 0;
}
static int perf_evsel__process_lock_acquired(struct perf_evsel *evsel,
static int perf_evsel__process_lock_acquired(struct evsel *evsel,
struct perf_sample *sample)
{
if (trace_handler->acquired_event)
@@ -672,7 +672,7 @@ static int perf_evsel__process_lock_acquired(struct perf_evsel *evsel,
return 0;
}
static int perf_evsel__process_lock_contended(struct perf_evsel *evsel,
static int perf_evsel__process_lock_contended(struct evsel *evsel,
struct perf_sample *sample)
{
if (trace_handler->contended_event)
@@ -680,7 +680,7 @@ static int perf_evsel__process_lock_contended(struct perf_evsel *evsel,
return 0;
}
static int perf_evsel__process_lock_release(struct perf_evsel *evsel,
static int perf_evsel__process_lock_release(struct evsel *evsel,
struct perf_sample *sample)
{
if (trace_handler->release_event)
@@ -806,13 +806,13 @@ static int dump_info(void)
return rc;
}
typedef int (*tracepoint_handler)(struct perf_evsel *evsel,
typedef int (*tracepoint_handler)(struct evsel *evsel,
struct perf_sample *sample);
static int process_sample_event(struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
int err = 0;
@@ -847,7 +847,7 @@ static void sort_result(void)
}
}
static const struct perf_evsel_str_handler lock_tracepoints[] = {
static const struct evsel_str_handler lock_tracepoints[] = {
{ "lock:lock_acquire", perf_evsel__process_lock_acquire, }, /* CONFIG_LOCKDEP */
{ "lock:lock_acquired", perf_evsel__process_lock_acquired, }, /* CONFIG_LOCKDEP, CONFIG_LOCK_STAT */
{ "lock:lock_contended", perf_evsel__process_lock_contended, }, /* CONFIG_LOCKDEP, CONFIG_LOCK_STAT */


@@ -11,8 +11,10 @@
#include "util/tool.h"
#include "util/session.h"
#include "util/data.h"
#include "util/map_symbol.h"
#include "util/mem-events.h"
#include "util/debug.h"
#include "util/dso.h"
#include "util/map.h"
#include "util/symbol.h"
@@ -230,7 +232,7 @@ out_put:
static int process_sample_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel __maybe_unused,
struct evsel *evsel __maybe_unused,
struct machine *machine)
{
return dump_raw_samples(tool, event, sample, machine);


@@ -16,17 +16,18 @@
#include <stdlib.h>
#include <string.h>
#include "perf.h"
#include "builtin.h"
#include "namespaces.h"
#include "util/build-id.h"
#include "util/strlist.h"
#include "util/strfilter.h"
#include "util/symbol.h"
#include "util/symbol_conf.h"
#include "util/debug.h"
#include <subcmd/parse-options.h>
#include "util/probe-finder.h"
#include "util/probe-event.h"
#include "util/probe-file.h"
#include <linux/string.h>
#include <linux/zalloc.h>
#define DEFAULT_VAR_FILTER "!__k???tab_* & !__crc_*"


@@ -8,8 +8,6 @@
*/
#include "builtin.h"
#include "perf.h"
#include "util/build-id.h"
#include <subcmd/parse-options.h>
#include "util/parse-events.h"
@@ -22,9 +20,11 @@
#include "util/evlist.h"
#include "util/evsel.h"
#include "util/debug.h"
#include "util/target.h"
#include "util/session.h"
#include "util/tool.h"
#include "util/symbol.h"
#include "util/record.h"
#include "util/cpumap.h"
#include "util/thread_map.h"
#include "util/data.h"
@@ -42,6 +42,7 @@
#include "util/units.h"
#include "util/bpf-event.h"
#include "asm/bug.h"
#include "perf.h"
#include <errno.h>
#include <inttypes.h>
@@ -52,6 +53,7 @@
#include <signal.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <linux/string.h>
#include <linux/time64.h>
#include <linux/zalloc.h>
@@ -73,7 +75,7 @@ struct record {
u64 bytes_written;
struct perf_data data;
struct auxtrace_record *itr;
struct perf_evlist *evlist;
struct evlist *evlist;
struct perf_session *session;
int realtime_prio;
bool no_buildid;
@@ -346,7 +348,7 @@ static void record__aio_set_pos(int trace_fd, off_t pos)
static void record__aio_mmap_read_sync(struct record *rec)
{
int i;
struct perf_evlist *evlist = rec->evlist;
struct evlist *evlist = rec->evlist;
struct perf_mmap *maps = evlist->mmap;
if (!record__aio_enabled(rec))
@@ -613,19 +615,35 @@ out:
return rc;
}
static void record__read_auxtrace_snapshot(struct record *rec)
static void record__read_auxtrace_snapshot(struct record *rec, bool on_exit)
{
pr_debug("Recording AUX area tracing snapshot\n");
if (record__auxtrace_read_snapshot_all(rec) < 0) {
trigger_error(&auxtrace_snapshot_trigger);
} else {
if (auxtrace_record__snapshot_finish(rec->itr))
if (auxtrace_record__snapshot_finish(rec->itr, on_exit))
trigger_error(&auxtrace_snapshot_trigger);
else
trigger_ready(&auxtrace_snapshot_trigger);
}
}
static int record__auxtrace_snapshot_exit(struct record *rec)
{
if (trigger_is_error(&auxtrace_snapshot_trigger))
return 0;
if (!auxtrace_record__snapshot_started &&
auxtrace_record__snapshot_start(rec->itr))
return -1;
record__read_auxtrace_snapshot(rec, true);
if (trigger_is_error(&auxtrace_snapshot_trigger))
return -1;
return 0;
}
static int record__auxtrace_init(struct record *rec)
{
int err;
@@ -654,7 +672,8 @@ int record__auxtrace_mmap_read(struct record *rec __maybe_unused,
}
static inline
void record__read_auxtrace_snapshot(struct record *rec __maybe_unused)
void record__read_auxtrace_snapshot(struct record *rec __maybe_unused,
bool on_exit __maybe_unused)
{
}
@@ -664,6 +683,12 @@ int auxtrace_record__snapshot_start(struct auxtrace_record *itr __maybe_unused)
return 0;
}
static inline
int record__auxtrace_snapshot_exit(struct record *rec __maybe_unused)
{
return 0;
}
static int record__auxtrace_init(struct record *rec __maybe_unused)
{
return 0;
@@ -672,7 +697,7 @@ static int record__auxtrace_init(struct record *rec __maybe_unused)
#endif
static int record__mmap_evlist(struct record *rec,
struct perf_evlist *evlist)
struct evlist *evlist)
{
struct record_opts *opts = &rec->opts;
char msg[512];
@@ -713,8 +738,8 @@ static int record__mmap(struct record *rec)
static int record__open(struct record *rec)
{
char msg[BUFSIZ];
struct perf_evsel *pos;
struct perf_evlist *evlist = rec->evlist;
struct evsel *pos;
struct evlist *evlist = rec->evlist;
struct perf_session *session = rec->session;
struct record_opts *opts = &rec->opts;
int rc = 0;
@@ -732,14 +757,14 @@ static int record__open(struct record *rec)
pos->tracking = 0;
pos = perf_evlist__last(evlist);
pos->tracking = 1;
pos->attr.enable_on_exec = 1;
pos->core.attr.enable_on_exec = 1;
}
perf_evlist__config(evlist, opts, &callchain_param);
evlist__for_each_entry(evlist, pos) {
try_again:
if (perf_evsel__open(pos, pos->cpus, pos->threads) < 0) {
if (evsel__open(pos, pos->core.cpus, pos->core.threads) < 0) {
if (perf_evsel__fallback(pos, errno, msg, sizeof(msg))) {
if (verbose > 0)
ui__warning("%s\n", msg);
@@ -782,7 +807,7 @@ out:
static int process_sample_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct record *rec = container_of(tool, struct record, tool);
@@ -875,7 +900,7 @@ static void record__adjust_affinity(struct record *rec, struct perf_mmap *map)
static size_t process_comp_header(void *record, size_t increment)
{
struct compressed_event *event = record;
struct perf_record_compressed *event = record;
size_t size = sizeof(*event);
if (increment) {
@@ -893,7 +918,7 @@ static size_t zstd_compress(struct perf_session *session, void *dst, size_t dst_
void *src, size_t src_size)
{
size_t compressed;
size_t max_record_size = PERF_SAMPLE_MAX_SIZE - sizeof(struct compressed_event) - 1;
size_t max_record_size = PERF_SAMPLE_MAX_SIZE - sizeof(struct perf_record_compressed) - 1;
compressed = zstd_compress_stream_to_records(&session->zstd_data, dst, dst_size, src, src_size,
max_record_size, process_comp_header);
@@ -904,7 +929,7 @@ static size_t zstd_compress(struct perf_session *session, void *dst, size_t dst_
return compressed;
}
static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evlist,
static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
bool overwrite, bool synch)
{
u64 bytes_written = rec->bytes_written;
@@ -1002,7 +1027,7 @@ static void record__init_features(struct record *rec)
if (rec->no_buildid)
perf_header__clear_feat(&session->header, HEADER_BUILD_ID);
if (!have_tracepoints(&rec->evlist->entries))
if (!have_tracepoints(&rec->evlist->core.entries))
perf_header__clear_feat(&session->header, HEADER_TRACING_DATA);
if (!rec->opts.branch_stack)
@@ -1047,7 +1072,7 @@ record__finish_output(struct record *rec)
static int record__synthesize_workload(struct record *rec, bool tail)
{
int err;
struct thread_map *thread_map;
struct perf_thread_map *thread_map;
if (rec->opts.tail_synthesize != tail)
return 0;
@@ -1060,7 +1085,7 @@ static int record__synthesize_workload(struct record *rec, bool tail)
process_synthesized_event,
&rec->session->machines.host,
rec->opts.sample_address);
thread_map__put(thread_map);
perf_thread_map__put(thread_map);
return err;
}
@@ -1165,7 +1190,7 @@ perf_event__synth_time_conv(const struct perf_event_mmap_page *pc __maybe_unused
}
static const struct perf_event_mmap_page *
perf_evlist__pick_pc(struct perf_evlist *evlist)
perf_evlist__pick_pc(struct evlist *evlist)
{
if (evlist) {
if (evlist->mmap && evlist->mmap[0].base)
@@ -1218,7 +1243,7 @@ static int record__synthesize(struct record *rec, bool tail)
return err;
}
if (have_tracepoints(&rec->evlist->entries)) {
if (have_tracepoints(&rec->evlist->core.entries)) {
/*
* FIXME err <= 0 here actually means that
* there were no tracepoints so its not really
@@ -1275,7 +1300,7 @@ static int record__synthesize(struct record *rec, bool tail)
if (err)
goto out;
err = perf_event__synthesize_thread_map2(&rec->tool, rec->evlist->threads,
err = perf_event__synthesize_thread_map2(&rec->tool, rec->evlist->core.threads,
process_synthesized_event,
NULL);
if (err < 0) {
@@ -1283,7 +1308,7 @@ static int record__synthesize(struct record *rec, bool tail)
return err;
}
err = perf_event__synthesize_cpu_map(&rec->tool, rec->evlist->cpus,
err = perf_event__synthesize_cpu_map(&rec->tool, rec->evlist->core.cpus,
process_synthesized_event, NULL);
if (err < 0) {
pr_err("Couldn't synthesize cpu map.\n");
@@ -1295,7 +1320,7 @@ static int record__synthesize(struct record *rec, bool tail)
if (err < 0)
pr_warning("Couldn't synthesize bpf events.\n");
err = __machine__synthesize_threads(machine, tool, &opts->target, rec->evlist->threads,
err = __machine__synthesize_threads(machine, tool, &opts->target, rec->evlist->core.threads,
process_synthesized_event, opts->sample_address,
1);
out:
@@ -1313,7 +1338,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
struct perf_data *data = &rec->data;
struct perf_session *session;
bool disabled = false, draining = false;
struct perf_evlist *sb_evlist = NULL;
struct evlist *sb_evlist = NULL;
int fd;
float ratio = 0;
@@ -1375,7 +1400,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
* because we synthesize event name through the pipe
* and need the id for that.
*/
if (data->is_pipe && rec->evlist->nr_entries == 1)
if (data->is_pipe && rec->evlist->core.nr_entries == 1)
rec->opts.sample_id = true;
if (record__open(rec) != 0) {
@@ -1453,7 +1478,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
* so don't spoil it by prematurely enabling them.
*/
if (!target__none(&opts->target) && !opts->initial_delay)
perf_evlist__enable(rec->evlist);
evlist__enable(rec->evlist);
/*
* Let the child rip
@@ -1506,7 +1531,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
if (opts->initial_delay) {
usleep(opts->initial_delay * USEC_PER_MSEC);
perf_evlist__enable(rec->evlist);
evlist__enable(rec->evlist);
}
trigger_ready(&auxtrace_snapshot_trigger);
@@ -1536,7 +1561,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
if (auxtrace_record__snapshot_started) {
auxtrace_record__snapshot_started = 0;
if (!trigger_is_error(&auxtrace_snapshot_trigger))
record__read_auxtrace_snapshot(rec);
record__read_auxtrace_snapshot(rec, false);
if (trigger_is_error(&auxtrace_snapshot_trigger)) {
pr_err("AUX area tracing snapshot failed\n");
err = -1;
@@ -1605,13 +1630,17 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
*/
if (done && !disabled && !target__none(&opts->target)) {
trigger_off(&auxtrace_snapshot_trigger);
perf_evlist__disable(rec->evlist);
evlist__disable(rec->evlist);
disabled = true;
}
}
trigger_off(&auxtrace_snapshot_trigger);
trigger_off(&switch_output_trigger);
if (opts->auxtrace_snapshot_on_exit)
record__auxtrace_snapshot_exit(rec);
if (forks && workload_exec_errno) {
char msg[STRERR_BUFSIZE];
const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
@@ -2265,7 +2294,7 @@ int cmd_record(int argc, const char **argv)
CPU_ZERO(&rec->affinity_mask);
rec->opts.affinity = PERF_AFFINITY_SYS;
rec->evlist = perf_evlist__new();
rec->evlist = evlist__new();
if (rec->evlist == NULL)
return -ENOMEM;
@@ -2345,7 +2374,7 @@ int cmd_record(int argc, const char **argv)
if (symbol_conf.kptr_restrict && !perf_evlist__exclude_kernel(rec->evlist))
pr_warning(
"WARNING: Kernel address maps (/proc/{kallsyms,modules}) are restricted,\n"
"check /proc/sys/kernel/kptr_restrict.\n\n"
"check /proc/sys/kernel/kptr_restrict and /proc/sys/kernel/perf_event_paranoid.\n\n"
"Samples in kernel functions may not be resolved if a suitable vmlinux\n"
"file is not found in the buildid cache or in the vmlinux path.\n\n"
"Samples in kernel modules won't be resolved at all.\n\n"
@@ -2386,7 +2415,7 @@ int cmd_record(int argc, const char **argv)
if (record.opts.overwrite)
record.opts.tail_synthesize = true;
if (rec->evlist->nr_entries == 0 &&
if (rec->evlist->core.nr_entries == 0 &&
__perf_evlist__add_default(rec->evlist, !record.opts.no_samples) < 0) {
pr_err("Not enough memory for event selector list\n");
goto out;
@@ -2449,7 +2478,7 @@ int cmd_record(int argc, const char **argv)
err = __cmd_record(&record, argc, argv);
out:
perf_evlist__delete(rec->evlist);
evlist__delete(rec->evlist);
symbol__exit();
auxtrace_record__free(rec->itr);
return err;


@@ -12,12 +12,16 @@
#include "util/annotate.h"
#include "util/color.h"
#include "util/dso.h"
#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/err.h>
#include <linux/zalloc.h>
#include "util/map.h"
#include "util/symbol.h"
#include "util/map_symbol.h"
#include "util/mem-events.h"
#include "util/branch.h"
#include "util/callchain.h"
#include "util/values.h"
@@ -25,8 +29,10 @@
#include "util/debug.h"
#include "util/evlist.h"
#include "util/evsel.h"
#include "util/evswitch.h"
#include "util/header.h"
#include "util/session.h"
#include "util/srcline.h"
#include "util/tool.h"
#include <subcmd/parse-options.h>
@@ -42,6 +48,9 @@
#include "util/auxtrace.h"
#include "util/units.h"
#include "util/branch.h"
#include "util/util.h"
#include "ui/ui.h"
#include "ui/progress.h"
#include <dlfcn.h>
#include <errno.h>
@@ -50,6 +59,7 @@
#include <linux/ctype.h>
#include <signal.h>
#include <linux/bitmap.h>
#include <linux/string.h>
#include <linux/stringify.h>
#include <linux/time64.h>
#include <sys/types.h>
@@ -60,6 +70,7 @@
struct report {
struct perf_tool tool;
struct perf_session *session;
struct evswitch evswitch;
bool use_tui, use_gtk, use_stdio;
bool show_full_info;
bool show_threads;
@@ -128,7 +139,7 @@ static int hist_iter__report_callback(struct hist_entry_iter *iter,
int err = 0;
struct report *rep = arg;
struct hist_entry *he = iter->he;
struct perf_evsel *evsel = iter->evsel;
struct evsel *evsel = iter->evsel;
struct perf_sample *sample = iter->sample;
struct mem_info *mi;
struct branch_info *bi;
@@ -172,7 +183,7 @@ static int hist_iter__branch_callback(struct hist_entry_iter *iter,
struct report *rep = arg;
struct branch_info *bi;
struct perf_sample *sample = iter->sample;
struct perf_evsel *evsel = iter->evsel;
struct evsel *evsel = iter->evsel;
int err;
if (!ui__has_annotation() && !rep->symbol_ipc)
@@ -193,7 +204,7 @@ out:
}
static void setup_forced_leader(struct report *report,
struct perf_evlist *evlist)
struct evlist *evlist)
{
if (report->group_set)
perf_evlist__force_leader(evlist);
@@ -208,7 +219,7 @@ static int process_feature_event(struct perf_session *session,
return perf_event__process_feature(session, event);
if (event->feat.feat_id != HEADER_LAST_FEATURE) {
pr_err("failed: wrong feature ID: %" PRIu64 "\n",
pr_err("failed: wrong feature ID: %" PRI_lu64 "\n",
event->feat.feat_id);
return -1;
}
@@ -225,7 +236,7 @@ static int process_feature_event(struct perf_session *session,
static int process_sample_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct report *rep = container_of(tool, struct report, tool);
@@ -243,6 +254,9 @@ static int process_sample_event(struct perf_tool *tool,
return 0;
}
if (evswitch__discard(&rep->evswitch, evsel))
return 0;
if (machine__resolve(machine, &al, sample) < 0) {
pr_debug("problem processing %d event, skipping it.\n",
event->header.type);
@@ -292,7 +306,7 @@ out_put:
static int process_read_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine __maybe_unused)
{
struct report *rep = container_of(tool, struct report, tool);
@@ -400,7 +414,7 @@ static size_t hists__fprintf_nr_sample_events(struct hists *hists, struct report
char unit;
unsigned long nr_samples = hists->stats.nr_events[PERF_RECORD_SAMPLE];
u64 nr_events = hists->stats.total_period;
struct perf_evsel *evsel = hists_to_evsel(hists);
struct evsel *evsel = hists_to_evsel(hists);
char buf[512];
size_t size = sizeof(buf);
int socked_id = hists->socket_filter;
@@ -414,7 +428,7 @@ static size_t hists__fprintf_nr_sample_events(struct hists *hists, struct report
}
if (perf_evsel__is_group_event(evsel)) {
struct perf_evsel *pos;
struct evsel *pos;
perf_evsel__group_desc(evsel, buf, size);
evname = buf;
@@ -436,7 +450,7 @@ static size_t hists__fprintf_nr_sample_events(struct hists *hists, struct report
ret = fprintf(fp, "# Samples: %lu%c", nr_samples, unit);
if (evname != NULL) {
ret += fprintf(fp, " of event%s '%s'",
evsel->nr_members > 1 ? "s" : "", evname);
evsel->core.nr_members > 1 ? "s" : "", evname);
}
if (rep->time_str)
@@ -459,11 +473,11 @@ static size_t hists__fprintf_nr_sample_events(struct hists *hists, struct report
return ret + fprintf(fp, "\n#\n");
}
static int perf_evlist__tty_browse_hists(struct perf_evlist *evlist,
static int perf_evlist__tty_browse_hists(struct evlist *evlist,
struct report *rep,
const char *help)
{
struct perf_evsel *pos;
struct evsel *pos;
if (!quiet) {
fprintf(stdout, "#\n# Total Lost Samples: %" PRIu64 "\n#\n",
@@ -532,7 +546,7 @@ static void report__warn_kptr_restrict(const struct report *rep)
static int report__gtk_browse_hists(struct report *rep, const char *help)
{
int (*hist_browser)(struct perf_evlist *evlist, const char *help,
int (*hist_browser)(struct evlist *evlist, const char *help,
struct hist_browser_timer *timer, float min_pcnt);
hist_browser = dlsym(perf_gtk_handle, "perf_evlist__gtk_browse_hists");
@@ -549,7 +563,7 @@ static int report__browse_hists(struct report *rep)
{
int ret;
struct perf_session *session = rep->session;
struct perf_evlist *evlist = session->evlist;
struct evlist *evlist = session->evlist;
const char *help = perf_tip(system_path(TIPDIR));
if (help == NULL) {
@@ -586,7 +600,7 @@ static int report__browse_hists(struct report *rep)
static int report__collapse_hists(struct report *rep)
{
struct ui_progress prog;
struct perf_evsel *pos;
struct evsel *pos;
int ret = 0;
ui_progress__init(&prog, rep->nr_entries, "Merging related events...");
@@ -623,7 +637,7 @@ static int hists__resort_cb(struct hist_entry *he, void *arg)
struct symbol *sym = he->ms.sym;
if (rep->symbol_ipc && sym && !sym->annotate2) {
struct perf_evsel *evsel = hists_to_evsel(he->hists);
struct evsel *evsel = hists_to_evsel(he->hists);
symbol__annotate2(sym, he->ms.map, evsel,
&annotation__default_options, NULL);
@@ -635,7 +649,7 @@ static int hists__resort_cb(struct hist_entry *he, void *arg)
static void report__output_resort(struct report *rep)
{
struct ui_progress prog;
struct perf_evsel *pos;
struct evsel *pos;
ui_progress__init(&prog, rep->nr_entries, "Sorting events for output...");
@@ -818,7 +832,7 @@ static int __cmd_report(struct report *rep)
{
int ret;
struct perf_session *session = rep->session;
struct perf_evsel *pos;
struct evsel *pos;
struct perf_data *data = session->data;
signal(SIGINT, sig_handler);
@@ -1189,6 +1203,7 @@ int cmd_report(int argc, const char **argv)
OPT_CALLBACK(0, "time-quantum", &symbol_conf.time_quantum, "time (ms|us|ns|s)",
"Set time quantum for time sort key (default 100ms)",
parse_time_quantum),
OPTS_EVSWITCH(&report.evswitch),
OPT_END()
};
struct perf_data data = {
@@ -1257,6 +1272,10 @@ repeat:
if (session == NULL)
return -1;
ret = evswitch__init(&report.evswitch, session->evlist, stderr);
if (ret)
return ret;
if (zstd_init(&(session->zstd_data), 0) < 0)
pr_warning("Decompression initialization failed. Reported data may be incomplete.\n");
@@ -1271,6 +1290,8 @@ repeat:
has_br_stack = perf_header__has_feat(&session->header,
HEADER_BRANCH_STACK);
if (perf_evlist__combined_sample_type(session->evlist) & PERF_SAMPLE_STACK_USER)
has_br_stack = false;
setup_forced_leader(&report, session->evlist);


@@ -1,9 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
#include "builtin.h"
#include "perf.h"
#include "perf-sys.h"
#include "util/evlist.h"
#include "util/cache.h"
#include "util/evsel.h"
#include "util/symbol.h"
#include "util/thread.h"
@@ -18,6 +18,7 @@
#include "util/callchain.h"
#include "util/time-utils.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include "util/trace-event.h"
@@ -133,13 +134,13 @@ typedef int (*sort_fn_t)(struct work_atoms *, struct work_atoms *);
struct perf_sched;
struct trace_sched_handler {
int (*switch_event)(struct perf_sched *sched, struct perf_evsel *evsel,
int (*switch_event)(struct perf_sched *sched, struct evsel *evsel,
struct perf_sample *sample, struct machine *machine);
int (*runtime_event)(struct perf_sched *sched, struct perf_evsel *evsel,
int (*runtime_event)(struct perf_sched *sched, struct evsel *evsel,
struct perf_sample *sample, struct machine *machine);
int (*wakeup_event)(struct perf_sched *sched, struct perf_evsel *evsel,
int (*wakeup_event)(struct perf_sched *sched, struct evsel *evsel,
struct perf_sample *sample, struct machine *machine);
/* PERF_RECORD_FORK event, not sched_process_fork tracepoint */
@@ -147,7 +148,7 @@ struct trace_sched_handler {
struct machine *machine);
int (*migrate_task_event)(struct perf_sched *sched,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine);
};
@@ -159,11 +160,11 @@ struct perf_sched_map {
DECLARE_BITMAP(comp_cpus_mask, MAX_CPUS);
int *comp_cpus;
bool comp;
struct thread_map *color_pids;
struct perf_thread_map *color_pids;
const char *color_pids_str;
struct cpu_map *color_cpus;
struct perf_cpu_map *color_cpus;
const char *color_cpus_str;
struct cpu_map *cpus;
struct perf_cpu_map *cpus;
const char *cpus_str;
};
@@ -799,7 +800,7 @@ static void test_calibrations(struct perf_sched *sched)
static int
replay_wakeup_event(struct perf_sched *sched,
struct perf_evsel *evsel, struct perf_sample *sample,
struct evsel *evsel, struct perf_sample *sample,
struct machine *machine __maybe_unused)
{
const char *comm = perf_evsel__strval(evsel, sample, "comm");
@@ -820,7 +821,7 @@ replay_wakeup_event(struct perf_sched *sched,
}
static int replay_switch_event(struct perf_sched *sched,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine __maybe_unused)
{
@@ -1093,7 +1094,7 @@ add_sched_in_event(struct work_atoms *atoms, u64 timestamp)
}
static int latency_switch_event(struct perf_sched *sched,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -1163,7 +1164,7 @@ out_put:
}
static int latency_runtime_event(struct perf_sched *sched,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -1198,7 +1199,7 @@ out_put:
}
static int latency_wakeup_event(struct perf_sched *sched,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -1259,7 +1260,7 @@ out_put:
}
static int latency_migrate_task_event(struct perf_sched *sched,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -1470,7 +1471,7 @@ again:
}
static int process_sched_wakeup_event(struct perf_tool *tool,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -1514,7 +1515,7 @@ map__findnew_thread(struct perf_sched *sched, struct machine *machine, pid_t pid
return thread;
}
static int map_switch_event(struct perf_sched *sched, struct perf_evsel *evsel,
static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
struct perf_sample *sample, struct machine *machine)
{
const u32 next_pid = perf_evsel__intval(evsel, sample, "next_pid");
@@ -1655,7 +1656,7 @@ out:
}
static int process_sched_switch_event(struct perf_tool *tool,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -1681,7 +1682,7 @@ static int process_sched_switch_event(struct perf_tool *tool,
}
static int process_sched_runtime_event(struct perf_tool *tool,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -1711,7 +1712,7 @@ static int perf_sched__process_fork_event(struct perf_tool *tool,
}
static int process_sched_migrate_task_event(struct perf_tool *tool,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -1724,14 +1725,14 @@ static int process_sched_migrate_task_event(struct perf_tool *tool,
}
typedef int (*tracepoint_handler)(struct perf_tool *tool,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine);
static int perf_sched__process_tracepoint_sample(struct perf_tool *tool __maybe_unused,
union perf_event *event __maybe_unused,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
int err = 0;
@@ -1777,7 +1778,7 @@ static int perf_sched__process_comm(struct perf_tool *tool __maybe_unused,
static int perf_sched__read_events(struct perf_sched *sched)
{
const struct perf_evsel_str_handler handlers[] = {
const struct evsel_str_handler handlers[] = {
{ "sched:sched_switch", process_sched_switch_event, },
{ "sched:sched_stat_runtime", process_sched_runtime_event, },
{ "sched:sched_wakeup", process_sched_wakeup_event, },
@@ -1839,7 +1840,7 @@ static inline void print_sched_time(unsigned long long nsecs, int width)
* returns runtime data for event, allocating memory for it the
* first time it is used.
*/
static struct evsel_runtime *perf_evsel__get_runtime(struct perf_evsel *evsel)
static struct evsel_runtime *perf_evsel__get_runtime(struct evsel *evsel)
{
struct evsel_runtime *r = evsel->priv;
@@ -1854,7 +1855,7 @@ static struct evsel_runtime *perf_evsel__get_runtime(struct perf_evsel *evsel)
/*
* save last time event was seen per cpu
*/
static void perf_evsel__save_time(struct perf_evsel *evsel,
static void perf_evsel__save_time(struct evsel *evsel,
u64 timestamp, u32 cpu)
{
struct evsel_runtime *r = perf_evsel__get_runtime(evsel);
@@ -1881,7 +1882,7 @@ static void perf_evsel__save_time(struct perf_evsel *evsel,
}
/* returns last time this event was seen on the given cpu */
static u64 perf_evsel__get_time(struct perf_evsel *evsel, u32 cpu)
static u64 perf_evsel__get_time(struct evsel *evsel, u32 cpu)
{
struct evsel_runtime *r = perf_evsel__get_runtime(evsel);
@@ -1988,7 +1989,7 @@ static char task_state_char(struct thread *thread, int state)
}
static void timehist_print_sample(struct perf_sched *sched,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct addr_location *al,
struct thread *thread,
@@ -2121,7 +2122,7 @@ static void timehist_update_runtime_stats(struct thread_runtime *r,
}
static bool is_idle_sample(struct perf_sample *sample,
struct perf_evsel *evsel)
struct evsel *evsel)
{
/* pid 0 == swapper == idle task */
if (strcmp(perf_evsel__name(evsel), "sched:sched_switch") == 0)
@@ -2132,7 +2133,7 @@ static bool is_idle_sample(struct perf_sample *sample,
static void save_task_callchain(struct perf_sched *sched,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct callchain_cursor *cursor = &callchain_cursor;
@@ -2286,7 +2287,7 @@ static void save_idle_callchain(struct perf_sched *sched,
static struct thread *timehist_get_thread(struct perf_sched *sched,
struct perf_sample *sample,
struct machine *machine,
struct perf_evsel *evsel)
struct evsel *evsel)
{
struct thread *thread;
@@ -2332,7 +2333,7 @@ static struct thread *timehist_get_thread(struct perf_sched *sched,
static bool timehist_skip_sample(struct perf_sched *sched,
struct thread *thread,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
bool rc = false;
@@ -2354,7 +2355,7 @@ static bool timehist_skip_sample(struct perf_sched *sched,
}
static void timehist_print_wakeup_event(struct perf_sched *sched,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine,
struct thread *awakened)
@@ -2389,7 +2390,7 @@ static void timehist_print_wakeup_event(struct perf_sched *sched,
static int timehist_sched_wakeup_event(struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -2419,7 +2420,7 @@ static int timehist_sched_wakeup_event(struct perf_tool *tool,
}
static void timehist_print_migration_event(struct perf_sched *sched,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine,
struct thread *migrated)
@@ -2473,7 +2474,7 @@ static void timehist_print_migration_event(struct perf_sched *sched,
static int timehist_migrate_task_event(struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -2501,7 +2502,7 @@ static int timehist_migrate_task_event(struct perf_tool *tool,
static int timehist_sched_change_event(struct perf_tool *tool,
union perf_event *event,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -2627,7 +2628,7 @@ out:
static int timehist_sched_switch_event(struct perf_tool *tool,
union perf_event *event,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine __maybe_unused)
{
@@ -2643,7 +2644,7 @@ static int process_lost(struct perf_tool *tool __maybe_unused,
timestamp__scnprintf_usec(sample->time, tstr, sizeof(tstr));
printf("%15s ", tstr);
printf("lost %" PRIu64 " events on cpu %d\n", event->lost.lost, sample->cpu);
printf("lost %" PRI_lu64 " events on cpu %d\n", event->lost.lost, sample->cpu);
return 0;
}
@@ -2897,14 +2898,14 @@ static void timehist_print_summary(struct perf_sched *sched,
typedef int (*sched_handler)(struct perf_tool *tool,
union perf_event *event,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine);
static int perf_timehist__process_sample(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
@@ -2924,12 +2925,12 @@ static int perf_timehist__process_sample(struct perf_tool *tool,
}
static int timehist_check_attr(struct perf_sched *sched,
struct perf_evlist *evlist)
struct evlist *evlist)
{
struct perf_evsel *evsel;
struct evsel *evsel;
struct evsel_runtime *er;
list_for_each_entry(evsel, &evlist->entries, node) {
list_for_each_entry(evsel, &evlist->core.entries, core.node) {
er = perf_evsel__get_runtime(evsel);
if (er == NULL) {
pr_err("Failed to allocate memory for evsel runtime data\n");
@@ -2948,12 +2949,12 @@ static int timehist_check_attr(struct perf_sched *sched,
static int perf_sched__timehist(struct perf_sched *sched)
{
const struct perf_evsel_str_handler handlers[] = {
const struct evsel_str_handler handlers[] = {
{ "sched:sched_switch", timehist_sched_switch_event, },
{ "sched:sched_wakeup", timehist_sched_wakeup_event, },
{ "sched:sched_wakeup_new", timehist_sched_wakeup_event, },
};
const struct perf_evsel_str_handler migrate_handlers[] = {
const struct evsel_str_handler migrate_handlers[] = {
{ "sched:sched_migrate_task", timehist_migrate_task_event, },
};
struct perf_data data = {
@@ -2963,7 +2964,7 @@ static int perf_sched__timehist(struct perf_sched *sched)
};
struct perf_session *session;
struct perf_evlist *evlist;
struct evlist *evlist;
int err = -1;
/*
@@ -3170,7 +3171,7 @@ static int perf_sched__lat(struct perf_sched *sched)
static int setup_map_cpus(struct perf_sched *sched)
{
struct cpu_map *map;
struct perf_cpu_map *map;
sched->max_cpu = sysconf(_SC_NPROCESSORS_CONF);
@@ -3183,7 +3184,7 @@ static int setup_map_cpus(struct perf_sched *sched)
if (!sched->map.cpus_str)
return 0;
map = cpu_map__new(sched->map.cpus_str);
map = perf_cpu_map__new(sched->map.cpus_str);
if (!map) {
pr_err("failed to get cpus map from %s\n", sched->map.cpus_str);
return -1;
@@ -3195,7 +3196,7 @@ static int setup_map_cpus(struct perf_sched *sched)
static int setup_color_pids(struct perf_sched *sched)
{
struct thread_map *map;
struct perf_thread_map *map;
if (!sched->map.color_pids_str)
return 0;
@@ -3212,12 +3213,12 @@ static int setup_color_pids(struct perf_sched *sched)
static int setup_color_cpus(struct perf_sched *sched)
{
struct cpu_map *map;
struct perf_cpu_map *map;
if (!sched->map.color_cpus_str)
return 0;
map = cpu_map__new(sched->map.color_cpus_str);
map = perf_cpu_map__new(sched->map.color_cpus_str);
if (!map) {
pr_err("failed to get thread map from %s\n", sched->map.color_cpus_str);
return -1;


@@ -1,9 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
#include "builtin.h"
#include "perf.h"
#include "util/cache.h"
#include "util/counts.h"
#include "util/debug.h"
#include "util/dso.h"
#include <subcmd/exec-cmd.h>
#include "util/header.h"
#include <subcmd/parse-options.h>
@@ -11,11 +11,13 @@
#include "util/session.h"
#include "util/tool.h"
#include "util/map.h"
#include "util/srcline.h"
#include "util/symbol.h"
#include "util/thread.h"
#include "util/trace-event.h"
#include "util/evlist.h"
#include "util/evsel.h"
#include "util/evswitch.h"
#include "util/sort.h"
#include "util/data.h"
#include "util/auxtrace.h"
@@ -27,6 +29,7 @@
#include "util/thread-stack.h"
#include "util/time-utils.h"
#include "util/path.h"
#include "ui/ui.h"
#include "print_binary.h"
#include "archinsn.h"
#include <linux/bitmap.h>
@@ -48,6 +51,10 @@
#include <fcntl.h>
#include <unistd.h>
#include <subcmd/pager.h>
#include <perf/evlist.h>
#include "util/record.h"
#include "util/util.h"
#include "perf.h"
#include <linux/ctype.h>
@@ -242,7 +249,7 @@ static struct {
},
};
struct perf_evsel_script {
struct evsel_script {
char *filename;
FILE *fp;
u64 samples;
@@ -251,15 +258,15 @@ struct perf_evsel_script {
int gnum;
};
static inline struct perf_evsel_script *evsel_script(struct perf_evsel *evsel)
static inline struct evsel_script *evsel_script(struct evsel *evsel)
{
return (struct perf_evsel_script *)evsel->priv;
return (struct evsel_script *)evsel->priv;
}
static struct perf_evsel_script *perf_evsel_script__new(struct perf_evsel *evsel,
static struct evsel_script *perf_evsel_script__new(struct evsel *evsel,
struct perf_data *data)
{
struct perf_evsel_script *es = zalloc(sizeof(*es));
struct evsel_script *es = zalloc(sizeof(*es));
if (es != NULL) {
if (asprintf(&es->filename, "%s.%s.dump", data->file.path, perf_evsel__name(evsel)) < 0)
@@ -277,7 +284,7 @@ out_free:
return NULL;
}
static void perf_evsel_script__delete(struct perf_evsel_script *es)
static void perf_evsel_script__delete(struct evsel_script *es)
{
zfree(&es->filename);
fclose(es->fp);
@@ -285,7 +292,7 @@ static void perf_evsel_script__delete(struct perf_evsel_script *es)
free(es);
}
static int perf_evsel_script__fprintf(struct perf_evsel_script *es, FILE *fp)
static int perf_evsel_script__fprintf(struct evsel_script *es, FILE *fp)
{
struct stat st;
@@ -340,12 +347,12 @@ static const char *output_field2str(enum perf_output_field field)
#define PRINT_FIELD(x) (output[output_type(attr->type)].fields & PERF_OUTPUT_##x)
static int perf_evsel__do_check_stype(struct perf_evsel *evsel,
static int perf_evsel__do_check_stype(struct evsel *evsel,
u64 sample_type, const char *sample_msg,
enum perf_output_field field,
bool allow_user_set)
{
struct perf_event_attr *attr = &evsel->attr;
struct perf_event_attr *attr = &evsel->core.attr;
int type = output_type(attr->type);
const char *evname;
@@ -372,7 +379,7 @@ static int perf_evsel__do_check_stype(struct perf_evsel *evsel,
return 0;
}
static int perf_evsel__check_stype(struct perf_evsel *evsel,
static int perf_evsel__check_stype(struct evsel *evsel,
u64 sample_type, const char *sample_msg,
enum perf_output_field field)
{
@@ -380,10 +387,10 @@ static int perf_evsel__check_stype(struct perf_evsel *evsel,
false);
}
static int perf_evsel__check_attr(struct perf_evsel *evsel,
static int perf_evsel__check_attr(struct evsel *evsel,
struct perf_session *session)
{
struct perf_event_attr *attr = &evsel->attr;
struct perf_event_attr *attr = &evsel->core.attr;
bool allow_user_set;
if (perf_header__has_feat(&session->header, HEADER_STAT))
@@ -418,7 +425,7 @@ static int perf_evsel__check_attr(struct perf_evsel *evsel,
return -EINVAL;
if (PRINT_FIELD(SYM) &&
!(evsel->attr.sample_type & (PERF_SAMPLE_IP|PERF_SAMPLE_ADDR))) {
!(evsel->core.attr.sample_type & (PERF_SAMPLE_IP|PERF_SAMPLE_ADDR))) {
pr_err("Display of symbols requested but neither sample IP nor "
"sample address\navailable. Hence, no addresses to convert "
"to symbols.\n");
@@ -430,7 +437,7 @@ static int perf_evsel__check_attr(struct perf_evsel *evsel,
return -EINVAL;
}
if (PRINT_FIELD(DSO) &&
!(evsel->attr.sample_type & (PERF_SAMPLE_IP|PERF_SAMPLE_ADDR))) {
!(evsel->core.attr.sample_type & (PERF_SAMPLE_IP|PERF_SAMPLE_ADDR))) {
pr_err("Display of DSO requested but no address to convert.\n");
return -EINVAL;
}
@@ -507,7 +514,7 @@ static void set_print_ip_opts(struct perf_event_attr *attr)
static int perf_session__check_output_opt(struct perf_session *session)
{
unsigned int j;
struct perf_evsel *evsel;
struct evsel *evsel;
for (j = 0; j < OUTPUT_TYPE_MAX; ++j) {
evsel = perf_session__find_first_evtype(session, attr_type(j));
@@ -531,7 +538,7 @@ static int perf_session__check_output_opt(struct perf_session *session)
if (evsel == NULL)
continue;
set_print_ip_opts(&evsel->attr);
set_print_ip_opts(&evsel->core.attr);
}
if (!no_callchain) {
@@ -558,7 +565,7 @@ static int perf_session__check_output_opt(struct perf_session *session)
j = PERF_TYPE_TRACEPOINT;
evlist__for_each_entry(session->evlist, evsel) {
if (evsel->attr.type != j)
if (evsel->core.attr.type != j)
continue;
if (evsel__has_callchain(evsel)) {
@@ -566,7 +573,7 @@ static int perf_session__check_output_opt(struct perf_session *session)
output[j].fields |= PERF_OUTPUT_SYM;
output[j].fields |= PERF_OUTPUT_SYMOFFSET;
output[j].fields |= PERF_OUTPUT_DSO;
set_print_ip_opts(&evsel->attr);
set_print_ip_opts(&evsel->core.attr);
goto out;
}
}
@@ -614,10 +621,10 @@ static int perf_sample__fprintf_uregs(struct perf_sample *sample,
static int perf_sample__fprintf_start(struct perf_sample *sample,
struct thread *thread,
struct perf_evsel *evsel,
struct evsel *evsel,
u32 type, FILE *fp)
{
struct perf_event_attr *attr = &evsel->attr;
struct perf_event_attr *attr = &evsel->core.attr;
unsigned long secs;
unsigned long long nsecs;
int printed = 0;
@@ -1162,13 +1169,13 @@ out:
}
static const char *resolve_branch_sym(struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct thread *thread,
struct addr_location *al,
u64 *ip)
{
struct addr_location addr_al;
struct perf_event_attr *attr = &evsel->attr;
struct perf_event_attr *attr = &evsel->core.attr;
const char *name = NULL;
if (sample->flags & (PERF_IP_FLAG_CALL | PERF_IP_FLAG_TRACE_BEGIN)) {
@@ -1191,11 +1198,11 @@ static const char *resolve_branch_sym(struct perf_sample *sample,
}
static int perf_sample__fprintf_callindent(struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct thread *thread,
struct addr_location *al, FILE *fp)
{
struct perf_event_attr *attr = &evsel->attr;
struct perf_event_attr *attr = &evsel->core.attr;
size_t depth = thread_stack__depth(thread, sample->cpu);
const char *name = NULL;
static int spacing;
@@ -1285,12 +1292,12 @@ static int perf_sample__fprintf_ipc(struct perf_sample *sample,
}
static int perf_sample__fprintf_bts(struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct thread *thread,
struct addr_location *al,
struct machine *machine, FILE *fp)
{
struct perf_event_attr *attr = &evsel->attr;
struct perf_event_attr *attr = &evsel->core.attr;
unsigned int type = output_type(attr->type);
bool print_srcline_last = false;
int printed = 0;
@@ -1322,7 +1329,7 @@ static int perf_sample__fprintf_bts(struct perf_sample *sample,
/* print branch_to information */
if (PRINT_FIELD(ADDR) ||
((evsel->attr.sample_type & PERF_SAMPLE_ADDR) &&
((evsel->core.attr.sample_type & PERF_SAMPLE_ADDR) &&
!output[type].user_set)) {
printed += fprintf(fp, " => ");
printed += perf_sample__fprintf_addr(sample, thread, attr, fp);
@@ -1593,9 +1600,9 @@ static int perf_sample__fprintf_synth_cbr(struct perf_sample *sample, FILE *fp)
}
static int perf_sample__fprintf_synth(struct perf_sample *sample,
struct perf_evsel *evsel, FILE *fp)
struct evsel *evsel, FILE *fp)
{
switch (evsel->attr.config) {
switch (evsel->core.attr.config) {
case PERF_SYNTH_INTEL_PTWRITE:
return perf_sample__fprintf_synth_ptwrite(sample, fp);
case PERF_SYNTH_INTEL_MWAIT:
@@ -1627,8 +1634,9 @@ struct perf_script {
bool show_bpf_events;
bool allocated;
bool per_event_dump;
struct cpu_map *cpus;
struct thread_map *threads;
struct evswitch evswitch;
struct perf_cpu_map *cpus;
struct perf_thread_map *threads;
int name_width;
const char *time_str;
struct perf_time_interval *ptime_range;
@@ -1636,9 +1644,9 @@ struct perf_script {
int range_num;
};
static int perf_evlist__max_name_len(struct perf_evlist *evlist)
static int perf_evlist__max_name_len(struct evlist *evlist)
{
struct perf_evsel *evsel;
struct evsel *evsel;
int max = 0;
evlist__for_each_entry(evlist, evsel) {
@@ -1670,7 +1678,7 @@ static int data_src__fprintf(u64 data_src, FILE *fp)
struct metric_ctx {
struct perf_sample *sample;
struct thread *thread;
struct perf_evsel *evsel;
struct evsel *evsel;
FILE *fp;
};
@@ -1705,7 +1713,7 @@ static void script_new_line(struct perf_stat_config *config __maybe_unused,
static void perf_sample__fprint_metric(struct perf_script *script,
struct thread *thread,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
FILE *fp)
{
@@ -1720,7 +1728,7 @@ static void perf_sample__fprint_metric(struct perf_script *script,
},
.force_header = false,
};
struct perf_evsel *ev2;
struct evsel *ev2;
u64 val;
if (!evsel->stats)
@@ -1733,7 +1741,7 @@ static void perf_sample__fprint_metric(struct perf_script *script,
sample->cpu,
&rt_stat);
evsel_script(evsel)->val = val;
if (evsel_script(evsel->leader)->gnum == evsel->leader->nr_members) {
if (evsel_script(evsel->leader)->gnum == evsel->leader->core.nr_members) {
for_each_group_member (ev2, evsel->leader) {
perf_stat__print_shadow_stats(&stat_config, ev2,
evsel_script(ev2)->val,
@@ -1747,7 +1755,7 @@ static void perf_sample__fprint_metric(struct perf_script *script,
}
static bool show_event(struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct thread *thread,
struct addr_location *al)
{
@@ -1788,14 +1796,14 @@ static bool show_event(struct perf_sample *sample,
}
static void process_event(struct perf_script *script,
struct perf_sample *sample, struct perf_evsel *evsel,
struct perf_sample *sample, struct evsel *evsel,
struct addr_location *al,
struct machine *machine)
{
struct thread *thread = al->thread;
struct perf_event_attr *attr = &evsel->attr;
struct perf_event_attr *attr = &evsel->core.attr;
unsigned int type = output_type(attr->type);
struct perf_evsel_script *es = evsel->priv;
struct evsel_script *es = evsel->priv;
FILE *fp = es->fp;
if (output[type].fields == 0)
@@ -1804,6 +1812,9 @@ static void process_event(struct perf_script *script,
if (!show_event(sample, evsel, thread, al))
return;
if (evswitch__discard(&script->evswitch, evsel))
return;
++es->samples;
perf_sample__fprintf_start(sample, thread, evsel,
@@ -1897,9 +1908,9 @@ static void process_event(struct perf_script *script,
static struct scripting_ops *scripting_ops;
static void __process_stat(struct perf_evsel *counter, u64 tstamp)
static void __process_stat(struct evsel *counter, u64 tstamp)
{
int nthreads = thread_map__nr(counter->threads);
int nthreads = perf_thread_map__nr(counter->core.threads);
int ncpus = perf_evsel__nr_cpus(counter);
int cpu, thread;
static int header_printed;
@@ -1920,8 +1931,8 @@ static void __process_stat(struct perf_evsel *counter, u64 tstamp)
counts = perf_counts(counter->counts, cpu, thread);
printf("%3d %8d %15" PRIu64 " %15" PRIu64 " %15" PRIu64 " %15" PRIu64 " %s\n",
counter->cpus->map[cpu],
thread_map__pid(counter->threads, thread),
counter->core.cpus->map[cpu],
perf_thread_map__pid(counter->core.threads, thread),
counts->val,
counts->ena,
counts->run,
@@ -1931,7 +1942,7 @@ static void __process_stat(struct perf_evsel *counter, u64 tstamp)
}
}
static void process_stat(struct perf_evsel *counter, u64 tstamp)
static void process_stat(struct evsel *counter, u64 tstamp)
{
if (scripting_ops && scripting_ops->process_stat)
scripting_ops->process_stat(&stat_config, counter, tstamp);
@@ -1973,7 +1984,7 @@ static bool filter_cpu(struct perf_sample *sample)
static int process_sample_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct perf_script *scr = container_of(tool, struct perf_script, tool);
@@ -2018,13 +2029,13 @@ out_put:
}
static int process_attr(struct perf_tool *tool, union perf_event *event,
struct perf_evlist **pevlist)
struct evlist **pevlist)
{
struct perf_script *scr = container_of(tool, struct perf_script, tool);
struct perf_evlist *evlist;
struct perf_evsel *evsel, *pos;
struct evlist *evlist;
struct evsel *evsel, *pos;
int err;
static struct perf_evsel_script *es;
static struct evsel_script *es;
err = perf_event__process_attr(tool, event, pevlist);
if (err)
@@ -2046,18 +2057,18 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
}
}
if (evsel->attr.type >= PERF_TYPE_MAX &&
evsel->attr.type != PERF_TYPE_SYNTH)
if (evsel->core.attr.type >= PERF_TYPE_MAX &&
evsel->core.attr.type != PERF_TYPE_SYNTH)
return 0;
evlist__for_each_entry(evlist, pos) {
if (pos->attr.type == evsel->attr.type && pos != evsel)
if (pos->core.attr.type == evsel->core.attr.type && pos != evsel)
return 0;
}
set_print_ip_opts(&evsel->attr);
set_print_ip_opts(&evsel->core.attr);
if (evsel->attr.sample_type)
if (evsel->core.attr.sample_type)
err = perf_evsel__check_attr(evsel, scr->session);
return err;
@@ -2071,7 +2082,7 @@ static int process_comm_event(struct perf_tool *tool,
struct thread *thread;
struct perf_script *script = container_of(tool, struct perf_script, tool);
struct perf_session *session = script->session;
struct perf_evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
struct evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
int ret = -1;
thread = machine__findnew_thread(machine, event->comm.pid, event->comm.tid);
@@ -2083,7 +2094,7 @@ static int process_comm_event(struct perf_tool *tool,
if (perf_event__process_comm(tool, event, sample, machine) < 0)
goto out;
if (!evsel->attr.sample_id_all) {
if (!evsel->core.attr.sample_id_all) {
sample->cpu = 0;
sample->time = 0;
sample->tid = event->comm.tid;
@@ -2108,7 +2119,7 @@ static int process_namespaces_event(struct perf_tool *tool,
struct thread *thread;
struct perf_script *script = container_of(tool, struct perf_script, tool);
struct perf_session *session = script->session;
struct perf_evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
struct evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
int ret = -1;
thread = machine__findnew_thread(machine, event->namespaces.pid,
@@ -2121,7 +2132,7 @@ static int process_namespaces_event(struct perf_tool *tool,
if (perf_event__process_namespaces(tool, event, sample, machine) < 0)
goto out;
if (!evsel->attr.sample_id_all) {
if (!evsel->core.attr.sample_id_all) {
sample->cpu = 0;
sample->time = 0;
sample->tid = event->namespaces.tid;
@@ -2146,7 +2157,7 @@ static int process_fork_event(struct perf_tool *tool,
struct thread *thread;
struct perf_script *script = container_of(tool, struct perf_script, tool);
struct perf_session *session = script->session;
struct perf_evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
struct evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
if (perf_event__process_fork(tool, event, sample, machine) < 0)
return -1;
@@ -2157,7 +2168,7 @@ static int process_fork_event(struct perf_tool *tool,
return -1;
}
if (!evsel->attr.sample_id_all) {
if (!evsel->core.attr.sample_id_all) {
sample->cpu = 0;
sample->time = event->fork.time;
sample->tid = event->fork.tid;
@@ -2181,7 +2192,7 @@ static int process_exit_event(struct perf_tool *tool,
struct thread *thread;
struct perf_script *script = container_of(tool, struct perf_script, tool);
struct perf_session *session = script->session;
struct perf_evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
struct evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
thread = machine__findnew_thread(machine, event->fork.pid, event->fork.tid);
if (thread == NULL) {
@@ -2189,7 +2200,7 @@ static int process_exit_event(struct perf_tool *tool,
return -1;
}
if (!evsel->attr.sample_id_all) {
if (!evsel->core.attr.sample_id_all) {
sample->cpu = 0;
sample->time = 0;
sample->tid = event->fork.tid;
@@ -2216,7 +2227,7 @@ static int process_mmap_event(struct perf_tool *tool,
struct thread *thread;
struct perf_script *script = container_of(tool, struct perf_script, tool);
struct perf_session *session = script->session;
struct perf_evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
struct evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
if (perf_event__process_mmap(tool, event, sample, machine) < 0)
return -1;
@@ -2227,7 +2238,7 @@ static int process_mmap_event(struct perf_tool *tool,
return -1;
}
if (!evsel->attr.sample_id_all) {
if (!evsel->core.attr.sample_id_all) {
sample->cpu = 0;
sample->time = 0;
sample->tid = event->mmap.tid;
@@ -2250,7 +2261,7 @@ static int process_mmap2_event(struct perf_tool *tool,
struct thread *thread;
struct perf_script *script = container_of(tool, struct perf_script, tool);
struct perf_session *session = script->session;
struct perf_evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
struct evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
if (perf_event__process_mmap2(tool, event, sample, machine) < 0)
return -1;
@@ -2261,7 +2272,7 @@ static int process_mmap2_event(struct perf_tool *tool,
return -1;
}
if (!evsel->attr.sample_id_all) {
if (!evsel->core.attr.sample_id_all) {
sample->cpu = 0;
sample->time = 0;
sample->tid = event->mmap2.tid;
@@ -2284,7 +2295,7 @@ static int process_switch_event(struct perf_tool *tool,
struct thread *thread;
struct perf_script *script = container_of(tool, struct perf_script, tool);
struct perf_session *session = script->session;
struct perf_evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
struct evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
if (perf_event__process_switch(tool, event, sample, machine) < 0)
return -1;
@@ -2319,7 +2330,7 @@ process_lost_event(struct perf_tool *tool,
{
struct perf_script *script = container_of(tool, struct perf_script, tool);
struct perf_session *session = script->session;
struct perf_evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
struct evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
struct thread *thread;
thread = machine__findnew_thread(machine, sample->pid,
@@ -2355,12 +2366,12 @@ process_bpf_events(struct perf_tool *tool __maybe_unused,
struct thread *thread;
struct perf_script *script = container_of(tool, struct perf_script, tool);
struct perf_session *session = script->session;
struct perf_evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
struct evsel *evsel = perf_evlist__id2evsel(session->evlist, sample->id);
if (machine__process_ksymbol(machine, event, sample) < 0)
return -1;
if (!evsel->attr.sample_id_all) {
if (!evsel->core.attr.sample_id_all) {
perf_event__fprintf(event, stdout);
return 0;
}
@@ -2388,8 +2399,8 @@ static void sig_handler(int sig __maybe_unused)
static void perf_script__fclose_per_event_dump(struct perf_script *script)
{
struct perf_evlist *evlist = script->session->evlist;
struct perf_evsel *evsel;
struct evlist *evlist = script->session->evlist;
struct evsel *evsel;
evlist__for_each_entry(evlist, evsel) {
if (!evsel->priv)
@@ -2401,7 +2412,7 @@ static void perf_script__fclose_per_event_dump(struct perf_script *script)
static int perf_script__fopen_per_event_dump(struct perf_script *script)
{
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(script->session->evlist, evsel) {
/*
@@ -2428,8 +2439,8 @@ out_err_fclose:
static int perf_script__setup_per_event_dump(struct perf_script *script)
{
struct perf_evsel *evsel;
static struct perf_evsel_script es_stdout;
struct evsel *evsel;
static struct evsel_script es_stdout;
if (script->per_event_dump)
return perf_script__fopen_per_event_dump(script);
@@ -2444,10 +2455,10 @@ static int perf_script__setup_per_event_dump(struct perf_script *script)
static void perf_script__exit_per_event_dump_stats(struct perf_script *script)
{
struct perf_evsel *evsel;
struct evsel *evsel;
evlist__for_each_entry(script->session->evlist, evsel) {
struct perf_evsel_script *es = evsel->priv;
struct evsel_script *es = evsel->priv;
perf_evsel_script__fprintf(es, stdout);
perf_evsel_script__delete(es);
@@ -2484,8 +2495,8 @@ static int __cmd_script(struct perf_script *script)
script->tool.finished_round = process_finished_round_event;
}
if (script->show_bpf_events) {
script->tool.ksymbol = process_bpf_events;
script->tool.bpf_event = process_bpf_events;
script->tool.ksymbol = process_bpf_events;
script->tool.bpf = process_bpf_events;
}
if (perf_script__setup_per_event_dump(script)) {
@@ -3003,7 +3014,7 @@ static int check_ev_match(char *dir_name, char *scriptname,
{
char filename[MAXPATHLEN], evname[128];
char line[BUFSIZ], *p;
struct perf_evsel *pos;
struct evsel *pos;
int match, len;
FILE *fp;
@@ -3235,8 +3246,8 @@ static void script__setup_sample_type(struct perf_script *script)
static int process_stat_round_event(struct perf_session *session,
union perf_event *event)
{
struct stat_round_event *round = &event->stat_round;
struct perf_evsel *counter;
struct perf_record_stat_round *round = &event->stat_round;
struct evsel *counter;
evlist__for_each_entry(session->evlist, counter) {
perf_stat_process_counter(&stat_config, counter);
@@ -3256,7 +3267,7 @@ static int process_stat_config_event(struct perf_session *session __maybe_unused
static int set_maps(struct perf_script *script)
{
struct perf_evlist *evlist = script->session->evlist;
struct evlist *evlist = script->session->evlist;
if (!script->cpus || !script->threads)
return 0;
@@ -3264,7 +3275,7 @@ static int set_maps(struct perf_script *script)
if (WARN_ONCE(script->allocated, "stats double allocation\n"))
return -EINVAL;
perf_evlist__set_maps(evlist, script->cpus, script->threads);
perf_evlist__set_maps(&evlist->core, script->cpus, script->threads);
if (perf_evlist__alloc_stats(evlist, true))
return -ENOMEM;
@@ -3537,6 +3548,7 @@ int cmd_script(int argc, const char **argv)
"file", "file saving guest os /proc/kallsyms"),
OPT_STRING(0, "guestmodules", &symbol_conf.default_guest_modules,
"file", "file saving guest os /proc/modules"),
OPTS_EVSWITCH(&script.evswitch),
OPT_END()
};
const char * const script_subcommands[] = { "record", "report", NULL };
@@ -3861,6 +3873,10 @@ int cmd_script(int argc, const char **argv)
script.range_num);
}
err = evswitch__init(&script.evswitch, session->evlist, stderr);
if (err)
goto out_delete;
err = __cmd_script(&script);
flush_scripting();


@@ -40,8 +40,8 @@
* Jaswinder Singh Rajput <jaswinder@kernel.org>
*/
#include "perf.h"
#include "builtin.h"
#include "perf.h"
#include "util/cgroup.h"
#include <subcmd/parse-options.h>
#include "util/parse-events.h"
@@ -54,7 +54,6 @@
#include "util/stat.h"
#include "util/header.h"
#include "util/cpumap.h"
#include "util/thread.h"
#include "util/thread_map.h"
#include "util/counts.h"
#include "util/group.h"
@@ -62,6 +61,8 @@
#include "util/tool.h"
#include "util/string2.h"
#include "util/metricgroup.h"
#include "util/target.h"
#include "util/time-utils.h"
#include "util/top.h"
#include "asm/bug.h"
@@ -83,6 +84,7 @@
#include <sys/resource.h>
#include <linux/ctype.h>
#include <perf/evlist.h>
#define DEFAULT_SEPARATOR " "
#define FREEZE_ON_SMI_PATH "devices/cpu/freeze_on_smi"
@@ -130,7 +132,7 @@ static const char *smi_cost_attrs = {
"}"
};
static struct perf_evlist *evsel_list;
static struct evlist *evsel_list;
static struct target target = {
.uid = UINT_MAX,
@@ -164,8 +166,8 @@ struct perf_stat {
u64 bytes_written;
struct perf_tool tool;
bool maps_allocated;
struct cpu_map *cpus;
struct thread_map *threads;
struct perf_cpu_map *cpus;
struct perf_thread_map *threads;
enum aggr_mode aggr_mode;
};
@@ -234,7 +236,7 @@ static int write_stat_round_event(u64 tm, u64 type)
#define SID(e, x, y) xyarray__entry(e->sample_id, x, y)
static int
perf_evsel__write_stat_event(struct perf_evsel *counter, u32 cpu, u32 thread,
perf_evsel__write_stat_event(struct evsel *counter, u32 cpu, u32 thread,
struct perf_counts_values *count)
{
struct perf_sample_id *sid = SID(counter, cpu, thread);
@@ -243,7 +245,7 @@ perf_evsel__write_stat_event(struct perf_evsel *counter, u32 cpu, u32 thread,
process_synthesized_event, NULL);
}
static int read_single_counter(struct perf_evsel *counter, int cpu,
static int read_single_counter(struct evsel *counter, int cpu,
int thread, struct timespec *rs)
{
if (counter->tool_event == PERF_TOOL_DURATION_TIME) {
@@ -261,9 +263,9 @@ static int read_single_counter(struct perf_evsel *counter, int cpu,
* Read out the results of a single counter:
* do not aggregate counts across CPUs in system-wide mode
*/
static int read_counter(struct perf_evsel *counter, struct timespec *rs)
static int read_counter(struct evsel *counter, struct timespec *rs)
{
int nthreads = thread_map__nr(evsel_list->threads);
int nthreads = perf_thread_map__nr(evsel_list->core.threads);
int ncpus, cpu, thread;
if (target__has_cpu(&target) && !target__has_per_thread(&target))
@@ -287,7 +289,7 @@ static int read_counter(struct perf_evsel *counter, struct timespec *rs)
* The leader's group read loads data into its group members
* (via perf_evsel__read_counter) and sets their count->loaded.
*/
if (!count->loaded &&
if (!perf_counts__is_loaded(counter->counts, cpu, thread) &&
read_single_counter(counter, cpu, thread, rs)) {
counter->counts->scaled = -1;
perf_counts(counter->counts, cpu, thread)->ena = 0;
@@ -295,7 +297,7 @@ static int read_counter(struct perf_evsel *counter, struct timespec *rs)
return -1;
}
count->loaded = false;
perf_counts__set_loaded(counter->counts, cpu, thread, false);
if (STAT_RECORD) {
if (perf_evsel__write_stat_event(counter, cpu, thread, count)) {
@@ -319,7 +321,7 @@ static int read_counter(struct perf_evsel *counter, struct timespec *rs)
static void read_counters(struct timespec *rs)
{
struct perf_evsel *counter;
struct evsel *counter;
int ret;
evlist__for_each_entry(evsel_list, counter) {
@@ -362,7 +364,7 @@ static void enable_counters(void)
* - we have initial delay configured
*/
if (!target__none(&target) || stat_config.initial_delay)
perf_evlist__enable(evsel_list);
evlist__enable(evsel_list);
}
static void disable_counters(void)
@@ -373,7 +375,7 @@ static void disable_counters(void)
* from counting before reading their constituent counters.
*/
if (!target__none(&target))
perf_evlist__disable(evsel_list);
evlist__disable(evsel_list);
}
static volatile int workload_exec_errno;
@@ -389,13 +391,13 @@ static void workload_exec_failed_signal(int signo __maybe_unused, siginfo_t *inf
workload_exec_errno = info->si_value.sival_int;
}
static bool perf_evsel__should_store_id(struct perf_evsel *counter)
static bool perf_evsel__should_store_id(struct evsel *counter)
{
return STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID;
return STAT_RECORD || counter->core.attr.read_format & PERF_FORMAT_ID;
}
static bool is_target_alive(struct target *_target,
struct thread_map *threads)
struct perf_thread_map *threads)
{
struct stat st;
int i;
@@ -423,7 +425,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
int timeout = stat_config.timeout;
char msg[BUFSIZ];
unsigned long long t0, t1;
struct perf_evsel *counter;
struct evsel *counter;
struct timespec ts;
size_t l;
int status = 0;
@@ -478,22 +480,22 @@ try_again:
counter->supported = false;
if ((counter->leader != counter) ||
!(counter->leader->nr_members > 1))
!(counter->leader->core.nr_members > 1))
continue;
} else if (perf_evsel__fallback(counter, errno, msg, sizeof(msg))) {
if (verbose > 0)
ui__warning("%s\n", msg);
goto try_again;
} else if (target__has_per_thread(&target) &&
evsel_list->threads &&
evsel_list->threads->err_thread != -1) {
evsel_list->core.threads &&
evsel_list->core.threads->err_thread != -1) {
/*
* For global --per-thread case, skip current
* error thread.
*/
if (!thread_map__remove(evsel_list->threads,
evsel_list->threads->err_thread)) {
evsel_list->threads->err_thread = -1;
if (!thread_map__remove(evsel_list->core.threads,
evsel_list->core.threads->err_thread)) {
evsel_list->core.threads->err_thread = -1;
goto try_again;
}
}
@@ -579,7 +581,7 @@ try_again:
enable_counters();
while (!done) {
nanosleep(&ts, NULL);
if (!is_target_alive(&target, evsel_list->threads))
if (!is_target_alive(&target, evsel_list->core.threads))
break;
if (timeout)
break;
@@ -613,7 +615,7 @@ try_again:
* later the evsel_list will be closed after.
*/
if (!STAT_RECORD)
perf_evlist__close(evsel_list);
evlist__close(evsel_list);
return WEXITSTATUS(status);
}
@@ -803,24 +805,24 @@ static struct option stat_options[] = {
};
static int perf_stat__get_socket(struct perf_stat_config *config __maybe_unused,
struct cpu_map *map, int cpu)
struct perf_cpu_map *map, int cpu)
{
return cpu_map__get_socket(map, cpu, NULL);
}
static int perf_stat__get_die(struct perf_stat_config *config __maybe_unused,
struct cpu_map *map, int cpu)
struct perf_cpu_map *map, int cpu)
{
return cpu_map__get_die(map, cpu, NULL);
}
static int perf_stat__get_core(struct perf_stat_config *config __maybe_unused,
struct cpu_map *map, int cpu)
struct perf_cpu_map *map, int cpu)
{
return cpu_map__get_core(map, cpu, NULL);
}
static int cpu_map__get_max(struct cpu_map *map)
static int cpu_map__get_max(struct perf_cpu_map *map)
{
int i, max = -1;
@@ -833,7 +835,7 @@ static int cpu_map__get_max(struct cpu_map *map)
}
static int perf_stat__get_aggr(struct perf_stat_config *config,
aggr_get_id_t get_id, struct cpu_map *map, int idx)
aggr_get_id_t get_id, struct perf_cpu_map *map, int idx)
{
int cpu;
@@ -849,26 +851,26 @@ static int perf_stat__get_aggr(struct perf_stat_config *config,
}
static int perf_stat__get_socket_cached(struct perf_stat_config *config,
struct cpu_map *map, int idx)
struct perf_cpu_map *map, int idx)
{
return perf_stat__get_aggr(config, perf_stat__get_socket, map, idx);
}
static int perf_stat__get_die_cached(struct perf_stat_config *config,
struct cpu_map *map, int idx)
struct perf_cpu_map *map, int idx)
{
return perf_stat__get_aggr(config, perf_stat__get_die, map, idx);
}
static int perf_stat__get_core_cached(struct perf_stat_config *config,
struct cpu_map *map, int idx)
struct perf_cpu_map *map, int idx)
{
return perf_stat__get_aggr(config, perf_stat__get_core, map, idx);
}
static bool term_percore_set(void)
{
struct perf_evsel *counter;
struct evsel *counter;
evlist__for_each_entry(evsel_list, counter) {
if (counter->percore)
@@ -884,21 +886,21 @@ static int perf_stat_init_aggr_mode(void)
switch (stat_config.aggr_mode) {
case AGGR_SOCKET:
if (cpu_map__build_socket_map(evsel_list->cpus, &stat_config.aggr_map)) {
if (cpu_map__build_socket_map(evsel_list->core.cpus, &stat_config.aggr_map)) {
perror("cannot build socket map");
return -1;
}
stat_config.aggr_get_id = perf_stat__get_socket_cached;
break;
case AGGR_DIE:
if (cpu_map__build_die_map(evsel_list->cpus, &stat_config.aggr_map)) {
if (cpu_map__build_die_map(evsel_list->core.cpus, &stat_config.aggr_map)) {
perror("cannot build die map");
return -1;
}
stat_config.aggr_get_id = perf_stat__get_die_cached;
break;
case AGGR_CORE:
if (cpu_map__build_core_map(evsel_list->cpus, &stat_config.aggr_map)) {
if (cpu_map__build_core_map(evsel_list->core.cpus, &stat_config.aggr_map)) {
perror("cannot build core map");
return -1;
}
@@ -906,7 +908,7 @@ static int perf_stat_init_aggr_mode(void)
break;
case AGGR_NONE:
if (term_percore_set()) {
if (cpu_map__build_core_map(evsel_list->cpus,
if (cpu_map__build_core_map(evsel_list->core.cpus,
&stat_config.aggr_map)) {
perror("cannot build core map");
return -1;
@@ -926,20 +928,20 @@ static int perf_stat_init_aggr_mode(void)
* taking the highest cpu number to be the size of
* the aggregation translate cpumap.
*/
nr = cpu_map__get_max(evsel_list->cpus);
stat_config.cpus_aggr_map = cpu_map__empty_new(nr + 1);
nr = cpu_map__get_max(evsel_list->core.cpus);
stat_config.cpus_aggr_map = perf_cpu_map__empty_new(nr + 1);
return stat_config.cpus_aggr_map ? 0 : -ENOMEM;
}
static void perf_stat__exit_aggr_mode(void)
{
cpu_map__put(stat_config.aggr_map);
cpu_map__put(stat_config.cpus_aggr_map);
perf_cpu_map__put(stat_config.aggr_map);
perf_cpu_map__put(stat_config.cpus_aggr_map);
stat_config.aggr_map = NULL;
stat_config.cpus_aggr_map = NULL;
}
static inline int perf_env__get_cpu(struct perf_env *env, struct cpu_map *map, int idx)
static inline int perf_env__get_cpu(struct perf_env *env, struct perf_cpu_map *map, int idx)
{
int cpu;
@@ -954,7 +956,7 @@ static inline int perf_env__get_cpu(struct perf_env *env, struct cpu_map *map, i
return cpu;
}
static int perf_env__get_socket(struct cpu_map *map, int idx, void *data)
static int perf_env__get_socket(struct perf_cpu_map *map, int idx, void *data)
{
struct perf_env *env = data;
int cpu = perf_env__get_cpu(env, map, idx);
@@ -962,7 +964,7 @@ static int perf_env__get_socket(struct cpu_map *map, int idx, void *data)
return cpu == -1 ? -1 : env->cpu[cpu].socket_id;
}
static int perf_env__get_die(struct cpu_map *map, int idx, void *data)
static int perf_env__get_die(struct perf_cpu_map *map, int idx, void *data)
{
struct perf_env *env = data;
int die_id = -1, cpu = perf_env__get_cpu(env, map, idx);
@@ -986,7 +988,7 @@ static int perf_env__get_die(struct cpu_map *map, int idx, void *data)
return die_id;
}
static int perf_env__get_core(struct cpu_map *map, int idx, void *data)
static int perf_env__get_core(struct perf_cpu_map *map, int idx, void *data)
{
struct perf_env *env = data;
int core = -1, cpu = perf_env__get_cpu(env, map, idx);
@@ -1016,37 +1018,37 @@ static int perf_env__get_core(struct cpu_map *map, int idx, void *data)
return core;
}
static int perf_env__build_socket_map(struct perf_env *env, struct cpu_map *cpus,
struct cpu_map **sockp)
static int perf_env__build_socket_map(struct perf_env *env, struct perf_cpu_map *cpus,
struct perf_cpu_map **sockp)
{
return cpu_map__build_map(cpus, sockp, perf_env__get_socket, env);
}
static int perf_env__build_die_map(struct perf_env *env, struct cpu_map *cpus,
struct cpu_map **diep)
static int perf_env__build_die_map(struct perf_env *env, struct perf_cpu_map *cpus,
struct perf_cpu_map **diep)
{
return cpu_map__build_map(cpus, diep, perf_env__get_die, env);
}
static int perf_env__build_core_map(struct perf_env *env, struct cpu_map *cpus,
struct cpu_map **corep)
static int perf_env__build_core_map(struct perf_env *env, struct perf_cpu_map *cpus,
struct perf_cpu_map **corep)
{
return cpu_map__build_map(cpus, corep, perf_env__get_core, env);
}
static int perf_stat__get_socket_file(struct perf_stat_config *config __maybe_unused,
struct cpu_map *map, int idx)
struct perf_cpu_map *map, int idx)
{
return perf_env__get_socket(map, idx, &perf_stat.session->header.env);
}
static int perf_stat__get_die_file(struct perf_stat_config *config __maybe_unused,
struct cpu_map *map, int idx)
struct perf_cpu_map *map, int idx)
{
return perf_env__get_die(map, idx, &perf_stat.session->header.env);
}
static int perf_stat__get_core_file(struct perf_stat_config *config __maybe_unused,
struct cpu_map *map, int idx)
struct perf_cpu_map *map, int idx)
{
return perf_env__get_core(map, idx, &perf_stat.session->header.env);
}
@@ -1057,21 +1059,21 @@ static int perf_stat_init_aggr_mode_file(struct perf_stat *st)
switch (stat_config.aggr_mode) {
case AGGR_SOCKET:
if (perf_env__build_socket_map(env, evsel_list->cpus, &stat_config.aggr_map)) {
if (perf_env__build_socket_map(env, evsel_list->core.cpus, &stat_config.aggr_map)) {
perror("cannot build socket map");
return -1;
}
stat_config.aggr_get_id = perf_stat__get_socket_file;
break;
case AGGR_DIE:
if (perf_env__build_die_map(env, evsel_list->cpus, &stat_config.aggr_map)) {
if (perf_env__build_die_map(env, evsel_list->core.cpus, &stat_config.aggr_map)) {
perror("cannot build die map");
return -1;
}
stat_config.aggr_get_id = perf_stat__get_die_file;
break;
case AGGR_CORE:
if (perf_env__build_core_map(env, evsel_list->cpus, &stat_config.aggr_map)) {
if (perf_env__build_core_map(env, evsel_list->core.cpus, &stat_config.aggr_map)) {
perror("cannot build core map");
return -1;
}
@@ -1366,7 +1368,7 @@ static int add_default_attributes(void)
free(str);
}
if (!evsel_list->nr_entries) {
if (!evsel_list->core.nr_entries) {
if (target__has_cpu(&target))
default_attrs0[0].config = PERF_COUNT_SW_CPU_CLOCK;
@@ -1461,8 +1463,8 @@ static int __cmd_record(int argc, const char **argv)
static int process_stat_round_event(struct perf_session *session,
union perf_event *event)
{
struct stat_round_event *stat_round = &event->stat_round;
struct perf_evsel *counter;
struct perf_record_stat_round *stat_round = &event->stat_round;
struct evsel *counter;
struct timespec tsh, *ts = NULL;
const char **argv = session->header.env.cmdline_argv;
int argc = session->header.env.nr_cmdline;
@@ -1492,7 +1494,7 @@ int process_stat_config_event(struct perf_session *session,
perf_event__read_stat_config(&stat_config, &event->stat_config);
if (cpu_map__empty(st->cpus)) {
if (perf_cpu_map__empty(st->cpus)) {
if (st->aggr_mode != AGGR_UNSET)
pr_warning("warning: processing task data, aggregation mode not set\n");
return 0;
@@ -1517,7 +1519,7 @@ static int set_maps(struct perf_stat *st)
if (WARN_ONCE(st->maps_allocated, "stats double allocation\n"))
return -EINVAL;
perf_evlist__set_maps(evsel_list, st->cpus, st->threads);
perf_evlist__set_maps(&evsel_list->core, st->cpus, st->threads);
if (perf_evlist__alloc_stats(evsel_list, true))
return -ENOMEM;
@@ -1551,7 +1553,7 @@ int process_cpu_map_event(struct perf_session *session,
{
struct perf_tool *tool = session->tool;
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
struct cpu_map *cpus;
struct perf_cpu_map *cpus;
if (st->cpus) {
pr_warning("Extra cpu map event, ignoring.\n");
@@ -1676,14 +1678,14 @@ static void setup_system_wide(int forks)
if (!forks)
target.system_wide = true;
else {
struct perf_evsel *counter;
struct evsel *counter;
evlist__for_each_entry(evsel_list, counter) {
if (!counter->system_wide)
return;
}
if (evsel_list->nr_entries)
if (evsel_list->core.nr_entries)
target.system_wide = true;
}
}
@@ -1702,7 +1704,7 @@ int cmd_stat(int argc, const char **argv)
setlocale(LC_ALL, "");
evsel_list = perf_evlist__new();
evsel_list = evlist__new();
if (evsel_list == NULL)
return -ENOMEM;
@@ -1889,10 +1891,10 @@ int cmd_stat(int argc, const char **argv)
* so we could print it out on output.
*/
if (stat_config.aggr_mode == AGGR_THREAD) {
thread_map__read_comms(evsel_list->threads);
thread_map__read_comms(evsel_list->core.threads);
if (target.system_wide) {
if (runtime_stat_new(&stat_config,
thread_map__nr(evsel_list->threads))) {
perf_thread_map__nr(evsel_list->core.threads))) {
goto out;
}
}
@@ -2003,7 +2005,7 @@ int cmd_stat(int argc, const char **argv)
perf_session__write_header(perf_stat.session, evsel_list, fd, true);
}
perf_evlist__close(evsel_list);
evlist__close(evsel_list);
perf_session__delete(perf_stat.session);
}
@@ -2015,7 +2017,7 @@ out:
if (smi_cost && smi_reset)
sysfs__write_int(FREEZE_ON_SMI_PATH, 0);
perf_evlist__delete(evsel_list);
evlist__delete(evsel_list);
runtime_stat_delete(&stat_config);

View File

@@ -10,13 +10,11 @@
#include <errno.h>
#include <inttypes.h>
#include <traceevent/event-parse.h>
#include "builtin.h"
#include "util/color.h"
#include <linux/list.h>
#include "util/cache.h"
#include "util/evlist.h"
#include "util/evlist.h" // for struct evsel_str_handler
#include "util/evsel.h"
#include <linux/kernel.h>
#include <linux/rbtree.h>
@@ -28,6 +26,7 @@
#include "perf.h"
#include "util/header.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include "util/parse-events.h"
#include "util/event.h"
@@ -545,19 +544,19 @@ exit:
}
typedef int (*tracepoint_handler)(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
const char *backtrace);
static int process_sample_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel,
struct evsel *evsel,
struct machine *machine)
{
struct timechart *tchart = container_of(tool, struct timechart, tool);
if (evsel->attr.sample_type & PERF_SAMPLE_TIME) {
if (evsel->core.attr.sample_type & PERF_SAMPLE_TIME) {
if (!tchart->first_time || tchart->first_time > sample->time)
tchart->first_time = sample->time;
if (tchart->last_time < sample->time)
@@ -575,7 +574,7 @@ static int process_sample_event(struct perf_tool *tool,
static int
process_sample_cpu_idle(struct timechart *tchart __maybe_unused,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
const char *backtrace __maybe_unused)
{
@@ -591,7 +590,7 @@ process_sample_cpu_idle(struct timechart *tchart __maybe_unused,
static int
process_sample_cpu_frequency(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
const char *backtrace __maybe_unused)
{
@@ -604,7 +603,7 @@ process_sample_cpu_frequency(struct timechart *tchart,
static int
process_sample_sched_wakeup(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
const char *backtrace)
{
@@ -618,7 +617,7 @@ process_sample_sched_wakeup(struct timechart *tchart,
static int
process_sample_sched_switch(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
const char *backtrace)
{
@@ -634,7 +633,7 @@ process_sample_sched_switch(struct timechart *tchart,
#ifdef SUPPORT_OLD_POWER_EVENTS
static int
process_sample_power_start(struct timechart *tchart __maybe_unused,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
const char *backtrace __maybe_unused)
{
@@ -647,7 +646,7 @@ process_sample_power_start(struct timechart *tchart __maybe_unused,
static int
process_sample_power_end(struct timechart *tchart,
struct perf_evsel *evsel __maybe_unused,
struct evsel *evsel __maybe_unused,
struct perf_sample *sample,
const char *backtrace __maybe_unused)
{
@@ -657,7 +656,7 @@ process_sample_power_end(struct timechart *tchart,
static int
process_sample_power_frequency(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
const char *backtrace __maybe_unused)
{
@@ -840,7 +839,7 @@ static int pid_end_io_sample(struct timechart *tchart, int pid, int type,
static int
process_enter_read(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long fd = perf_evsel__intval(evsel, sample, "fd");
@@ -850,7 +849,7 @@ process_enter_read(struct timechart *tchart,
static int
process_exit_read(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long ret = perf_evsel__intval(evsel, sample, "ret");
@@ -860,7 +859,7 @@ process_exit_read(struct timechart *tchart,
static int
process_enter_write(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long fd = perf_evsel__intval(evsel, sample, "fd");
@@ -870,7 +869,7 @@ process_enter_write(struct timechart *tchart,
static int
process_exit_write(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long ret = perf_evsel__intval(evsel, sample, "ret");
@@ -880,7 +879,7 @@ process_exit_write(struct timechart *tchart,
static int
process_enter_sync(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long fd = perf_evsel__intval(evsel, sample, "fd");
@@ -890,7 +889,7 @@ process_enter_sync(struct timechart *tchart,
static int
process_exit_sync(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long ret = perf_evsel__intval(evsel, sample, "ret");
@@ -900,7 +899,7 @@ process_exit_sync(struct timechart *tchart,
static int
process_enter_tx(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long fd = perf_evsel__intval(evsel, sample, "fd");
@@ -910,7 +909,7 @@ process_enter_tx(struct timechart *tchart,
static int
process_exit_tx(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long ret = perf_evsel__intval(evsel, sample, "ret");
@@ -920,7 +919,7 @@ process_exit_tx(struct timechart *tchart,
static int
process_enter_rx(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long fd = perf_evsel__intval(evsel, sample, "fd");
@@ -930,7 +929,7 @@ process_enter_rx(struct timechart *tchart,
static int
process_exit_rx(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long ret = perf_evsel__intval(evsel, sample, "ret");
@@ -940,7 +939,7 @@ process_exit_rx(struct timechart *tchart,
static int
process_enter_poll(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long fd = perf_evsel__intval(evsel, sample, "fd");
@@ -950,7 +949,7 @@ process_enter_poll(struct timechart *tchart,
static int
process_exit_poll(struct timechart *tchart,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample)
{
long ret = perf_evsel__intval(evsel, sample, "ret");
@@ -1518,10 +1517,7 @@ static int process_header(struct perf_file_section *section __maybe_unused,
if (!tchart->topology)
break;
if (svg_build_topology_map(ph->env.sibling_cores,
ph->env.nr_sibling_cores,
ph->env.sibling_threads,
ph->env.nr_sibling_threads))
if (svg_build_topology_map(&ph->env))
fprintf(stderr, "problem building topology\n");
break;
@@ -1534,7 +1530,7 @@ static int process_header(struct perf_file_section *section __maybe_unused,
static int __cmd_timechart(struct timechart *tchart, const char *output_name)
{
const struct perf_evsel_str_handler power_tracepoints[] = {
const struct evsel_str_handler power_tracepoints[] = {
{ "power:cpu_idle", process_sample_cpu_idle },
{ "power:cpu_frequency", process_sample_cpu_frequency },
{ "sched:sched_wakeup", process_sample_sched_wakeup },

View File

@@ -24,6 +24,7 @@
#include "util/bpf-event.h"
#include "util/config.h"
#include "util/color.h"
#include "util/dso.h"
#include "util/evlist.h"
#include "util/evsel.h"
#include "util/event.h"
@@ -31,20 +32,20 @@
#include "util/map.h"
#include "util/session.h"
#include "util/symbol.h"
#include "util/thread.h"
#include "util/thread_map.h"
#include "util/top.h"
#include "util/util.h"
#include <linux/rbtree.h>
#include <subcmd/parse-options.h>
#include "util/parse-events.h"
#include "util/callchain.h"
#include "util/cpumap.h"
#include "util/xyarray.h"
#include "util/sort.h"
#include "util/string2.h"
#include "util/term.h"
#include "util/intlist.h"
#include "util/parse-branch-options.h"
#include "arch/common.h"
#include "ui/ui.h"
#include "util/debug.h"
#include "util/ordered-events.h"
@@ -101,7 +102,7 @@ static void perf_top__resize(struct perf_top *top)
static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
{
struct perf_evsel *evsel;
struct evsel *evsel;
struct symbol *sym;
struct annotation *notes;
struct map *map;
@@ -129,7 +130,7 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
notes = symbol__annotation(sym);
pthread_mutex_lock(&notes->lock);
if (!symbol__hists(sym, top->evlist->nr_entries)) {
if (!symbol__hists(sym, top->evlist->core.nr_entries)) {
pthread_mutex_unlock(&notes->lock);
pr_err("Not enough memory for annotating '%s' symbol!\n",
sym->name);
@@ -186,7 +187,7 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
static void perf_top__record_precise_ip(struct perf_top *top,
struct hist_entry *he,
struct perf_sample *sample,
struct perf_evsel *evsel, u64 ip)
struct evsel *evsel, u64 ip)
{
struct annotation *notes;
struct symbol *sym = he->ms.sym;
@@ -228,7 +229,7 @@ static void perf_top__record_precise_ip(struct perf_top *top,
static void perf_top__show_details(struct perf_top *top)
{
struct hist_entry *he = top->sym_filter_entry;
struct perf_evsel *evsel;
struct evsel *evsel;
struct annotation *notes;
struct symbol *symbol;
int more;
@@ -265,12 +266,52 @@ out_unlock:
pthread_mutex_unlock(&notes->lock);
}
static void perf_top__resort_hists(struct perf_top *t)
{
struct evlist *evlist = t->evlist;
struct evsel *pos;
evlist__for_each_entry(evlist, pos) {
struct hists *hists = evsel__hists(pos);
/*
* unlink existing entries so that they can be linked
* in a correct order in hists__match() below.
*/
hists__unlink(hists);
if (evlist->enabled) {
if (t->zero) {
hists__delete_entries(hists);
} else {
hists__decay_entries(hists, t->hide_user_symbols,
t->hide_kernel_symbols);
}
}
hists__collapse_resort(hists, NULL);
/* Non-group events are considered as leader */
if (symbol_conf.event_group &&
!perf_evsel__is_group_leader(pos)) {
struct hists *leader_hists = evsel__hists(pos->leader);
hists__match(leader_hists, hists);
hists__link(leader_hists, hists);
}
}
evlist__for_each_entry(evlist, pos) {
perf_evsel__output_resort(pos, NULL);
}
}
static void perf_top__print_sym_table(struct perf_top *top)
{
char bf[160];
int printed = 0;
const int win_width = top->winsize.ws_col - 1;
struct perf_evsel *evsel = top->sym_evsel;
struct evsel *evsel = top->sym_evsel;
struct hists *hists = evsel__hists(evsel);
puts(CONSOLE_CLEAR);
@@ -296,17 +337,7 @@ static void perf_top__print_sym_table(struct perf_top *top)
return;
}
if (top->evlist->enabled) {
if (top->zero) {
hists__delete_entries(hists);
} else {
hists__decay_entries(hists, top->hide_user_symbols,
top->hide_kernel_symbols);
}
}
hists__collapse_resort(hists, NULL);
perf_evsel__output_resort(evsel, NULL);
perf_top__resort_hists(top);
hists__output_recalc_col_len(hists, top->print_entries - printed);
putchar('\n');
@@ -404,7 +435,7 @@ static void perf_top__print_mapped_keys(struct perf_top *top)
fprintf(stdout, "\t[d] display refresh delay. \t(%d)\n", top->delay_secs);
fprintf(stdout, "\t[e] display entries (lines). \t(%d)\n", top->print_entries);
if (top->evlist->nr_entries > 1)
if (top->evlist->core.nr_entries > 1)
fprintf(stdout, "\t[E] active event counter. \t(%s)\n", perf_evsel__name(top->sym_evsel));
fprintf(stdout, "\t[f] profile display filter (count). \t(%d)\n", top->count_filter);
@@ -439,7 +470,7 @@ static int perf_top__key_mapped(struct perf_top *top, int c)
case 'S':
return 1;
case 'E':
return top->evlist->nr_entries > 1 ? 1 : 0;
return top->evlist->core.nr_entries > 1 ? 1 : 0;
default:
break;
}
@@ -485,7 +516,7 @@ static bool perf_top__handle_keypress(struct perf_top *top, int c)
}
break;
case 'E':
if (top->evlist->nr_entries > 1) {
if (top->evlist->core.nr_entries > 1) {
/* Select 0 as the default event: */
int counter = 0;
@@ -496,7 +527,7 @@ static bool perf_top__handle_keypress(struct perf_top *top, int c)
prompt_integer(&counter, "Enter details event counter");
if (counter >= top->evlist->nr_entries) {
if (counter >= top->evlist->core.nr_entries) {
top->sym_evsel = perf_evlist__first(top->evlist);
fprintf(stderr, "Sorry, no such event, using %s.\n", perf_evsel__name(top->sym_evsel));
sleep(1);
@@ -554,25 +585,11 @@ static bool perf_top__handle_keypress(struct perf_top *top, int c)
static void perf_top__sort_new_samples(void *arg)
{
struct perf_top *t = arg;
struct perf_evsel *evsel = t->sym_evsel;
struct hists *hists;
if (t->evlist->selected != NULL)
t->sym_evsel = t->evlist->selected;
hists = evsel__hists(evsel);
if (t->evlist->enabled) {
if (t->zero) {
hists__delete_entries(hists);
} else {
hists__decay_entries(hists, t->hide_user_symbols,
t->hide_kernel_symbols);
}
}
hists__collapse_resort(hists, NULL);
perf_evsel__output_resort(evsel, NULL);
perf_top__resort_hists(t);
if (t->lost || t->drop)
pr_warning("Too slow to read ring buffer (change period (-c/-F) or limit CPUs (-C)\n");
@@ -586,7 +603,7 @@ static void stop_top(void)
static void *display_thread_tui(void *arg)
{
struct perf_evsel *pos;
struct evsel *pos;
struct perf_top *top = arg;
const char *help = "For a higher level overview, try: perf top --sort comm,dso";
struct hist_browser_timer hbt = {
@@ -602,6 +619,8 @@ static void *display_thread_tui(void *arg)
*/
unshare(CLONE_FS);
prctl(PR_SET_NAME, "perf-top-UI", 0, 0, 0);
perf_top__sort_new_samples(top);
/*
@@ -652,6 +671,8 @@ static void *display_thread(void *arg)
*/
unshare(CLONE_FS);
prctl(PR_SET_NAME, "perf-top-UI", 0, 0, 0);
display_setup_sig();
pthread__unblock_sigwinch();
repeat:
@@ -693,7 +714,7 @@ static int hist_iter__top_callback(struct hist_entry_iter *iter,
{
struct perf_top *top = arg;
struct hist_entry *he = iter->he;
struct perf_evsel *evsel = iter->evsel;
struct evsel *evsel = iter->evsel;
if (perf_hpp_list.sym && single)
perf_top__record_precise_ip(top, he, iter->sample, evsel, al->addr);
@@ -705,7 +726,7 @@ static int hist_iter__top_callback(struct hist_entry_iter *iter,
static void perf_event__process_sample(struct perf_tool *tool,
const union perf_event *event,
struct perf_evsel *evsel,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
{
@@ -745,7 +766,7 @@ static void perf_event__process_sample(struct perf_tool *tool,
if (!perf_evlist__exclude_kernel(top->session->evlist)) {
ui__warning(
"Kernel address maps (/proc/{kallsyms,modules}) are restricted.\n\n"
"Check /proc/sys/kernel/kptr_restrict.\n\n"
"Check /proc/sys/kernel/kptr_restrict and /proc/sys/kernel/perf_event_paranoid.\n\n"
"Kernel%s samples will not be resolved.\n",
al.map && map__has_symbols(al.map) ?
" modules" : "");
@@ -813,7 +834,7 @@ static void perf_event__process_sample(struct perf_tool *tool,
static void
perf_top__process_lost(struct perf_top *top, union perf_event *event,
struct perf_evsel *evsel)
struct evsel *evsel)
{
struct hists *hists = evsel__hists(evsel);
@@ -825,7 +846,7 @@ perf_top__process_lost(struct perf_top *top, union perf_event *event,
static void
perf_top__process_lost_samples(struct perf_top *top,
union perf_event *event,
struct perf_evsel *evsel)
struct evsel *evsel)
{
struct hists *hists = evsel__hists(evsel);
@@ -839,7 +860,7 @@ static u64 last_timestamp;
static void perf_top__mmap_read_idx(struct perf_top *top, int idx)
{
struct record_opts *opts = &top->record_opts;
struct perf_evlist *evlist = top->evlist;
struct evlist *evlist = top->evlist;
struct perf_mmap *md;
union perf_event *event;
@@ -874,7 +895,7 @@ static void perf_top__mmap_read_idx(struct perf_top *top, int idx)
static void perf_top__mmap_read(struct perf_top *top)
{
bool overwrite = top->record_opts.overwrite;
struct perf_evlist *evlist = top->evlist;
struct evlist *evlist = top->evlist;
int i;
if (overwrite)
@@ -909,10 +930,10 @@ static void perf_top__mmap_read(struct perf_top *top)
static int perf_top__overwrite_check(struct perf_top *top)
{
struct record_opts *opts = &top->record_opts;
struct perf_evlist *evlist = top->evlist;
struct evlist *evlist = top->evlist;
struct perf_evsel_config_term *term;
struct list_head *config_terms;
struct perf_evsel *evsel;
struct evsel *evsel;
int set, overwrite = -1;
evlist__for_each_entry(evlist, evsel) {
@@ -952,11 +973,11 @@ static int perf_top__overwrite_check(struct perf_top *top)
}
static int perf_top_overwrite_fallback(struct perf_top *top,
struct perf_evsel *evsel)
struct evsel *evsel)
{
struct record_opts *opts = &top->record_opts;
struct perf_evlist *evlist = top->evlist;
struct perf_evsel *counter;
struct evlist *evlist = top->evlist;
struct evsel *counter;
if (!opts->overwrite)
return 0;
@@ -966,7 +987,7 @@ static int perf_top_overwrite_fallback(struct perf_top *top,
return 0;
evlist__for_each_entry(evlist, counter)
counter->attr.write_backward = false;
counter->core.attr.write_backward = false;
opts->overwrite = false;
pr_debug2("fall back to non-overwrite mode\n");
return 1;
@@ -975,8 +996,8 @@ static int perf_top_overwrite_fallback(struct perf_top *top,
static int perf_top__start_counters(struct perf_top *top)
{
char msg[BUFSIZ];
struct perf_evsel *counter;
struct perf_evlist *evlist = top->evlist;
struct evsel *counter;
struct evlist *evlist = top->evlist;
struct record_opts *opts = &top->record_opts;
if (perf_top__overwrite_check(top)) {
@@ -989,8 +1010,8 @@ static int perf_top__start_counters(struct perf_top *top)
evlist__for_each_entry(evlist, counter) {
try_again:
if (perf_evsel__open(counter, top->evlist->cpus,
top->evlist->threads) < 0) {
if (evsel__open(counter, top->evlist->core.cpus,
top->evlist->core.threads) < 0) {
/*
* Specially handle overwrite fall back.
@@ -1100,11 +1121,11 @@ static int deliver_event(struct ordered_events *qe,
struct ordered_event *qevent)
{
struct perf_top *top = qe->data;
struct perf_evlist *evlist = top->evlist;
struct evlist *evlist = top->evlist;
struct perf_session *session = top->session;
union perf_event *event = qevent->event;
struct perf_sample sample;
struct perf_evsel *evsel;
struct evsel *evsel;
struct machine *machine;
int ret = -1;
@@ -1123,8 +1144,11 @@ static int deliver_event(struct ordered_events *qe,
evsel = perf_evlist__id2evsel(session->evlist, sample.id);
assert(evsel != NULL);
if (event->header.type == PERF_RECORD_SAMPLE)
if (event->header.type == PERF_RECORD_SAMPLE) {
if (evswitch__discard(&top->evswitch, evsel))
return 0;
++top->samples;
}
switch (sample.cpumode) {
case PERF_RECORD_MISC_USER:
@@ -1222,7 +1246,7 @@ static int __cmd_top(struct perf_top *top)
pr_debug("Couldn't synthesize BPF events: Pre-existing BPF programs won't have symbols resolved.\n");
machine__synthesize_threads(&top->session->machines.host, &opts->target,
top->evlist->threads, false,
top->evlist->core.threads, false,
top->nr_threads_synthesize);
if (top->nr_threads_synthesize > 1)
@@ -1255,7 +1279,7 @@ static int __cmd_top(struct perf_top *top)
* so leave the check here.
*/
if (!target__none(&opts->target))
perf_evlist__enable(top->evlist);
evlist__enable(top->evlist);
ret = -1;
if (pthread_create(&thread_process, NULL, process_thread, top)) {
@@ -1509,9 +1533,10 @@ int cmd_top(int argc, const char **argv)
"number of thread to run event synthesize"),
OPT_BOOLEAN(0, "namespaces", &opts->record_namespaces,
"Record namespaces events"),
OPTS_EVSWITCH(&top.evswitch),
OPT_END()
};
struct perf_evlist *sb_evlist = NULL;
struct evlist *sb_evlist = NULL;
const char * const top_usage[] = {
"perf top [<options>]",
NULL
@@ -1524,7 +1549,7 @@ int cmd_top(int argc, const char **argv)
top.annotation_opts.min_pcnt = 5;
top.annotation_opts.context = 4;
top.evlist = perf_evlist__new();
top.evlist = evlist__new();
if (top.evlist == NULL)
return -ENOMEM;
@@ -1536,12 +1561,16 @@ int cmd_top(int argc, const char **argv)
if (argc)
usage_with_options(top_usage, options);
if (!top.evlist->nr_entries &&
if (!top.evlist->core.nr_entries &&
perf_evlist__add_default(top.evlist) < 0) {
pr_err("Not enough memory for event selector list\n");
goto out_delete_evlist;
}
status = evswitch__init(&top.evswitch, top.evlist, stderr);
if (status)
goto out_delete_evlist;
if (symbol_conf.report_hierarchy) {
/* disable incompatible options */
symbol_conf.event_group = false;
@@ -1661,7 +1690,7 @@ int cmd_top(int argc, const char **argv)
perf_evlist__stop_sb_thread(sb_evlist);
out_delete_evlist:
perf_evlist__delete(top.evlist);
evlist__delete(top.evlist);
perf_session__delete(top.session);
return status;

File diff suppressed because it is too large

View File

@@ -2,8 +2,8 @@
#include "builtin.h"
#include "perf.h"
#include "color.h"
#include <linux/compiler.h>
#include <tools/config.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <subcmd/parse-options.h>

View File

@@ -2,8 +2,6 @@
#ifndef BUILTIN_H
#define BUILTIN_H
#include "util/util.h"
extern const char perf_usage_string[];
extern const char perf_more_info_string[];

View File

@@ -1,7 +1,8 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
HEADERS='
FILES='
include/uapi/linux/const.h
include/uapi/drm/drm.h
include/uapi/drm/i915_drm.h
include/uapi/linux/fadvise.h
@@ -19,12 +20,16 @@ include/uapi/linux/usbdevice_fs.h
include/uapi/linux/vhost.h
include/uapi/sound/asound.h
include/linux/bits.h
include/linux/const.h
include/linux/hash.h
include/uapi/linux/hw_breakpoint.h
arch/x86/include/asm/disabled-features.h
arch/x86/include/asm/required-features.h
arch/x86/include/asm/cpufeatures.h
arch/x86/include/asm/inat_types.h
arch/x86/include/uapi/asm/prctl.h
arch/x86/lib/x86-opcode-map.txt
arch/x86/tools/gen-insn-attr-x86.awk
arch/arm/include/uapi/asm/perf_regs.h
arch/arm64/include/uapi/asm/perf_regs.h
arch/powerpc/include/uapi/asm/perf_regs.h
@@ -96,7 +101,7 @@ test -d ../../include || exit 0
cd ../..
# simple diff check
for i in $HEADERS; do
for i in $FILES; do
check $i -B
done
@@ -107,6 +112,10 @@ check include/uapi/asm-generic/mman.h '-I "^#include <\(uapi/\)*asm-generic/mman
check include/uapi/linux/mman.h '-I "^#include <\(uapi/\)*asm/mman.h>"'
check include/linux/ctype.h '-I "isdigit("'
check lib/ctype.c '-I "^EXPORT_SYMBOL" -I "^#include <linux/export.h>" -B'
check arch/x86/include/asm/inat.h '-I "^#include [\"<]\(asm/\)*inat_types.h[\">]"'
check arch/x86/include/asm/insn.h '-I "^#include [\"<]\(asm/\)*inat.h[\">]"'
check arch/x86/lib/inat.c '-I "^#include [\"<]\(../include/\)*asm/insn.h[\">]"'
check arch/x86/lib/insn.c '-I "^#include [\"<]\(../include/\)*asm/in\(at\|sn\).h[\">]"'
# diff non-symmetric files
check_2 tools/perf/arch/x86/entry/syscalls/syscall_64.tbl arch/x86/entry/syscalls/syscall_64.tbl

View File

@@ -16,6 +16,7 @@
#include <unistd.h>
#include <linux/limits.h>
#include <linux/socket.h>
#include <pid_filter.h>
/* bpf-output associated map */
@@ -33,6 +34,20 @@ struct syscall {
bpf_map(syscalls, ARRAY, int, struct syscall, 512);
/*
* What to augment at entry?
*
* Pointer arg payloads (filenames, etc) passed from userspace to the kernel
*/
bpf_map(syscalls_sys_enter, PROG_ARRAY, u32, u32, 512);
/*
* What to augment at exit?
*
* Pointer arg payloads returned from the kernel (struct stat, etc) to userspace.
*/
bpf_map(syscalls_sys_exit, PROG_ARRAY, u32, u32, 512);
struct syscall_enter_args {
unsigned long long common_tp_fields;
long syscall_nr;
@@ -45,7 +60,7 @@ struct syscall_exit_args {
long ret;
};
struct augmented_filename {
struct augmented_arg {
unsigned int size;
int err;
char value[PATH_MAX];
@@ -53,45 +68,176 @@ struct augmented_filename {
pid_filter(pids_filtered);
struct augmented_args_filename {
struct augmented_args_payload {
struct syscall_enter_args args;
struct augmented_filename filename;
union {
struct {
struct augmented_arg arg, arg2;
};
struct sockaddr_storage saddr;
};
};
bpf_map(augmented_filename_map, PERCPU_ARRAY, int, struct augmented_args_filename, 1);
// We need more tmp space than the BPF stack can give us
bpf_map(augmented_args_tmp, PERCPU_ARRAY, int, struct augmented_args_payload, 1);
static inline struct augmented_args_payload *augmented_args_payload(void)
{
int key = 0;
return bpf_map_lookup_elem(&augmented_args_tmp, &key);
}
static inline int augmented__output(void *ctx, struct augmented_args_payload *args, int len)
{
/* If perf_event_output fails, return non-zero so that it gets recorded unaugmented */
return perf_event_output(ctx, &__augmented_syscalls__, BPF_F_CURRENT_CPU, args, len);
}
static inline
unsigned int augmented_filename__read(struct augmented_filename *augmented_filename,
const void *filename_arg, unsigned int filename_len)
unsigned int augmented_arg__read_str(struct augmented_arg *augmented_arg, const void *arg, unsigned int arg_len)
{
unsigned int len = sizeof(*augmented_filename);
int size = probe_read_str(&augmented_filename->value, filename_len, filename_arg);
unsigned int augmented_len = sizeof(*augmented_arg);
int string_len = probe_read_str(&augmented_arg->value, arg_len, arg);
augmented_filename->size = augmented_filename->err = 0;
augmented_arg->size = augmented_arg->err = 0;
/*
* probe_read_str may return < 0, e.g. -EFAULT
* So we leave that in the augmented_filename->size that userspace will
* So we leave that in the augmented_arg->size that userspace will
*/
if (size > 0) {
len -= sizeof(augmented_filename->value) - size;
len &= sizeof(augmented_filename->value) - 1;
augmented_filename->size = size;
if (string_len > 0) {
augmented_len -= sizeof(augmented_arg->value) - string_len;
augmented_len &= sizeof(augmented_arg->value) - 1;
augmented_arg->size = string_len;
} else {
/*
* So that userspace notices the error while still being able
* to skip this augmented arg record
*/
augmented_filename->err = size;
len = offsetof(struct augmented_filename, value);
augmented_arg->err = string_len;
augmented_len = offsetof(struct augmented_arg, value);
}
return len;
return augmented_len;
}
SEC("!raw_syscalls:unaugmented")
int syscall_unaugmented(struct syscall_enter_args *args)
{
return 1;
}
/*
* These will be tail_called from SEC("raw_syscalls:sys_enter"), so will find in
* augmented_args_tmp what was read by that raw_syscalls:sys_enter and go
* on from there, reading the first syscall arg as a string, i.e. open's
* filename.
*/
SEC("!syscalls:sys_enter_connect")
int sys_enter_connect(struct syscall_enter_args *args)
{
struct augmented_args_payload *augmented_args = augmented_args_payload();
const void *sockaddr_arg = (const void *)args->args[1];
unsigned int socklen = args->args[2];
unsigned int len = sizeof(augmented_args->args);
if (augmented_args == NULL)
return 1; /* Failure: don't filter */
if (socklen > sizeof(augmented_args->saddr))
socklen = sizeof(augmented_args->saddr);
probe_read(&augmented_args->saddr, socklen, sockaddr_arg);
return augmented__output(args, augmented_args, len + socklen);
}
SEC("!syscalls:sys_enter_sendto")
int sys_enter_sendto(struct syscall_enter_args *args)
{
struct augmented_args_payload *augmented_args = augmented_args_payload();
const void *sockaddr_arg = (const void *)args->args[4];
unsigned int socklen = args->args[5];
unsigned int len = sizeof(augmented_args->args);
if (augmented_args == NULL)
return 1; /* Failure: don't filter */
if (socklen > sizeof(augmented_args->saddr))
socklen = sizeof(augmented_args->saddr);
probe_read(&augmented_args->saddr, socklen, sockaddr_arg);
return augmented__output(args, augmented_args, len + socklen);
}
SEC("!syscalls:sys_enter_open")
int sys_enter_open(struct syscall_enter_args *args)
{
struct augmented_args_payload *augmented_args = augmented_args_payload();
const void *filename_arg = (const void *)args->args[0];
unsigned int len = sizeof(augmented_args->args);
if (augmented_args == NULL)
return 1; /* Failure: don't filter */
len += augmented_arg__read_str(&augmented_args->arg, filename_arg, sizeof(augmented_args->arg.value));
return augmented__output(args, augmented_args, len);
}
SEC("!syscalls:sys_enter_openat")
int sys_enter_openat(struct syscall_enter_args *args)
{
struct augmented_args_payload *augmented_args = augmented_args_payload();
const void *filename_arg = (const void *)args->args[1];
unsigned int len = sizeof(augmented_args->args);
if (augmented_args == NULL)
return 1; /* Failure: don't filter */
len += augmented_arg__read_str(&augmented_args->arg, filename_arg, sizeof(augmented_args->arg.value));
return augmented__output(args, augmented_args, len);
}
SEC("!syscalls:sys_enter_rename")
int sys_enter_rename(struct syscall_enter_args *args)
{
struct augmented_args_payload *augmented_args = augmented_args_payload();
const void *oldpath_arg = (const void *)args->args[0],
*newpath_arg = (const void *)args->args[1];
unsigned int len = sizeof(augmented_args->args), oldpath_len;
if (augmented_args == NULL)
return 1; /* Failure: don't filter */
oldpath_len = augmented_arg__read_str(&augmented_args->arg, oldpath_arg, sizeof(augmented_args->arg.value));
len += oldpath_len + augmented_arg__read_str((void *)(&augmented_args->arg) + oldpath_len, newpath_arg, sizeof(augmented_args->arg.value));
return augmented__output(args, augmented_args, len);
}
SEC("!syscalls:sys_enter_renameat")
int sys_enter_renameat(struct syscall_enter_args *args)
{
struct augmented_args_payload *augmented_args = augmented_args_payload();
const void *oldpath_arg = (const void *)args->args[1],
*newpath_arg = (const void *)args->args[3];
unsigned int len = sizeof(augmented_args->args), oldpath_len;
if (augmented_args == NULL)
return 1; /* Failure: don't filter */
oldpath_len = augmented_arg__read_str(&augmented_args->arg, oldpath_arg, sizeof(augmented_args->arg.value));
len += oldpath_len + augmented_arg__read_str((void *)(&augmented_args->arg) + oldpath_len, newpath_arg, sizeof(augmented_args->arg.value));
return augmented__output(args, augmented_args, len);
}
SEC("raw_syscalls:sys_enter")
int sys_enter(struct syscall_enter_args *args)
{
struct augmented_args_filename *augmented_args;
struct augmented_args_payload *augmented_args;
/*
* We start len, the amount of data that will be in the perf ring
* buffer, if this is not filtered out by one of pid_filter__has(),
@@ -103,142 +249,46 @@ int sys_enter(struct syscall_enter_args *args)
*/
unsigned int len = sizeof(augmented_args->args);
struct syscall *syscall;
int key = 0;
augmented_args = bpf_map_lookup_elem(&augmented_filename_map, &key);
if (augmented_args == NULL)
return 1;
if (pid_filter__has(&pids_filtered, getpid()))
return 0;
augmented_args = augmented_args_payload();
if (augmented_args == NULL)
return 1;
probe_read(&augmented_args->args, sizeof(augmented_args->args), args);
syscall = bpf_map_lookup_elem(&syscalls, &augmented_args->args.syscall_nr);
if (syscall == NULL || !syscall->enabled)
return 0;
/*
* Yonghong and Edward Cree sayz:
*
* https://www.spinics.net/lists/netdev/msg531645.html
*
* >> R0=inv(id=0) R1=inv2 R6=ctx(id=0,off=0,imm=0) R7=inv64 R10=fp0,call_-1
* >> 10: (bf) r1 = r6
* >> 11: (07) r1 += 16
* >> 12: (05) goto pc+2
* >> 15: (79) r3 = *(u64 *)(r1 +0)
* >> dereference of modified ctx ptr R1 off=16 disallowed
* > Aha, we at least got a different error message this time.
* > And indeed llvm has done that optimisation, rather than the more obvious
* > 11: r3 = *(u64 *)(r1 +16)
* > because it wants to have lots of reads share a single insn. You may be able
* > to defeat that optimisation by adding compiler barriers, idk. Maybe someone
* > with llvm knowledge can figure out how to stop it (ideally, llvm would know
* > when it's generating for bpf backend and not do that). -O0? ¯\_(ツ)_/¯
*
* The optimization mostly likes below:
*
* br1:
* ...
* r1 += 16
* goto merge
* br2:
* ...
* r1 += 20
* goto merge
* merge:
* *(u64 *)(r1 + 0)
*
* The compiler tries to merge common loads. There is no easy way to
* stop this compiler optimization without turning off a lot of other
* optimizations. The easiest way is to add barriers:
*
* __asm__ __volatile__("": : :"memory")
*
* after the ctx memory access to prevent their down stream merging.
* Jump to syscall specific augmenter, even if the default one,
* "!raw_syscalls:unaugmented" that will just return 1 to return the
* unaugmented tracepoint payload.
*/
/*
* For now copy just the first string arg, we need to improve the protocol
* and have more than one.
*
* Using the unrolled loop is not working, only when we do it manually,
* check this out later...
bpf_tail_call(args, &syscalls_sys_enter, augmented_args->args.syscall_nr);
u8 arg;
#pragma clang loop unroll(full)
for (arg = 0; arg < 6; ++arg) {
if (syscall->string_args_len[arg] != 0) {
filename_len = syscall->string_args_len[arg];
filename_arg = (const void *)args->args[arg];
__asm__ __volatile__("": : :"memory");
break;
}
}
verifier log:
; if (syscall->string_args_len[arg] != 0) {
37: (69) r3 = *(u16 *)(r0 +2)
R0=map_value(id=0,off=0,ks=4,vs=14,imm=0) R1_w=inv0 R2_w=map_value(id=0,off=2,ks=4,vs=14,imm=0) R6=ctx(id=0,off=0,imm=0) R7=map_value(id=0,off=0,ks=4,vs=4168,imm=0) R10=fp0,call_-1 fp-8=mmmmmmmm
; if (syscall->string_args_len[arg] != 0) {
38: (55) if r3 != 0x0 goto pc+5
R0=map_value(id=0,off=0,ks=4,vs=14,imm=0) R1=inv0 R2=map_value(id=0,off=2,ks=4,vs=14,imm=0) R3=inv0 R6=ctx(id=0,off=0,imm=0) R7=map_value(id=0,off=0,ks=4,vs=4168,imm=0) R10=fp0,call_-1 fp-8=mmmmmmmm
39: (b7) r1 = 1
; if (syscall->string_args_len[arg] != 0) {
40: (bf) r2 = r0
41: (07) r2 += 4
42: (69) r3 = *(u16 *)(r0 +4)
R0=map_value(id=0,off=0,ks=4,vs=14,imm=0) R1_w=inv1 R2_w=map_value(id=0,off=4,ks=4,vs=14,imm=0) R3_w=inv0 R6=ctx(id=0,off=0,imm=0) R7=map_value(id=0,off=0,ks=4,vs=4168,imm=0) R10=fp0,call_-1 fp-8=mmmmmmmm
; if (syscall->string_args_len[arg] != 0) {
43: (15) if r3 == 0x0 goto pc+32
R0=map_value(id=0,off=0,ks=4,vs=14,imm=0) R1=inv1 R2=map_value(id=0,off=4,ks=4,vs=14,imm=0) R3=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff)) R6=ctx(id=0,off=0,imm=0) R7=map_value(id=0,off=0,ks=4,vs=4168,imm=0) R10=fp0,call_-1 fp-8=mmmmmmmm
; filename_arg = (const void *)args->args[arg];
44: (67) r1 <<= 3
45: (bf) r3 = r6
46: (0f) r3 += r1
47: (b7) r5 = 64
48: (79) r3 = *(u64 *)(r3 +16)
dereference of modified ctx ptr R3 off=8 disallowed
processed 46 insns (limit 1000000) max_states_per_insn 0 total_states 12 peak_states 12 mark_read 7
*/
#define __loop_iter(arg) \
if (syscall->string_args_len[arg] != 0) { \
unsigned int filename_len = syscall->string_args_len[arg]; \
const void *filename_arg = (const void *)args->args[arg]; \
if (filename_len <= sizeof(augmented_args->filename.value)) \
len += augmented_filename__read(&augmented_args->filename, filename_arg, filename_len);
#define loop_iter_first() __loop_iter(0); }
#define loop_iter(arg) else __loop_iter(arg); }
#define loop_iter_last(arg) else __loop_iter(arg); __asm__ __volatile__("": : :"memory"); }
loop_iter_first()
loop_iter(1)
loop_iter(2)
loop_iter(3)
loop_iter(4)
loop_iter_last(5)
/* If perf_event_output fails, return non-zero so that it gets recorded unaugmented */
return perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU, augmented_args, len);
// If not found on the PROG_ARRAY syscalls map, then we're filtering it:
return 0;
}
SEC("raw_syscalls:sys_exit")
int sys_exit(struct syscall_exit_args *args)
{
struct syscall_exit_args exit_args;
struct syscall *syscall;
if (pid_filter__has(&pids_filtered, getpid()))
return 0;
probe_read(&exit_args, sizeof(exit_args), args);
syscall = bpf_map_lookup_elem(&syscalls, &exit_args.syscall_nr);
if (syscall == NULL || !syscall->enabled)
return 0;
return 1;
/*
* Jump to syscall specific return augmenter, even if the default one,
* "!raw_syscalls:unaugmented" that will just return 1 to return the
* unaugmented tracepoint payload.
*/
bpf_tail_call(args, &syscalls_sys_exit, exit_args.syscall_nr);
/*
* If not found on the PROG_ARRAY syscalls map, then we're filtering it:
*/
return 0;
}
license(GPL);
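
The rewritten sys_enter()/sys_exit() above no longer copy pointer arguments inline: they look the syscall number up in the syscalls_sys_enter / syscalls_sys_exit PROG_ARRAY maps and bpf_tail_call() into a per-syscall augmenter, falling back to the "!raw_syscalls:unaugmented" program. That only works if user space first fills those PROG_ARRAYs with program file descriptors keyed by syscall number; a syscall with no entry falls through to the "return 0" at the bottom, i.e. it is filtered. Below is an illustrative sketch of that user-space wiring, not part of the patch: it assumes the map and program fds already came from whatever BPF loader is in use (e.g. libbpf), and the helper name set_augmenter is made up for the example.

/*
 * Illustrative sketch, not from this patch: point a syscall number at its
 * augmenter program in the "syscalls_sys_enter" PROG_ARRAY, so that the
 * bpf_tail_call() in sys_enter() above lands on it.
 */
#include <bpf/bpf.h>    /* bpf_map_update_elem() syscall wrapper, BPF_ANY */
#include <stdio.h>

static int set_augmenter(int prog_array_fd, int syscall_nr, int augmenter_prog_fd)
{
    /* key: syscall number; value: fd of a SEC("!syscalls:sys_enter_*") program */
    int err = bpf_map_update_elem(prog_array_fd, &syscall_nr,
                                  &augmenter_prog_fd, BPF_ANY);

    if (err)
        fprintf(stderr, "failed to install augmenter for syscall %d\n", syscall_nr);
    return err;
}

Traced syscalls that need no argument copying would still get an entry pointing at the "!raw_syscalls:unaugmented" program, whose "return 1" keeps the plain tracepoint payload flowing.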

View File

@@ -45,6 +45,8 @@ struct ____btf_map_##name __attribute__((section(".maps." #name), used)) \
static int (*bpf_map_update_elem)(struct bpf_map *map, void *key, void *value, u64 flags) = (void *)BPF_FUNC_map_update_elem;
static void *(*bpf_map_lookup_elem)(struct bpf_map *map, void *key) = (void *)BPF_FUNC_map_lookup_elem;
static void (*bpf_tail_call)(void *ctx, void *map, int index) = (void *)BPF_FUNC_tail_call;
#define SEC(NAME) __attribute__((section(NAME), used))
#define probe(function, vars) \

tools/perf/lib/Build (new file, 12 lines)
View File

@@ -0,0 +1,12 @@
libperf-y += core.o
libperf-y += cpumap.o
libperf-y += threadmap.o
libperf-y += evsel.o
libperf-y += evlist.o
libperf-y += zalloc.o
libperf-y += xyarray.o
libperf-y += lib.o
$(OUTPUT)zalloc.o: ../../lib/zalloc.c FORCE
$(call rule_mkdir)
$(call if_changed_dep,cc_o_c)

View File

@@ -0,0 +1,7 @@
all:
rst2man man/libperf.rst > man/libperf.7
rst2pdf tutorial/tutorial.rst
clean:
rm -f man/libperf.7
rm -f tutorial/tutorial.pdf

View File

@@ -0,0 +1,100 @@
.. SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
libperf
The libperf library provides an API to access the linux kernel perf
events subsystem. It provides the following high level objects:
- struct perf_cpu_map
- struct perf_thread_map
- struct perf_evlist
- struct perf_evsel
reference
=========
Function reference by header files:
perf/core.h
-----------
.. code-block:: c
typedef int (\*libperf_print_fn_t)(enum libperf_print_level level,
const char \*, va_list ap);
void libperf_set_print(libperf_print_fn_t fn);
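
As a quick illustration of the print hook (a sketch only, not taken from the patch; the callback name is made up), a caller-supplied function matching the typedef above can simply forward everything to stderr:

#include <stdarg.h>
#include <stdio.h>
#include <perf/core.h>

/* Sketch: forward all libperf messages to stderr. */
static int print_to_stderr(enum libperf_print_level level,
                           const char *fmt, va_list ap)
{
    (void)level;    /* a real callback might filter on the level */
    return vfprintf(stderr, fmt, ap);
}

/* ... and early in main():  libperf_set_print(print_to_stderr); */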
perf/cpumap.h
-------------
.. code-block:: c
struct perf_cpu_map \*perf_cpu_map__dummy_new(void);
struct perf_cpu_map \*perf_cpu_map__new(const char \*cpu_list);
struct perf_cpu_map \*perf_cpu_map__read(FILE \*file);
struct perf_cpu_map \*perf_cpu_map__get(struct perf_cpu_map \*map);
void perf_cpu_map__put(struct perf_cpu_map \*map);
int perf_cpu_map__cpu(const struct perf_cpu_map \*cpus, int idx);
int perf_cpu_map__nr(const struct perf_cpu_map \*cpus);
perf_cpu_map__for_each_cpu(cpu, idx, cpus)
perf/threadmap.h
----------------
.. code-block:: c
struct perf_thread_map \*perf_thread_map__new_dummy(void);
void perf_thread_map__set_pid(struct perf_thread_map \*map, int thread, pid_t pid);
char \*perf_thread_map__comm(struct perf_thread_map \*map, int thread);
struct perf_thread_map \*perf_thread_map__get(struct perf_thread_map \*map);
void perf_thread_map__put(struct perf_thread_map \*map);
perf/evlist.h
-------------
.. code-block:: c
void perf_evlist__init(struct perf_evlist \*evlist);
void perf_evlist__add(struct perf_evlist \*evlist,
struct perf_evsel \*evsel);
void perf_evlist__remove(struct perf_evlist \*evlist,
struct perf_evsel \*evsel);
struct perf_evlist \*perf_evlist__new(void);
void perf_evlist__delete(struct perf_evlist \*evlist);
struct perf_evsel\* perf_evlist__next(struct perf_evlist \*evlist,
struct perf_evsel \*evsel);
int perf_evlist__open(struct perf_evlist \*evlist);
void perf_evlist__close(struct perf_evlist \*evlist);
void perf_evlist__enable(struct perf_evlist \*evlist);
void perf_evlist__disable(struct perf_evlist \*evlist);
perf_evlist__for_each_evsel(evlist, pos)
void perf_evlist__set_maps(struct perf_evlist \*evlist,
struct perf_cpu_map \*cpus,
struct perf_thread_map \*threads);
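
A minimal sketch of how these evlist calls fit together, using only the prototypes listed in this file (illustrative, not part of the patch; the function name is made up and error handling is trimmed):

#include <linux/perf_event.h>
#include <perf/evlist.h>
#include <perf/evsel.h>
#include <stdio.h>

/* Sketch: build an evlist from two hardware evsels and walk it. */
static struct perf_evlist *cycles_plus_instructions(void)
{
    struct perf_event_attr attr = {
        .type   = PERF_TYPE_HARDWARE,
        .config = PERF_COUNT_HW_CPU_CYCLES,
    };
    struct perf_evlist *evlist = perf_evlist__new();
    struct perf_evsel *evsel, *pos;

    if (!evlist)
        return NULL;

    evsel = perf_evsel__new(&attr);         /* cycles */
    if (evsel)
        perf_evlist__add(evlist, evsel);

    attr.config = PERF_COUNT_HW_INSTRUCTIONS;
    evsel = perf_evsel__new(&attr);         /* instructions */
    if (evsel)
        perf_evlist__add(evlist, evsel);

    perf_evlist__for_each_evsel(evlist, pos)
        printf("added event, config %llu\n",
               (unsigned long long)perf_evsel__attr(pos)->config);

    return evlist;
}

Opening would then pair the evlist with cpu/thread maps via perf_evlist__set_maps() followed by perf_evlist__open(); the caller is assumed to release everything with perf_evlist__delete() when done.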
perf/evsel.h
------------
.. code-block:: c
struct perf_counts_values {
union {
struct {
uint64_t val;
uint64_t ena;
uint64_t run;
};
uint64_t values[3];
};
};
void perf_evsel__init(struct perf_evsel \*evsel,
struct perf_event_attr \*attr);
struct perf_evsel \*perf_evsel__new(struct perf_event_attr \*attr);
void perf_evsel__delete(struct perf_evsel \*evsel);
int perf_evsel__open(struct perf_evsel \*evsel, struct perf_cpu_map \*cpus,
struct perf_thread_map \*threads);
void perf_evsel__close(struct perf_evsel \*evsel);
int perf_evsel__read(struct perf_evsel \*evsel, int cpu, int thread,
struct perf_counts_values \*count);
int perf_evsel__enable(struct perf_evsel \*evsel);
int perf_evsel__disable(struct perf_evsel \*evsel);
int perf_evsel__apply_filter(struct perf_evsel \*evsel, const char \*filter);
struct perf_cpu_map \*perf_evsel__cpus(struct perf_evsel \*evsel);
struct perf_thread_map \*perf_evsel__threads(struct perf_evsel \*evsel);
struct perf_event_attr \*perf_evsel__attr(struct perf_evsel \*evsel);
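
Putting the threadmap.h and evsel.h pieces together, a minimal counting sketch looks roughly like the following (illustrative, not part of the patch; error handling is trimmed, and passing a NULL cpu map is assumed to mean "count wherever the thread runs", i.e. per-thread counting):

#include <linux/perf_event.h>
#include <perf/evsel.h>
#include <perf/threadmap.h>
#include <stdio.h>

int main(void)
{
    struct perf_event_attr attr = {
        .type   = PERF_TYPE_SOFTWARE,
        .config = PERF_COUNT_SW_TASK_CLOCK,
    };
    struct perf_thread_map *threads = perf_thread_map__new_dummy();
    struct perf_evsel *evsel = perf_evsel__new(&attr);
    struct perf_counts_values counts = { .val = 0 };

    if (!threads || !evsel)
        return 1;

    perf_thread_map__set_pid(threads, 0, 0);    /* pid 0: the calling process */

    if (perf_evsel__open(evsel, NULL, threads) < 0)
        return 1;

    for (volatile int i = 0; i < 100000000; i++)
        ;                                       /* do some work to accumulate task time */

    perf_evsel__read(evsel, 0, 0, &counts);
    printf("task-clock: %llu\n", (unsigned long long)counts.val);

    perf_evsel__close(evsel);
    perf_evsel__delete(evsel);
    perf_thread_map__put(threads);
    return 0;
}

counts.ena and counts.run become meaningful once the attr's read_format also asks for the enabled/running times.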

View File

@@ -0,0 +1,123 @@
.. SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
libperf tutorial
================
Compile and install libperf from kernel sources
===============================================
.. code-block:: bash
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux/tools/perf/lib
make
sudo make install prefix=/usr
Libperf object
==============
The libperf library provides several high level objects:
struct perf_cpu_map
Provides a cpu list abstraction.
struct perf_thread_map
Provides a thread list abstraction.
struct perf_evsel
Provides an abstraction for a single perf event.
struct perf_evlist
Gathers several struct perf_evsel objects and performs functions on all of them.
The exported API binds these objects together,
for full reference see the libperf.7 man page.
Examples
========
The examples aim to explain libperf functionality through simple use cases.
They are based on a checked-out Linux kernel git tree:
.. code-block:: bash
$ cd tools/perf/lib/Documentation/tutorial/
$ ls -d ex-*
ex-1-compile ex-2-evsel-stat ex-3-evlist-stat
ex-1-compile example
====================
This example shows the basic usage of *struct perf_cpu_map*,
how to create it and display its cpus:
.. code-block:: bash
$ cd ex-1-compile/
$ make
gcc -o test test.c -lperf
$ ./test
0 1 2 3 4 5 6 7
The full code listing is here:
.. code-block:: c
1 #include <perf/cpumap.h>
2
3 int main(int argc, char **Argv)
4 {
5 struct perf_cpu_map *cpus;
6 int cpu, tmp;
7
8 cpus = perf_cpu_map__new(NULL);
9
10 perf_cpu_map__for_each_cpu(cpu, tmp, cpus)
11 fprintf(stdout, "%d ", cpu);
12
13 fprintf(stdout, "\n");
14
15 perf_cpu_map__put(cpus);
16 return 0;
17 }
First you need to include the proper header to get the *struct perf_cpu_map*
declaration and functions:
.. code-block:: c
1 #include <perf/cpumap.h>
The *struct perf_cpu_map* object is created by the *perf_cpu_map__new* call.
The *NULL* argument asks it to populate the object with the current online CPU list:
.. code-block:: c
8 cpus = perf_cpu_map__new(NULL);
This is paired with a *perf_cpu_map__put* call, which drops its reference at the end, possibly deleting the object.
.. code-block:: c
15 perf_cpu_map__put(cpus);
The iteration through the *struct perf_cpu_map* CPUs is done using the *perf_cpu_map__for_each_cpu*
macro which requires 3 arguments:
- cpu - the cpu number
- tmp - iteration helper variable
- cpus - the *struct perf_cpu_map* object
.. code-block:: c
10 perf_cpu_map__for_each_cpu(cpu, tmp, cpus)
11 fprintf(stdout, "%d ", cpu);
ex-2-evsel-stat example
=======================
TBD
ex-3-evlist-stat example
========================
TBD

tools/perf/lib/Makefile

@@ -0,0 +1,158 @@
# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
# Most of this file is copied from tools/lib/bpf/Makefile
LIBPERF_VERSION = 0
LIBPERF_PATCHLEVEL = 0
LIBPERF_EXTRAVERSION = 1
MAKEFLAGS += --no-print-directory
ifeq ($(srctree),)
srctree := $(patsubst %/,%,$(dir $(CURDIR)))
srctree := $(patsubst %/,%,$(dir $(srctree)))
srctree := $(patsubst %/,%,$(dir $(srctree)))
#$(info Determined 'srctree' to be $(srctree))
endif
INSTALL = install
# Use DESTDIR for installing into a different root directory.
# This is useful for building a package. The program will be
# installed in this directory as if it was the root directory.
# Then the build tool can move it later.
DESTDIR ?=
DESTDIR_SQ = '$(subst ','\'',$(DESTDIR))'
include $(srctree)/tools/scripts/Makefile.include
include $(srctree)/tools/scripts/Makefile.arch
ifeq ($(LP64), 1)
libdir_relative = lib64
else
libdir_relative = lib
endif
prefix ?=
libdir = $(prefix)/$(libdir_relative)
# Shell quotes
libdir_SQ = $(subst ','\'',$(libdir))
libdir_relative_SQ = $(subst ','\'',$(libdir_relative))
ifeq ("$(origin V)", "command line")
VERBOSE = $(V)
endif
ifndef VERBOSE
VERBOSE = 0
endif
ifeq ($(VERBOSE),1)
Q =
else
Q = @
endif
# Set compile option CFLAGS
ifdef EXTRA_CFLAGS
CFLAGS := $(EXTRA_CFLAGS)
else
CFLAGS := -g -Wall
endif
INCLUDES = -I$(srctree)/tools/perf/lib/include -I$(srctree)/tools/include -I$(srctree)/tools/arch/$(SRCARCH)/include/ -I$(srctree)/tools/arch/$(SRCARCH)/include/uapi -I$(srctree)/tools/include/uapi
# Append required CFLAGS
override CFLAGS += $(EXTRA_WARNINGS)
override CFLAGS += -Werror -Wall
override CFLAGS += -fPIC
override CFLAGS += $(INCLUDES)
override CFLAGS += -fvisibility=hidden
all:
export srctree OUTPUT CC LD CFLAGS V
export DESTDIR DESTDIR_SQ
include $(srctree)/tools/build/Makefile.include
VERSION_SCRIPT := libperf.map
PATCHLEVEL = $(LIBPERF_PATCHLEVEL)
EXTRAVERSION = $(LIBPERF_EXTRAVERSION)
VERSION = $(LIBPERF_VERSION).$(LIBPERF_PATCHLEVEL).$(LIBPERF_EXTRAVERSION)
LIBPERF_SO := $(OUTPUT)libperf.so.$(VERSION)
LIBPERF_A := $(OUTPUT)libperf.a
LIBPERF_IN := $(OUTPUT)libperf-in.o
LIBPERF_PC := $(OUTPUT)libperf.pc
LIBPERF_ALL := $(LIBPERF_A) $(OUTPUT)libperf.so*
$(LIBPERF_IN): FORCE
$(Q)$(MAKE) $(build)=libperf
$(LIBPERF_A): $(LIBPERF_IN)
$(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $(LIBPERF_IN)
$(LIBPERF_SO): $(LIBPERF_IN)
$(QUIET_LINK)$(CC) --shared -Wl,-soname,libperf.so \
-Wl,--version-script=$(VERSION_SCRIPT) $^ -o $@
@ln -sf $(@F) $(OUTPUT)libperf.so
@ln -sf $(@F) $(OUTPUT)libperf.so.$(LIBPERF_VERSION)
libs: $(LIBPERF_A) $(LIBPERF_SO) $(LIBPERF_PC)
all: fixdep
$(Q)$(MAKE) libs
clean:
$(call QUIET_CLEAN, libperf) $(RM) $(LIBPERF_A) \
*.o *~ *.a *.so *.so.$(VERSION) *.so.$(LIBPERF_VERSION) .*.d .*.cmd LIBPERF-CFLAGS $(LIBPERF_PC)
$(Q)$(MAKE) -C tests clean
tests:
$(Q)$(MAKE) -C tests
$(Q)$(MAKE) -C tests run
$(LIBPERF_PC):
$(QUIET_GEN)sed -e "s|@PREFIX@|$(prefix)|" \
-e "s|@LIBDIR@|$(libdir_SQ)|" \
-e "s|@VERSION@|$(VERSION)|" \
< libperf.pc.template > $@
define do_install_mkdir
if [ ! -d '$(DESTDIR_SQ)$1' ]; then \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$1'; \
fi
endef
define do_install
if [ ! -d '$(DESTDIR_SQ)$2' ]; then \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$2'; \
fi; \
$(INSTALL) $1 $(if $3,-m $3,) '$(DESTDIR_SQ)$2'
endef
install_lib: libs
$(call QUIET_INSTALL, $(LIBPERF_ALL)) \
$(call do_install_mkdir,$(libdir_SQ)); \
cp -fpR $(LIBPERF_ALL) $(DESTDIR)$(libdir_SQ)
install_headers:
$(call QUIET_INSTALL, headers) \
$(call do_install,include/perf/core.h,$(prefix)/include/perf,644); \
$(call do_install,include/perf/cpumap.h,$(prefix)/include/perf,644); \
$(call do_install,include/perf/threadmap.h,$(prefix)/include/perf,644); \
$(call do_install,include/perf/evlist.h,$(prefix)/include/perf,644); \
$(call do_install,include/perf/evsel.h,$(prefix)/include/perf,644);
install_pkgconfig: $(LIBPERF_PC)
$(call QUIET_INSTALL, $(LIBPERF_PC)) \
$(call do_install,$(LIBPERF_PC),$(libdir_SQ)/pkgconfig,644)
install: install_lib install_headers install_pkgconfig
FORCE:
.PHONY: all install clean tests FORCE

tools/perf/lib/core.c

@@ -0,0 +1,34 @@
// SPDX-License-Identifier: GPL-2.0-only
#define __printf(a, b) __attribute__((format(printf, a, b)))
#include <stdio.h>
#include <stdarg.h>
#include <perf/core.h>
#include "internal.h"
static int __base_pr(enum libperf_print_level level, const char *format,
va_list args)
{
return vfprintf(stderr, format, args);
}
static libperf_print_fn_t __libperf_pr = __base_pr;
void libperf_set_print(libperf_print_fn_t fn)
{
__libperf_pr = fn;
}
__printf(2, 3)
void libperf_print(enum libperf_print_level level, const char *format, ...)
{
va_list args;
if (!__libperf_pr)
return;
va_start(args, format);
__libperf_pr(level, format, args);
va_end(args);
}
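All library messages funnel through the callback registered here, so a client can replace the default stderr printer. The sketch below is a hypothetical client, assuming only the libperf_print_fn_t signature visible above; the concrete libperf_print_level enumerators live in perf/core.h and are not shown in this diff, so the callback just prints the numeric level.
.. code-block:: c

#include <stdarg.h>
#include <stdio.h>
#include <perf/core.h>

/* hypothetical callback matching libperf_print_fn_t: prefix the numeric level */
static int my_pr(enum libperf_print_level level, const char *fmt, va_list ap)
{
	fprintf(stderr, "libperf[%d]: ", level);
	return vfprintf(stderr, fmt, ap);
}

int main(void)
{
	libperf_set_print(my_pr); /* later libperf messages go through my_pr */
	/* ... use the rest of the libperf API ... */
	return 0;
}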

tools/perf/lib/cpumap.c

@@ -0,0 +1,262 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <perf/cpumap.h>
#include <stdlib.h>
#include <linux/refcount.h>
#include <internal/cpumap.h>
#include <asm/bug.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <ctype.h>
#include <limits.h>
struct perf_cpu_map *perf_cpu_map__dummy_new(void)
{
struct perf_cpu_map *cpus = malloc(sizeof(*cpus) + sizeof(int));
if (cpus != NULL) {
cpus->nr = 1;
cpus->map[0] = -1;
refcount_set(&cpus->refcnt, 1);
}
return cpus;
}
static void cpu_map__delete(struct perf_cpu_map *map)
{
if (map) {
WARN_ONCE(refcount_read(&map->refcnt) != 0,
"cpu_map refcnt unbalanced\n");
free(map);
}
}
struct perf_cpu_map *perf_cpu_map__get(struct perf_cpu_map *map)
{
if (map)
refcount_inc(&map->refcnt);
return map;
}
void perf_cpu_map__put(struct perf_cpu_map *map)
{
if (map && refcount_dec_and_test(&map->refcnt))
cpu_map__delete(map);
}
static struct perf_cpu_map *cpu_map__default_new(void)
{
struct perf_cpu_map *cpus;
int nr_cpus;
nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
if (nr_cpus < 0)
return NULL;
cpus = malloc(sizeof(*cpus) + nr_cpus * sizeof(int));
if (cpus != NULL) {
int i;
for (i = 0; i < nr_cpus; ++i)
cpus->map[i] = i;
cpus->nr = nr_cpus;
refcount_set(&cpus->refcnt, 1);
}
return cpus;
}
static struct perf_cpu_map *cpu_map__trim_new(int nr_cpus, int *tmp_cpus)
{
size_t payload_size = nr_cpus * sizeof(int);
struct perf_cpu_map *cpus = malloc(sizeof(*cpus) + payload_size);
if (cpus != NULL) {
cpus->nr = nr_cpus;
memcpy(cpus->map, tmp_cpus, payload_size);
refcount_set(&cpus->refcnt, 1);
}
return cpus;
}
struct perf_cpu_map *perf_cpu_map__read(FILE *file)
{
struct perf_cpu_map *cpus = NULL;
int nr_cpus = 0;
int *tmp_cpus = NULL, *tmp;
int max_entries = 0;
int n, cpu, prev;
char sep;
sep = 0;
prev = -1;
for (;;) {
n = fscanf(file, "%u%c", &cpu, &sep);
if (n <= 0)
break;
if (prev >= 0) {
int new_max = nr_cpus + cpu - prev - 1;
WARN_ONCE(new_max >= MAX_NR_CPUS, "Perf can support %d CPUs. "
"Consider raising MAX_NR_CPUS\n", MAX_NR_CPUS);
if (new_max >= max_entries) {
max_entries = new_max + MAX_NR_CPUS / 2;
tmp = realloc(tmp_cpus, max_entries * sizeof(int));
if (tmp == NULL)
goto out_free_tmp;
tmp_cpus = tmp;
}
while (++prev < cpu)
tmp_cpus[nr_cpus++] = prev;
}
if (nr_cpus == max_entries) {
max_entries += MAX_NR_CPUS;
tmp = realloc(tmp_cpus, max_entries * sizeof(int));
if (tmp == NULL)
goto out_free_tmp;
tmp_cpus = tmp;
}
tmp_cpus[nr_cpus++] = cpu;
if (n == 2 && sep == '-')
prev = cpu;
else
prev = -1;
if (n == 1 || sep == '\n')
break;
}
if (nr_cpus > 0)
cpus = cpu_map__trim_new(nr_cpus, tmp_cpus);
else
cpus = cpu_map__default_new();
out_free_tmp:
free(tmp_cpus);
return cpus;
}
static struct perf_cpu_map *cpu_map__read_all_cpu_map(void)
{
struct perf_cpu_map *cpus = NULL;
FILE *onlnf;
onlnf = fopen("/sys/devices/system/cpu/online", "r");
if (!onlnf)
return cpu_map__default_new();
cpus = perf_cpu_map__read(onlnf);
fclose(onlnf);
return cpus;
}
struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list)
{
struct perf_cpu_map *cpus = NULL;
unsigned long start_cpu, end_cpu = 0;
char *p = NULL;
int i, nr_cpus = 0;
int *tmp_cpus = NULL, *tmp;
int max_entries = 0;
if (!cpu_list)
return cpu_map__read_all_cpu_map();
/*
* must handle the case of empty cpumap to cover
* TOPOLOGY header for NUMA nodes with no CPU
* (e.g., because of CPU hotplug)
*/
if (!isdigit(*cpu_list) && *cpu_list != '\0')
goto out;
while (isdigit(*cpu_list)) {
p = NULL;
start_cpu = strtoul(cpu_list, &p, 0);
if (start_cpu >= INT_MAX
|| (*p != '\0' && *p != ',' && *p != '-'))
goto invalid;
if (*p == '-') {
cpu_list = ++p;
p = NULL;
end_cpu = strtoul(cpu_list, &p, 0);
if (end_cpu >= INT_MAX || (*p != '\0' && *p != ','))
goto invalid;
if (end_cpu < start_cpu)
goto invalid;
} else {
end_cpu = start_cpu;
}
WARN_ONCE(end_cpu >= MAX_NR_CPUS, "Perf can support %d CPUs. "
"Consider raising MAX_NR_CPUS\n", MAX_NR_CPUS);
for (; start_cpu <= end_cpu; start_cpu++) {
/* check for duplicates */
for (i = 0; i < nr_cpus; i++)
if (tmp_cpus[i] == (int)start_cpu)
goto invalid;
if (nr_cpus == max_entries) {
max_entries += MAX_NR_CPUS;
tmp = realloc(tmp_cpus, max_entries * sizeof(int));
if (tmp == NULL)
goto invalid;
tmp_cpus = tmp;
}
tmp_cpus[nr_cpus++] = (int)start_cpu;
}
if (*p)
++p;
cpu_list = p;
}
if (nr_cpus > 0)
cpus = cpu_map__trim_new(nr_cpus, tmp_cpus);
else if (*cpu_list != '\0')
cpus = cpu_map__default_new();
else
cpus = perf_cpu_map__dummy_new();
invalid:
free(tmp_cpus);
out:
return cpus;
}
int perf_cpu_map__cpu(const struct perf_cpu_map *cpus, int idx)
{
if (idx < cpus->nr)
return cpus->map[idx];
return -1;
}
int perf_cpu_map__nr(const struct perf_cpu_map *cpus)
{
return cpus ? cpus->nr : 1;
}
bool perf_cpu_map__empty(const struct perf_cpu_map *map)
{
return map ? map->map[0] == -1 : true;
}
int perf_cpu_map__idx(struct perf_cpu_map *cpus, int cpu)
{
int i;
for (i = 0; i < cpus->nr; ++i) {
if (cpus->map[i] == cpu)
return i;
}
return -1;
}
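Besides NULL, perf_cpu_map__new() also accepts an explicit cpu list string, which the parsing loop above expands; ranges and commas may be mixed. A short sketch, not one of the shipped examples, with error handling kept to a minimum:
.. code-block:: c

#include <perf/cpumap.h>
#include <stdio.h>

int main(void)
{
	/* "0-2,7" expands to cpus 0, 1, 2 and 7 */
	struct perf_cpu_map *cpus = perf_cpu_map__new("0-2,7");
	int idx;

	if (!cpus)
		return 1;

	for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++)
		printf("%d ", perf_cpu_map__cpu(cpus, idx));
	printf("\n"); /* prints: 0 1 2 7 */

	perf_cpu_map__put(cpus);
	return 0;
}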

tools/perf/lib/evlist.c

@@ -0,0 +1,159 @@
// SPDX-License-Identifier: GPL-2.0
#include <perf/evlist.h>
#include <perf/evsel.h>
#include <linux/list.h>
#include <internal/evlist.h>
#include <internal/evsel.h>
#include <linux/zalloc.h>
#include <stdlib.h>
#include <perf/cpumap.h>
#include <perf/threadmap.h>
void perf_evlist__init(struct perf_evlist *evlist)
{
INIT_LIST_HEAD(&evlist->entries);
evlist->nr_entries = 0;
}
static void __perf_evlist__propagate_maps(struct perf_evlist *evlist,
struct perf_evsel *evsel)
{
/*
* We already have cpus for evsel (via PMU sysfs) so
* keep it, if there's no target cpu list defined.
*/
if (!evsel->own_cpus || evlist->has_user_cpus) {
perf_cpu_map__put(evsel->cpus);
evsel->cpus = perf_cpu_map__get(evlist->cpus);
} else if (evsel->cpus != evsel->own_cpus) {
perf_cpu_map__put(evsel->cpus);
evsel->cpus = perf_cpu_map__get(evsel->own_cpus);
}
perf_thread_map__put(evsel->threads);
evsel->threads = perf_thread_map__get(evlist->threads);
}
static void perf_evlist__propagate_maps(struct perf_evlist *evlist)
{
struct perf_evsel *evsel;
perf_evlist__for_each_evsel(evlist, evsel)
__perf_evlist__propagate_maps(evlist, evsel);
}
void perf_evlist__add(struct perf_evlist *evlist,
struct perf_evsel *evsel)
{
list_add_tail(&evsel->node, &evlist->entries);
evlist->nr_entries += 1;
__perf_evlist__propagate_maps(evlist, evsel);
}
void perf_evlist__remove(struct perf_evlist *evlist,
struct perf_evsel *evsel)
{
list_del_init(&evsel->node);
evlist->nr_entries -= 1;
}
struct perf_evlist *perf_evlist__new(void)
{
struct perf_evlist *evlist = zalloc(sizeof(*evlist));
if (evlist != NULL)
perf_evlist__init(evlist);
return evlist;
}
struct perf_evsel *
perf_evlist__next(struct perf_evlist *evlist, struct perf_evsel *prev)
{
struct perf_evsel *next;
if (!prev) {
next = list_first_entry(&evlist->entries,
struct perf_evsel,
node);
} else {
next = list_next_entry(prev, node);
}
/* An empty list is noticed here, so there is no need to check it on entry. */
if (&next->node == &evlist->entries)
return NULL;
return next;
}
void perf_evlist__delete(struct perf_evlist *evlist)
{
free(evlist);
}
void perf_evlist__set_maps(struct perf_evlist *evlist,
struct perf_cpu_map *cpus,
struct perf_thread_map *threads)
{
/*
* Allow for the possibility that one or another of the maps isn't being
* changed i.e. don't put it. Note we are assuming the maps that are
* being applied are brand new and evlist is taking ownership of the
* original reference count of 1. If that is not the case it is up to
* the caller to increase the reference count.
*/
if (cpus != evlist->cpus) {
perf_cpu_map__put(evlist->cpus);
evlist->cpus = perf_cpu_map__get(cpus);
}
if (threads != evlist->threads) {
perf_thread_map__put(evlist->threads);
evlist->threads = perf_thread_map__get(threads);
}
perf_evlist__propagate_maps(evlist);
}
int perf_evlist__open(struct perf_evlist *evlist)
{
struct perf_evsel *evsel;
int err;
perf_evlist__for_each_entry(evlist, evsel) {
err = perf_evsel__open(evsel, evsel->cpus, evsel->threads);
if (err < 0)
goto out_err;
}
return 0;
out_err:
perf_evlist__close(evlist);
return err;
}
void perf_evlist__close(struct perf_evlist *evlist)
{
struct perf_evsel *evsel;
perf_evlist__for_each_entry_reverse(evlist, evsel)
perf_evsel__close(evsel);
}
void perf_evlist__enable(struct perf_evlist *evlist)
{
struct perf_evsel *evsel;
perf_evlist__for_each_entry(evlist, evsel)
perf_evsel__enable(evsel);
}
void perf_evlist__disable(struct perf_evlist *evlist)
{
struct perf_evsel *evsel;
perf_evlist__for_each_entry(evlist, evsel)
perf_evsel__disable(evsel);
}
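perf_evlist__next() starts from the head of the list when *prev* is NULL and returns NULL once it runs past the tail, which is the convention the for-each macros are built on. Walking an evlist by hand therefore looks like the hypothetical helper below (a sketch, assuming perf_evlist__next() is exported through perf/evlist.h as the public iteration macro requires):
.. code-block:: c

#include <perf/evlist.h>
#include <perf/evsel.h>

/* hypothetical helper: manual walk, equivalent to the for-each macros */
static void enable_all(struct perf_evlist *evlist)
{
	struct perf_evsel *evsel = NULL;

	while ((evsel = perf_evlist__next(evlist, evsel)) != NULL)
		perf_evsel__enable(evsel);
}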

tools/perf/lib/evsel.c

@@ -0,0 +1,232 @@
// SPDX-License-Identifier: GPL-2.0
#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <perf/evsel.h>
#include <perf/cpumap.h>
#include <perf/threadmap.h>
#include <linux/list.h>
#include <internal/evsel.h>
#include <linux/zalloc.h>
#include <stdlib.h>
#include <internal/xyarray.h>
#include <internal/cpumap.h>
#include <internal/threadmap.h>
#include <internal/lib.h>
#include <linux/string.h>
#include <sys/ioctl.h>
void perf_evsel__init(struct perf_evsel *evsel, struct perf_event_attr *attr)
{
INIT_LIST_HEAD(&evsel->node);
evsel->attr = *attr;
}
struct perf_evsel *perf_evsel__new(struct perf_event_attr *attr)
{
struct perf_evsel *evsel = zalloc(sizeof(*evsel));
if (evsel != NULL)
perf_evsel__init(evsel, attr);
return evsel;
}
void perf_evsel__delete(struct perf_evsel *evsel)
{
free(evsel);
}
#define FD(e, x, y) (*(int *) xyarray__entry(e->fd, x, y))
int perf_evsel__alloc_fd(struct perf_evsel *evsel, int ncpus, int nthreads)
{
evsel->fd = xyarray__new(ncpus, nthreads, sizeof(int));
if (evsel->fd) {
int cpu, thread;
for (cpu = 0; cpu < ncpus; cpu++) {
for (thread = 0; thread < nthreads; thread++) {
FD(evsel, cpu, thread) = -1;
}
}
}
return evsel->fd != NULL ? 0 : -ENOMEM;
}
static int
sys_perf_event_open(struct perf_event_attr *attr,
pid_t pid, int cpu, int group_fd,
unsigned long flags)
{
return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}
int perf_evsel__open(struct perf_evsel *evsel, struct perf_cpu_map *cpus,
struct perf_thread_map *threads)
{
int cpu, thread, err = 0;
if (cpus == NULL) {
static struct perf_cpu_map *empty_cpu_map;
if (empty_cpu_map == NULL) {
empty_cpu_map = perf_cpu_map__dummy_new();
if (empty_cpu_map == NULL)
return -ENOMEM;
}
cpus = empty_cpu_map;
}
if (threads == NULL) {
static struct perf_thread_map *empty_thread_map;
if (empty_thread_map == NULL) {
empty_thread_map = perf_thread_map__new_dummy();
if (empty_thread_map == NULL)
return -ENOMEM;
}
threads = empty_thread_map;
}
if (evsel->fd == NULL &&
perf_evsel__alloc_fd(evsel, cpus->nr, threads->nr) < 0)
return -ENOMEM;
for (cpu = 0; cpu < cpus->nr; cpu++) {
for (thread = 0; thread < threads->nr; thread++) {
int fd;
fd = sys_perf_event_open(&evsel->attr,
threads->map[thread].pid,
cpus->map[cpu], -1, 0);
if (fd < 0)
return -errno;
FD(evsel, cpu, thread) = fd;
}
}
return err;
}
void perf_evsel__close_fd(struct perf_evsel *evsel)
{
int cpu, thread;
for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++)
for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
close(FD(evsel, cpu, thread));
FD(evsel, cpu, thread) = -1;
}
}
void perf_evsel__free_fd(struct perf_evsel *evsel)
{
xyarray__delete(evsel->fd);
evsel->fd = NULL;
}
void perf_evsel__close(struct perf_evsel *evsel)
{
if (evsel->fd == NULL)
return;
perf_evsel__close_fd(evsel);
perf_evsel__free_fd(evsel);
}
int perf_evsel__read_size(struct perf_evsel *evsel)
{
u64 read_format = evsel->attr.read_format;
int entry = sizeof(u64); /* value */
int size = 0;
int nr = 1;
if (read_format & PERF_FORMAT_TOTAL_TIME_ENABLED)
size += sizeof(u64);
if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING)
size += sizeof(u64);
if (read_format & PERF_FORMAT_ID)
entry += sizeof(u64);
if (read_format & PERF_FORMAT_GROUP) {
nr = evsel->nr_members;
size += sizeof(u64);
}
size += entry * nr;
return size;
}
int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
struct perf_counts_values *count)
{
size_t size = perf_evsel__read_size(evsel);
memset(count, 0, sizeof(*count));
if (FD(evsel, cpu, thread) < 0)
return -EINVAL;
if (readn(FD(evsel, cpu, thread), count->values, size) <= 0)
return -errno;
return 0;
}
static int perf_evsel__run_ioctl(struct perf_evsel *evsel,
int ioc, void *arg)
{
int cpu, thread;
for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++) {
for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
int fd = FD(evsel, cpu, thread),
err = ioctl(fd, ioc, arg);
if (err)
return err;
}
}
return 0;
}
int perf_evsel__enable(struct perf_evsel *evsel)
{
return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, 0);
}
int perf_evsel__disable(struct perf_evsel *evsel)
{
return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, 0);
}
int perf_evsel__apply_filter(struct perf_evsel *evsel, const char *filter)
{
return perf_evsel__run_ioctl(evsel,
PERF_EVENT_IOC_SET_FILTER,
(void *)filter);
}
struct perf_cpu_map *perf_evsel__cpus(struct perf_evsel *evsel)
{
return evsel->cpus;
}
struct perf_thread_map *perf_evsel__threads(struct perf_evsel *evsel)
{
return evsel->threads;
}
struct perf_event_attr *perf_evsel__attr(struct perf_evsel *evsel)
{
return &evsel->attr;
}
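perf_evsel__read() reads perf_evsel__read_size() bytes straight into count->values[], so when attr.read_format carries both PERF_FORMAT_TOTAL_TIME_ENABLED and PERF_FORMAT_TOTAL_TIME_RUNNING before the event is opened, the size is 3 * sizeof(u64) and a single read fills val, ena and run of struct perf_counts_values. The helper below is hypothetical, not a libperf function; the ena/run scaling is the conventional correction for multiplexed counters and overflow is ignored.
.. code-block:: c

#include <linux/perf_event.h>
#include <perf/evsel.h>
#include <stdint.h>

/*
 * Hypothetical helper. Requires attr.read_format =
 *   PERF_FORMAT_TOTAL_TIME_ENABLED | PERF_FORMAT_TOTAL_TIME_RUNNING
 * so that perf_evsel__read_size() == sizeof(struct perf_counts_values).
 */
static uint64_t read_scaled_count(struct perf_evsel *evsel, int cpu, int thread)
{
	struct perf_counts_values counts;

	if (perf_evsel__read(evsel, cpu, thread, &counts) < 0 || !counts.run)
		return 0;

	/* scale by enabled/running time to correct for multiplexing */
	return counts.val * counts.ena / counts.run;
}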

tools/perf/lib/include/internal/cpumap.h

@@ -0,0 +1,19 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LIBPERF_INTERNAL_CPUMAP_H
#define __LIBPERF_INTERNAL_CPUMAP_H
#include <linux/refcount.h>
struct perf_cpu_map {
refcount_t refcnt;
int nr;
int map[];
};
#ifndef MAX_NR_CPUS
#define MAX_NR_CPUS 2048
#endif
int perf_cpu_map__idx(struct perf_cpu_map *cpus, int cpu);
#endif /* __LIBPERF_INTERNAL_CPUMAP_H */

tools/perf/lib/include/internal/evlist.h

@@ -0,0 +1,50 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LIBPERF_INTERNAL_EVLIST_H
#define __LIBPERF_INTERNAL_EVLIST_H
#include <linux/list.h>
struct perf_cpu_map;
struct perf_thread_map;
struct perf_evlist {
struct list_head entries;
int nr_entries;
bool has_user_cpus;
struct perf_cpu_map *cpus;
struct perf_thread_map *threads;
};
/**
* __perf_evlist__for_each_entry - iterate thru all the evsels
* @list: list_head instance to iterate
* @evsel: struct perf_evsel iterator
*/
#define __perf_evlist__for_each_entry(list, evsel) \
list_for_each_entry(evsel, list, node)
/**
* perf_evlist__for_each_entry - iterate thru all the evsels
* @evlist: perf_evlist instance to iterate
* @evsel: struct perf_evsel iterator
*/
#define perf_evlist__for_each_entry(evlist, evsel) \
__perf_evlist__for_each_entry(&(evlist)->entries, evsel)
/**
* __perf_evlist__for_each_entry_reverse - iterate thru all the evsels in reverse order
* @list: list_head instance to iterate
* @evsel: struct perf_evsel iterator
*/
#define __perf_evlist__for_each_entry_reverse(list, evsel) \
list_for_each_entry_reverse(evsel, list, node)
/**
* perf_evlist__for_each_entry_reverse - iterate thru all the evsels in reverse order
* @evlist: perf_evlist instance to iterate
* @evsel: struct perf_evsel iterator
*/
#define perf_evlist__for_each_entry_reverse(evlist, evsel) \
__perf_evlist__for_each_entry_reverse(&(evlist)->entries, evsel)
#endif /* __LIBPERF_INTERNAL_EVLIST_H */

tools/perf/lib/include/internal/evsel.h

@@ -0,0 +1,29 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LIBPERF_INTERNAL_EVSEL_H
#define __LIBPERF_INTERNAL_EVSEL_H
#include <linux/types.h>
#include <linux/perf_event.h>
struct perf_cpu_map;
struct perf_thread_map;
struct perf_evsel {
struct list_head node;
struct perf_event_attr attr;
struct perf_cpu_map *cpus;
struct perf_cpu_map *own_cpus;
struct perf_thread_map *threads;
struct xyarray *fd;
/* parse modifier helper */
int nr_members;
};
int perf_evsel__alloc_fd(struct perf_evsel *evsel, int ncpus, int nthreads);
void perf_evsel__close_fd(struct perf_evsel *evsel);
void perf_evsel__free_fd(struct perf_evsel *evsel);
int perf_evsel__read_size(struct perf_evsel *evsel);
int perf_evsel__apply_filter(struct perf_evsel *evsel, const char *filter);
#endif /* __LIBPERF_INTERNAL_EVSEL_H */

tools/perf/lib/include/internal/lib.h

@@ -0,0 +1,10 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LIBPERF_INTERNAL_LIB_H
#define __LIBPERF_INTERNAL_LIB_H
#include <unistd.h>
ssize_t readn(int fd, void *buf, size_t n);
ssize_t writen(int fd, const void *buf, size_t n);
#endif /* __LIBPERF_INTERNAL_LIB_H */

tools/perf/lib/include/internal/tests.h

@@ -0,0 +1,19 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LIBPERF_INTERNAL_TESTS_H
#define __LIBPERF_INTERNAL_TESTS_H
#include <stdio.h>
#define __T_START fprintf(stdout, "- running %s...", __FILE__)
#define __T_OK fprintf(stdout, "OK\n")
#define __T_FAIL fprintf(stdout, "FAIL\n")
#define __T(text, cond) \
do { \
if (!(cond)) { \
fprintf(stderr, "FAILED %s:%d %s\n", __FILE__, __LINE__, text); \
return -1; \
} \
} while (0)
#endif /* __LIBPERF_INTERNAL_TESTS_H */
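The __T* helpers give the libperf tests a uniform running/OK/FAIL output. A hypothetical test written against them could look like the sketch below (not one of the shipped tests; it assumes the internal include path is passed on the compiler command line):
.. code-block:: c

#include <internal/tests.h>
#include <perf/cpumap.h>

int main(void)
{
	struct perf_cpu_map *cpus;

	__T_START;

	cpus = perf_cpu_map__dummy_new();
	__T("failed to create dummy cpu map", cpus != NULL);
	__T("dummy map must hold exactly one entry", perf_cpu_map__nr(cpus) == 1);
	perf_cpu_map__put(cpus);

	__T_OK;
	return 0;
}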

tools/perf/lib/include/internal/threadmap.h

@@ -0,0 +1,23 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LIBPERF_INTERNAL_THREADMAP_H
#define __LIBPERF_INTERNAL_THREADMAP_H
#include <linux/refcount.h>
#include <sys/types.h>
#include <unistd.h>
struct thread_map_data {
pid_t pid;
char *comm;
};
struct perf_thread_map {
refcount_t refcnt;
int nr;
int err_thread;
struct thread_map_data map[];
};
struct perf_thread_map *perf_thread_map__realloc(struct perf_thread_map *map, int nr);
#endif /* __LIBPERF_INTERNAL_THREADMAP_H */

Some files were not shown because too many files have changed in this diff.