Using TopDown metrics in user space
-----------------------------------

Intel CPUs (since Sandy Bridge and Silvermont) support a TopDown
methodology to break down CPU pipeline execution into 4 bottlenecks:
frontend bound, backend bound, bad speculation, retiring.

For more details on TopDown see [1][5].

Traditionally this was implemented by events in generic counters
and specific formulas to compute the bottlenecks.
perf stat --topdown implements this.
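
For reference, the classic level 1 formulas (described in [1]) look
roughly like the following on a 4-wide pre-Ice Lake core. The exact event
names vary between CPU generations, so treat this as a sketch rather than
a definitive recipe:

  SLOTS           = 4 * CPU_CLK_UNHALTED.THREAD
  Retiring        = UOPS_RETIRED.RETIRE_SLOTS / SLOTS
  Frontend Bound  = IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS
  Bad Speculation = (UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS +
                     4 * INT_MISC.RECOVERY_CYCLES) / SLOTS
  Backend Bound   = 1 - Retiring - Frontend Bound - Bad Speculation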

Full TopDown includes more levels that can break down the
bottlenecks further. This is not directly implemented in perf,
but available in other tools that can run on top of perf,
such as toplev[2] or vtune[3].

New TopDown features in Ice Lake
================================

With Ice Lake CPUs the TopDown metrics are directly available as
fixed counters and do not require generic counters. This allows
TopDown to be collected at all times, in addition to other events.

% perf stat -a --topdown -I1000
#           time     retiring   bad speculation   frontend bound   backend bound
     1.001281330        23.0%             15.3%            29.6%           32.1%
     2.003009005         5.0%              6.8%            46.6%           41.6%
     3.004646182         6.7%              6.7%            46.0%           40.6%
     4.006326375         5.0%              6.4%            47.6%           41.0%
     5.007991804         5.1%              6.3%            46.3%           42.3%
     6.009626773         6.2%              7.1%            47.3%           39.3%
     7.011296356         4.7%              6.7%            46.2%           42.4%
     8.012951831         4.7%              6.7%            47.5%           41.1%
...

This also enables measuring TopDown per thread/process instead
of only per core.
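
For example, to measure a single workload instead of the whole system
(the workload name here is only a placeholder):

% perf stat --topdown -- ./my_workload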

Using TopDown through RDPMC in applications on Ice Lake
========================================================

For more fine grained measurements it can be useful to access the new
counters directly from user space. This is more complicated, but
drastically lowers overhead.

On Ice Lake, there is a new fixed counter 3: SLOTS, which reports
"pipeline SLOTS" (cycles multiplied by core issue width) and a
metric register that reports slots ratios for the different bottleneck
categories.

The metrics counter is CPU model specific and is not available on older
CPUs.

Example code
============

Library functions providing the functionality described below
are also available in libjevents[4].

The application opens a group with fixed counter 3 (SLOTS) and any
metric event, and allows user programs to read the performance counters.

Fixed counter 3 is mapped to the pseudo event event=0x00, umask=0x04,
so the perf_event_attr structure should be initialized with
{ .config = 0x0400, .type = PERF_TYPE_RAW }

The metric events are mapped to the pseudo event event=0x00, umask=0x8X.
For example, the perf_event_attr structure can be initialized with
{ .config = 0x8000, .type = PERF_TYPE_RAW } for the Retiring metric event.
Fixed counter 3 must be the leader of the group.

#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Provide own perf_event_open stub because glibc doesn't */
__attribute__((weak))
int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                    int cpu, int group_fd, unsigned long flags)
{
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

/* Open slots counter file descriptor for current task. */
struct perf_event_attr slots = {
        .type = PERF_TYPE_RAW,
        .size = sizeof(struct perf_event_attr),
        .config = 0x400,
        .exclude_kernel = 1,
};

int slots_fd = perf_event_open(&slots, 0, -1, -1, 0);
if (slots_fd < 0)
        ... error ...

/* Memory mapping the fd permits _rdpmc calls from user space */
void *slots_p = mmap(0, getpagesize(), PROT_READ, MAP_SHARED, slots_fd, 0);
if (slots_p == MAP_FAILED)
        ... error ...

/*
 * Open metrics event file descriptor for current task.
 * Set slots event as the leader of the group.
 */
struct perf_event_attr metrics = {
        .type = PERF_TYPE_RAW,
        .size = sizeof(struct perf_event_attr),
        .config = 0x8000,
        .exclude_kernel = 1,
};

int metrics_fd = perf_event_open(&metrics, 0, -1, slots_fd, 0);
if (metrics_fd < 0)
        ... error ...

/* Memory mapping the fd permits _rdpmc calls from user space */
void *metrics_p = mmap(0, getpagesize(), PROT_READ, MAP_SHARED,
                       metrics_fd, 0);
if (metrics_p == MAP_FAILED)
        ... error ...

Note: the file descriptors returned by the perf_event_open calls must be
memory mapped to permit calls to the RDPMC instruction. Permission may
also be granted by writing to the /sys/devices/cpu/rdpmc sysfs node.
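
Before relying on RDPMC it can also be worth checking the capability bit
the kernel exports in the mapped page. This is only a sketch, assuming the
slots_p mapping from the example above:

struct perf_event_mmap_page *slots_page = slots_p;

/* cap_user_rdpmc is set by the kernel when user-space RDPMC is allowed
 * for this event; otherwise fall back to read() on slots_fd. */
if (!slots_page->cap_user_rdpmc)
        ... fall back to the syscall interface ...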

The RDPMC instruction (or the _rdpmc compiler intrinsic) can now be used
to read slots and the TopDown metrics at different points of the program:

#include <stdint.h>
#include <x86intrin.h>

#define RDPMC_FIXED     (1 << 30)       /* return fixed counters */
#define RDPMC_METRIC    (1 << 29)       /* return metric counters */

#define FIXED_COUNTER_SLOTS             3
#define METRIC_COUNTER_TOPDOWN_L1_L2    0

static inline uint64_t read_slots(void)
{
        return _rdpmc(RDPMC_FIXED | FIXED_COUNTER_SLOTS);
}

static inline uint64_t read_metrics(void)
{
        return _rdpmc(RDPMC_METRIC | METRIC_COUNTER_TOPDOWN_L1_L2);
}

Then the program can be instrumented to read these metrics at different
points.

It's not a good idea to do this with too short code regions,
as the parallelism and overlap in the CPU program execution will
cause too much measurement inaccuracy. For example instrumenting
individual basic blocks is definitely too fine grained.

_rdpmc calls should not be mixed with reading the metrics and slots
counters through system calls, as the kernel will reset these counters
after each system call.

Decoding metrics values
=======================

The value reported by read_metrics() contains four 8-bit fields, each
representing a scaled ratio for one of the Level 1 bottlenecks.
All four fields add up to 0xff (= 100%).

The binary ratios in the metric value can be converted to float ratios:

#define GET_METRIC(m, i) (((m) >> (i*8)) & 0xff)

/* L1 Topdown metric events */
#define TOPDOWN_RETIRING(val)   ((float)GET_METRIC(val, 0) / 0xff)
#define TOPDOWN_BAD_SPEC(val)   ((float)GET_METRIC(val, 1) / 0xff)
#define TOPDOWN_FE_BOUND(val)   ((float)GET_METRIC(val, 2) / 0xff)
#define TOPDOWN_BE_BOUND(val)   ((float)GET_METRIC(val, 3) / 0xff)

/*
 * L2 Topdown metric events.
 * Available on Sapphire Rapids and later platforms.
 */
#define TOPDOWN_HEAVY_OPS(val)          ((float)GET_METRIC(val, 4) / 0xff)
#define TOPDOWN_BR_MISPREDICT(val)      ((float)GET_METRIC(val, 5) / 0xff)
#define TOPDOWN_FETCH_LAT(val)          ((float)GET_METRIC(val, 6) / 0xff)
#define TOPDOWN_MEM_BOUND(val)          ((float)GET_METRIC(val, 7) / 0xff)

and then converted to percent for printing.
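
For instance, a small helper along these lines (only a sketch built on the
macros above; the function name is made up for this example) prints the
Level 1 breakdown accumulated since the counters were last reset:

#include <stdio.h>

static void print_l1_metrics(uint64_t metrics)
{
        /* Decode the four 8-bit fields and print them as percentages. */
        printf("Retiring %.2f%% Bad Speculation %.2f%% "
               "FE Bound %.2f%% BE Bound %.2f%%\n",
               TOPDOWN_RETIRING(metrics) * 100.,
               TOPDOWN_BAD_SPEC(metrics) * 100.,
               TOPDOWN_FE_BOUND(metrics) * 100.,
               TOPDOWN_BE_BOUND(metrics) * 100.);
}

For example, print_l1_metrics(read_metrics()) prints the breakdown for the
current measurement period.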

The ratios in the metric accumulate for the time when the counter
is enabled. For measuring programs it is often useful to measure
specific sections. For this it is necessary to compute deltas of the
metrics. This can be done by scaling the metrics with the slots counter
read at the same time. Then it's possible to take deltas of these slots
counts measured at different points, and determine the metrics
for that time period.

slots_a = read_slots();
metric_a = read_metrics();

... larger code region ...

slots_b = read_slots();
metric_b = read_metrics();

# compute scaled metrics for measurement a
retiring_slots_a = GET_METRIC(metric_a, 0) * slots_a
bad_spec_slots_a = GET_METRIC(metric_a, 1) * slots_a
fe_bound_slots_a = GET_METRIC(metric_a, 2) * slots_a
be_bound_slots_a = GET_METRIC(metric_a, 3) * slots_a

# compute delta scaled metrics between b and a
retiring_slots = GET_METRIC(metric_b, 0) * slots_b - retiring_slots_a
bad_spec_slots = GET_METRIC(metric_b, 1) * slots_b - bad_spec_slots_a
fe_bound_slots = GET_METRIC(metric_b, 2) * slots_b - fe_bound_slots_a
be_bound_slots = GET_METRIC(metric_b, 3) * slots_b - be_bound_slots_a

Later the individual ratios of the L1 metric events for the measurement
period can be recreated from these counts.

slots_delta = slots_b - slots_a

# scale the 0..0xff field encoding back to a 0..1 ratio
retiring_ratio = (float)retiring_slots / slots_delta / 0xff
bad_spec_ratio = (float)bad_spec_slots / slots_delta / 0xff
fe_bound_ratio = (float)fe_bound_slots / slots_delta / 0xff
be_bound_ratio = (float)be_bound_slots / slots_delta / 0xff

printf("Retiring %.2f%% Bad Speculation %.2f%% FE Bound %.2f%% BE Bound %.2f%%\n",
       retiring_ratio * 100.,
       bad_spec_ratio * 100.,
       fe_bound_ratio * 100.,
       be_bound_ratio * 100.);
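
The same bookkeeping can be wrapped in a pair of small helpers. The struct
and function names below are made up for this sketch and are not part of
any API:

struct topdown_sample {
        uint64_t slots;
        uint64_t metrics;
};

static inline struct topdown_sample topdown_read(void)
{
        struct topdown_sample s;

        s.slots = read_slots();
        s.metrics = read_metrics();
        return s;
}

/* Ratio of one L1 field (0..3) over the period between samples a and b. */
static float topdown_ratio(struct topdown_sample a, struct topdown_sample b,
                           int field)
{
        /* Convert the 8-bit fields to fractions and scale by total slots. */
        float field_slots_a = (float)GET_METRIC(a.metrics, field) / 0xff * a.slots;
        float field_slots_b = (float)GET_METRIC(b.metrics, field) / 0xff * b.slots;

        return (field_slots_b - field_slots_a) / (float)(b.slots - a.slots);
}

The retiring ratio over a region is then topdown_ratio(a, b, 0), with a and
b read immediately before and after the region.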

The individual ratios of the L2 metric events for the measurement period
can be recreated from the L1 and L2 metric counters. (Available on
Sapphire Rapids and later platforms.)

# compute scaled metrics for measurement a
heavy_ops_slots_a = GET_METRIC(metric_a, 4) * slots_a
br_mispredict_slots_a = GET_METRIC(metric_a, 5) * slots_a
fetch_lat_slots_a = GET_METRIC(metric_a, 6) * slots_a
mem_bound_slots_a = GET_METRIC(metric_a, 7) * slots_a

# compute delta scaled metrics between b and a
heavy_ops_slots = GET_METRIC(metric_b, 4) * slots_b - heavy_ops_slots_a
br_mispredict_slots = GET_METRIC(metric_b, 5) * slots_b - br_mispredict_slots_a
fetch_lat_slots = GET_METRIC(metric_b, 6) * slots_b - fetch_lat_slots_a
mem_bound_slots = GET_METRIC(metric_b, 7) * slots_b - mem_bound_slots_a

# compute the measured L2 ratios (again scaling back from the 0..0xff
# encoding); the remaining L2 ratios follow by subtraction from L1
slots_delta = slots_b - slots_a
heavy_ops_ratio = (float)heavy_ops_slots / slots_delta / 0xff
light_ops_ratio = retiring_ratio - heavy_ops_ratio
br_mispredict_ratio = (float)br_mispredict_slots / slots_delta / 0xff
machine_clears_ratio = bad_spec_ratio - br_mispredict_ratio
fetch_lat_ratio = (float)fetch_lat_slots / slots_delta / 0xff
fetch_bw_ratio = fe_bound_ratio - fetch_lat_ratio
mem_bound_ratio = (float)mem_bound_slots / slots_delta / 0xff
core_bound_ratio = be_bound_ratio - mem_bound_ratio

printf("Heavy Operations %.2f%% Light Operations %.2f%% "
       "Branch Mispredict %.2f%% Machine Clears %.2f%% "
       "Fetch Latency %.2f%% Fetch Bandwidth %.2f%% "
       "Mem Bound %.2f%% Core Bound %.2f%%\n",
       heavy_ops_ratio * 100.,
       light_ops_ratio * 100.,
       br_mispredict_ratio * 100.,
       machine_clears_ratio * 100.,
       fetch_lat_ratio * 100.,
       fetch_bw_ratio * 100.,
       mem_bound_ratio * 100.,
       core_bound_ratio * 100.);

Resetting metrics counters
==========================

Since the individual metrics are only 8 bits they lose precision for
short regions over time because the number of cycles covered by each
fraction bit shrinks. So the counters need to be reset regularly.

When using the kernel perf API the kernel resets on every read.
So as long as the reading is at reasonable intervals (every few
seconds) the precision is good.

When using perf stat it is recommended to always use the -I option,
with an interval no longer than a few seconds:

perf stat -I 1000 --topdown ...

For user programs using RDPMC directly the counter can
be reset explicitly using ioctl:

ioctl(perf_fd, PERF_EVENT_IOC_RESET, 0);

This "opens" a new measurement period.

A program using RDPMC for TopDown should schedule such a reset
regularly, as in every few seconds.
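
As a sketch, assuming the slots_fd descriptor and the read helpers from the
example above, a program could reset the whole group and start a fresh
baseline like this:

#include <sys/ioctl.h>

/* Reset all counters in the group and take new baseline readings. */
ioctl(slots_fd, PERF_EVENT_IOC_RESET, PERF_IOC_FLAG_GROUP);
slots_a = read_slots();
metric_a = read_metrics();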

Limits on Ice Lake
==================

Four pseudo TopDown metric events are exposed to end users:
topdown-retiring, topdown-bad-spec, topdown-fe-bound and topdown-be-bound.
They can be used to collect the TopDown value under the following
rules:

- All the TopDown metric events must be in a group with the SLOTS event.
- The SLOTS event must be the leader of the group.
- The PERF_FORMAT_GROUP flag must be applied to each TopDown metric
  event.

The SLOTS event and the TopDown metric events can be counting members of
a sampling read group. Since the SLOTS event must be the leader of a TopDown
group, the second event of the group is the sampling event.
For example, perf record -e '{slots, $sampling_event, topdown-retiring}:S'
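
Following the rules above, such a group can also be read through the
syscall interface instead of RDPMC. The snippet below is only a sketch: it
reuses the raw pseudo event encodings from the earlier example, sets
PERF_FORMAT_GROUP on both events, and assumes no other read_format flags:

struct perf_event_attr slots = {
        .type = PERF_TYPE_RAW,
        .size = sizeof(struct perf_event_attr),
        .config = 0x400,                /* SLOTS */
        .exclude_kernel = 1,
        .read_format = PERF_FORMAT_GROUP,
};
struct perf_event_attr retiring = {
        .type = PERF_TYPE_RAW,
        .size = sizeof(struct perf_event_attr),
        .config = 0x8000,               /* retiring metric (topdown-retiring) */
        .exclude_kernel = 1,
        .read_format = PERF_FORMAT_GROUP,
};

int slots_fd = perf_event_open(&slots, 0, -1, -1, 0);
int retiring_fd = perf_event_open(&retiring, 0, -1, slots_fd, 0);

/* With PERF_FORMAT_GROUP a read() of the leader returns the number of
 * events followed by one value per group member. */
struct {
        uint64_t nr;
        uint64_t values[2];     /* SLOTS, then the retiring metric */
} group;

if (read(slots_fd, &group, sizeof(group)) < 0)
        ... error ...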

Extension on Sapphire Rapids Server
===================================

The metrics counter is extended to support TMA method level 2 metrics.
The lower half of the register is the TMA level 1 metrics (legacy).
The upper half is also divided into four 8-bit fields for the new level 2
metrics. Four more TopDown metric events are exposed to end users:
topdown-heavy-ops, topdown-br-mispredict, topdown-fetch-lat and
topdown-mem-bound.

Each of the new level 2 metrics in the upper half is a subset of the
corresponding level 1 metric in the lower half. Software can deduce the
other four level 2 metrics by subtracting the corresponding metrics as below.

Light_Operations = Retiring - Heavy_Operations
Machine_Clears = Bad_Speculation - Branch_Mispredicts
Fetch_Bandwidth = Frontend_Bound - Fetch_Latency
Core_Bound = Backend_Bound - Memory_Bound

[1] https://software.intel.com/en-us/top-down-microarchitecture-analysis-method-win
[2] https://github.com/andikleen/pmu-tools/wiki/toplev-manual
[3] https://software.intel.com/en-us/intel-vtune-amplifier-xe
[4] https://github.com/andikleen/pmu-tools/tree/master/jevents
[5] https://sites.google.com/site/analysismethods/yasin-pubs