llvm-mca - LLVM Machine Code Analyzer
=====================================

.. program:: llvm-mca

SYNOPSIS
--------

:program:`llvm-mca` [*options*] [input]

DESCRIPTION
-----------

:program:`llvm-mca` is a performance analysis tool that uses information
available in LLVM (e.g. scheduling models) to statically measure the performance
of machine code on a specific CPU.

Performance is measured in terms of throughput as well as processor resource
consumption. The tool currently works for processors with an out-of-order
backend, for which there is a scheduling model available in LLVM.

The main goal of this tool is not just to predict the performance of the code
when run on the target, but also to help with diagnosing potential performance
issues.

Given an assembly code sequence, :program:`llvm-mca` estimates the Instructions
Per Cycle (IPC), as well as hardware resource pressure. The analysis and
reporting style were inspired by the IACA tool from Intel.

For example, you can compile code with clang, output assembly, and pipe it
directly into :program:`llvm-mca` for analysis:

.. code-block:: bash

  $ clang foo.c -O2 -target x86_64-unknown-unknown -S -o - | llvm-mca -mcpu=btver2

Or for Intel syntax:

.. code-block:: bash

  $ clang foo.c -O2 -target x86_64-unknown-unknown -mllvm -x86-asm-syntax=intel -S -o - | llvm-mca -mcpu=btver2

(:program:`llvm-mca` detects Intel syntax by the presence of an `.intel_syntax`
directive at the beginning of the input. By default, its output syntax matches
that of its input.)

Scheduling models are not just used to compute instruction latencies and
throughput, but also to understand what processor resources are available
and how to simulate them.

By design, the quality of the analysis conducted by :program:`llvm-mca` is
inevitably affected by the quality of the scheduling models in LLVM.

If you see that the performance report is not accurate for a processor,
please `file a bug <https://bugs.llvm.org/enter_bug.cgi?product=libraries>`_
against the appropriate backend.

OPTIONS
-------

If ``input`` is "``-``" or omitted, :program:`llvm-mca` reads from standard
input. Otherwise, it will read from the specified filename.

If the :option:`-o` option is omitted, then :program:`llvm-mca` will send its
output to standard output if the input is from standard input. If the
:option:`-o` option specifies "``-``", then the output will also be sent to
standard output.


.. option:: -help

  Print a summary of command line options.

.. option:: -o <filename>

  Use ``<filename>`` as the output filename. See the summary above for more
  details.

.. option:: -mtriple=<target triple>

  Specify a target triple string.

.. option:: -march=<arch>

  Specify the architecture for which to analyze the code. It defaults to the
  host default target.

.. option:: -mcpu=<cpuname>

  Specify the processor for which to analyze the code. By default, the CPU name
  is autodetected from the host.

.. option:: -output-asm-variant=<variant id>

  Specify the output assembly variant for the report generated by the tool.
  On x86, possible values are [0, 1]. A value of 0 selects the AT&T assembly
  format, while a value of 1 selects the Intel assembly format for the code
  printed out by the tool in the analysis report.
.. option:: -print-imm-hex

  Prefer hex format for numeric literals in the output assembly printed as part
  of the report.

.. option:: -dispatch=<width>

  Specify a different dispatch width for the processor. The dispatch width
  defaults to field 'IssueWidth' in the processor scheduling model. If width is
  zero, then the default dispatch width is used.

.. option:: -register-file-size=<size>

  Specify the size of the register file. When specified, this flag limits how
  many physical registers are available for register renaming purposes. A value
  of zero for this flag means "unlimited number of physical registers".

.. option:: -iterations=<number of iterations>

  Specify the number of iterations to run. If this flag is set to 0, then the
  tool sets the number of iterations to a default value (i.e. 100).

.. option:: -noalias=<bool>

  If set, the tool assumes that loads and stores don't alias. This is the
  default behavior.

.. option:: -lqueue=<load queue size>

  Specify the size of the load queue in the load/store unit emulated by the
  tool. By default, the tool assumes an unbounded number of entries in the load
  queue. A value of zero for this flag is ignored, and the default load queue
  size is used instead.

.. option:: -squeue=<store queue size>

  Specify the size of the store queue in the load/store unit emulated by the
  tool. By default, the tool assumes an unbounded number of entries in the
  store queue. A value of zero for this flag is ignored, and the default store
  queue size is used instead.

.. option:: -timeline

  Enable the timeline view.

.. option:: -timeline-max-iterations=<iterations>

  Limit the number of iterations to print in the timeline view. By default, the
  timeline view prints information for up to 10 iterations.

.. option:: -timeline-max-cycles=<cycles>

  Limit the number of cycles in the timeline view. By default, the number of
  cycles is set to 80.

.. option:: -resource-pressure

  Enable the resource pressure view. This is enabled by default.

.. option:: -register-file-stats

  Enable register file usage statistics.

.. option:: -dispatch-stats

  Enable extra dispatch statistics. This view collects and analyzes instruction
  dispatch events, as well as static/dynamic dispatch stall events. This view
  is disabled by default.

.. option:: -scheduler-stats

  Enable extra scheduler statistics. This view collects and analyzes instruction
  issue events. This view is disabled by default.

.. option:: -retire-stats

  Enable extra retire control unit statistics. This view is disabled by default.

.. option:: -instruction-info

  Enable the instruction info view. This is enabled by default.

.. option:: -show-encoding

  Enable the printing of instruction encodings within the instruction info view.

.. option:: -all-stats

  Print all hardware statistics. This enables extra statistics related to the
  dispatch logic, the hardware schedulers, the register file(s), and the retire
  control unit. This option is disabled by default.

.. option:: -all-views

  Enable all the views.

.. option:: -instruction-tables

  Prints resource pressure information based on the static information
  available from the processor model. This differs from the resource pressure
  view because it doesn't require that the code is simulated. It instead prints
  the theoretical uniform distribution of resource pressure for every
  instruction in sequence.

.. option:: -bottleneck-analysis

  Print information about bottlenecks that affect the throughput. This analysis
  can be expensive, and it is disabled by default. Bottlenecks are highlighted
  in the summary view.
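Most of these views can be combined in a single run. For example, the following
invocation (illustrative only; the iteration count and the input file name are
arbitrary) enables the timeline view together with all the extra hardware
statistics:

.. code-block:: bash

  $ llvm-mca -mcpu=btver2 -iterations=200 -timeline -all-stats foo.s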
EXIT STATUS
-----------

:program:`llvm-mca` returns 0 on success. Otherwise, an error message is printed
to standard error, and the tool returns 1.

USING MARKERS TO ANALYZE SPECIFIC CODE BLOCKS
---------------------------------------------
:program:`llvm-mca` allows for the optional usage of special code comments to
mark regions of the assembly code to be analyzed. A comment starting with
substring ``LLVM-MCA-BEGIN`` marks the beginning of a code region. A comment
starting with substring ``LLVM-MCA-END`` marks the end of a code region. For
example:

.. code-block:: none

  # LLVM-MCA-BEGIN
    ...
  # LLVM-MCA-END

If no user-defined region is specified, then :program:`llvm-mca` assumes a
default region which contains every instruction in the input file. Every region
is analyzed in isolation, and the final performance report is the union of all
the reports generated for every code region.

Code regions can have names. For example:

.. code-block:: none

  # LLVM-MCA-BEGIN A simple example
    add %eax, %eax
  # LLVM-MCA-END

The code from the example above defines a region named "A simple example" with a
single instruction in it. Note how the region name doesn't have to be repeated
in the ``LLVM-MCA-END`` directive. In the absence of overlapping regions,
an anonymous ``LLVM-MCA-END`` directive always ends the currently active user
defined region.

Example of nesting regions:

.. code-block:: none

  # LLVM-MCA-BEGIN foo
    add %eax, %edx
  # LLVM-MCA-BEGIN bar
    sub %eax, %edx
  # LLVM-MCA-END bar
  # LLVM-MCA-END foo

Example of overlapping regions:

.. code-block:: none

  # LLVM-MCA-BEGIN foo
    add %eax, %edx
  # LLVM-MCA-BEGIN bar
    sub %eax, %edx
  # LLVM-MCA-END foo
    add %eax, %edx
  # LLVM-MCA-END bar

Note that multiple anonymous regions cannot overlap. Also, overlapping regions
cannot have the same name.

There is no support for marking regions from high-level source code, like C or
C++. As a workaround, inline assembly directives may be used:

.. code-block:: c++

  int foo(int a, int b) {
    __asm volatile("# LLVM-MCA-BEGIN foo");
    a += 42;
    __asm volatile("# LLVM-MCA-END");
    a *= b;
    return a;
  }

However, this interferes with optimizations like loop vectorization and may have
an impact on the code generated. This is because the ``__asm`` statements are
seen as real code having important side effects, which limits how the code
around them can be transformed. If users want to make use of inline assembly
to emit markers, then the recommendation is to always verify that the output
assembly is equivalent to the assembly generated in the absence of markers.
The `Clang options to emit optimization reports <https://clang.llvm.org/docs/UsersManual.html#options-to-emit-optimization-reports>`_
can also help in detecting missed optimizations.
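One way to perform that verification is to compile the code twice, once with
the markers and once without, strip the marker comments, and diff the two
listings. This is only a sketch; the source file names are hypothetical:

.. code-block:: bash

  # Compile the same function with and without the LLVM-MCA markers.
  $ clang foo-with-markers.c -O2 -target x86_64-unknown-unknown -S -o with-markers.s
  $ clang foo-without-markers.c -O2 -target x86_64-unknown-unknown -S -o without-markers.s

  # Drop the marker comments, then compare; any remaining difference means
  # the markers perturbed code generation.
  $ grep -v 'LLVM-MCA-' with-markers.s | diff - without-markers.s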
HOW LLVM-MCA WORKS
------------------

:program:`llvm-mca` takes assembly code as input. The assembly code is parsed
into a sequence of MCInst with the help of the existing LLVM target assembly
parsers. The parsed sequence of MCInst is then analyzed by a ``Pipeline`` module
to generate a performance report.

The Pipeline module simulates the execution of the machine code sequence in a
loop of iterations (default is 100). During this process, the pipeline collects
a number of execution related statistics. At the end of this process, the
pipeline generates and prints a report from the collected statistics.

Here is an example of a performance report generated by the tool for a
dot-product of two packed float vectors of four elements. The analysis is
conducted for target x86, cpu btver2. The report below was produced with the
following command, using the example located at
``test/tools/llvm-mca/X86/BtVer2/dot-product.s``:

.. code-block:: bash

  $ llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -iterations=300 dot-product.s

.. code-block:: none

  Iterations:        300
  Instructions:      900
  Total Cycles:      610
  Total uOps:        900

  Dispatch Width:    2
  uOps Per Cycle:    1.48
  IPC:               1.48
  Block RThroughput: 2.0


  Instruction Info:
  [1]: #uOps
  [2]: Latency
  [3]: RThroughput
  [4]: MayLoad
  [5]: MayStore
  [6]: HasSideEffects (U)

  [1]    [2]    [3]    [4]    [5]    [6]    Instructions:
   1      2     1.00                        vmulps   %xmm0, %xmm1, %xmm2
   1      3     1.00                        vhaddps  %xmm2, %xmm2, %xmm3
   1      3     1.00                        vhaddps  %xmm3, %xmm3, %xmm4


  Resources:
  [0]   - JALU0
  [1]   - JALU1
  [2]   - JDiv
  [3]   - JFPA
  [4]   - JFPM
  [5]   - JFPU0
  [6]   - JFPU1
  [7]   - JLAGU
  [8]   - JMul
  [9]   - JSAGU
  [10]  - JSTC
  [11]  - JVALU0
  [12]  - JVALU1
  [13]  - JVIMUL


  Resource pressure per iteration:
  [0]    [1]    [2]    [3]    [4]    [5]    [6]    [7]    [8]    [9]    [10]   [11]   [12]   [13]
   -      -      -     2.00   1.00   2.00   1.00    -      -      -      -      -      -      -

  Resource pressure by instruction:
  [0]    [1]    [2]    [3]    [4]    [5]    [6]    [7]    [8]    [9]    [10]   [11]   [12]   [13]   Instructions:
   -      -      -      -     1.00    -     1.00    -      -      -      -      -      -      -     vmulps   %xmm0, %xmm1, %xmm2
   -      -      -     1.00    -     1.00    -      -      -      -      -      -      -      -     vhaddps  %xmm2, %xmm2, %xmm3
   -      -      -     1.00    -     1.00    -      -      -      -      -      -      -      -     vhaddps  %xmm3, %xmm3, %xmm4

According to this report, the dot-product kernel has been executed 300 times,
for a total of 900 simulated instructions. The total number of simulated micro
opcodes (uOps) is also 900.

The report is structured in three main sections. The first section collects a
few performance numbers; the goal of this section is to give a very quick
overview of the performance throughput. Important performance indicators are
**IPC**, **uOps Per Cycle**, and **Block RThroughput** (Block Reciprocal
Throughput).

Field *Dispatch Width* is the maximum number of micro opcodes that are
dispatched to the out-of-order backend every simulated cycle.

IPC is computed by dividing the total number of simulated instructions by the
total number of cycles.

Field *Block RThroughput* is the reciprocal of the block throughput. Block
throughput is a theoretical quantity computed as the maximum number of blocks
(i.e. iterations) that can be executed per simulated clock cycle in the absence
of loop-carried dependencies. Block throughput is bounded from above by the
dispatch rate, and the availability of hardware resources.

In the absence of loop-carried data dependencies, the observed IPC tends to a
theoretical maximum which can be computed by dividing the number of instructions
of a single iteration by the `Block RThroughput`.

Field 'uOps Per Cycle' is computed by dividing the total number of simulated
micro opcodes by the total number of cycles. A delta between Dispatch Width and
this field is an indicator of a performance issue. In the absence of
loop-carried data dependencies, the observed 'uOps Per Cycle' should tend to a
theoretical maximum throughput which can be computed by dividing the number of
uOps of a single iteration by the `Block RThroughput`.

Field 'uOps Per Cycle' is bounded from above by the dispatch width. That is
because the dispatch width limits the maximum size of a dispatch group. Both IPC
and 'uOps Per Cycle' are limited by the amount of hardware parallelism. The
availability of hardware resources affects the resource pressure distribution,
and it limits the number of instructions that can be executed in parallel every
cycle. A delta between Dispatch Width and the theoretical maximum uOps per
Cycle (computed by dividing the number of uOps of a single iteration by the
`Block RThroughput`) is an indicator of a performance bottleneck caused by the
lack of hardware resources. In general, the lower the Block RThroughput, the
better.

In this example, ``uOps per iteration/Block RThroughput`` is 1.50. Since there
are no loop-carried dependencies, the observed `uOps Per Cycle` is expected to
approach 1.50 when the number of iterations tends to infinity. The delta between
the Dispatch Width (2.00), and the theoretical maximum uOp throughput (1.50) is
an indicator of a performance bottleneck caused by the lack of hardware
resources, and the *Resource pressure view* can help to identify the problematic
resource usage.
The second section of the report is the `instruction info view`. It shows the
latency and reciprocal throughput of every instruction in the sequence. It also
reports extra information related to the number of micro opcodes, and opcode
properties (i.e., 'MayLoad', 'MayStore', and 'HasSideEffects').

Field *RThroughput* is the reciprocal of the instruction throughput. Throughput
is computed as the maximum number of instructions of the same type that can be
executed per clock cycle in the absence of operand dependencies. In this
example, the reciprocal throughput of a vector float multiply is 1
cycle/instruction. That is because the FP multiplier JFPM is only available
from pipeline JFPU1.

Instruction encodings are displayed within the instruction info view when flag
`-show-encoding` is specified.

Below is an example of `-show-encoding` output for the dot-product kernel:

.. code-block:: none

  Instruction Info:
  [1]: #uOps
  [2]: Latency
  [3]: RThroughput
  [4]: MayLoad
  [5]: MayStore
  [6]: HasSideEffects (U)
  [7]: Encoding Size

  [1]    [2]    [3]    [4]    [5]    [6]    [7]    Encodings:     Instructions:
   1      2     1.00                         4     c5 f0 59 d0    vmulps   %xmm0, %xmm1, %xmm2
   1      4     1.00                         4     c5 eb 7c da    vhaddps  %xmm2, %xmm2, %xmm3
   1      4     1.00                         4     c5 e3 7c e3    vhaddps  %xmm3, %xmm3, %xmm4

The `Encoding Size` column shows the size in bytes of instructions. The
`Encodings` column shows the actual instruction encodings (byte sequences in
hex).
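The encoding columns shown above can be requested with a command along these
lines, mirroring the earlier dot-product invocations with the extra
``-show-encoding`` flag:

.. code-block:: bash

  $ llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -show-encoding dot-product.s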
The third section is the *Resource pressure view*. This view reports
the average number of resource cycles consumed every iteration by instructions
for every processor resource unit available on the target. Information is
structured in two tables. The first table reports the number of resource cycles
spent on average every iteration. The second table correlates the resource
cycles to the machine instruction in the sequence. For example, in every
iteration, instruction vmulps always executes on resource unit [6]
(JFPU1 - floating point pipeline #1), consuming an average of 1 resource cycle
per iteration. Note that on AMD Jaguar, vector floating-point multiply can
only be issued to pipeline JFPU1, while horizontal floating-point additions can
only be issued to pipeline JFPU0.

The resource pressure view helps with identifying bottlenecks caused by high
usage of specific hardware resources. Situations with resource pressure mainly
concentrated on a few resources should, in general, be avoided. Ideally,
pressure should be uniformly distributed between multiple resources.

Timeline View
^^^^^^^^^^^^^
The timeline view produces a detailed report of each instruction's state
transitions through an instruction pipeline. This view is enabled by the
command line option ``-timeline``. As instructions transition through the
various stages of the pipeline, their states are depicted in the view report.
These states are represented by the following characters:

* D : Instruction dispatched.
* e : Instruction executing.
* E : Instruction executed.
* R : Instruction retired.
* = : Instruction already dispatched, waiting to be executed.
* \- : Instruction executed, waiting to be retired.

Below is the timeline view for a subset of the dot-product example located in
``test/tools/llvm-mca/X86/BtVer2/dot-product.s`` and processed by
:program:`llvm-mca` using the following command:

.. code-block:: bash

  $ llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -iterations=3 -timeline dot-product.s

.. code-block:: none

  Timeline view:
                      012345
  Index     0123456789

  [0,0]     DeeER.    .    .   vmulps   %xmm0, %xmm1, %xmm2
  [0,1]     D==eeeER  .    .   vhaddps  %xmm2, %xmm2, %xmm3
  [0,2]     .D====eeeER    .   vhaddps  %xmm3, %xmm3, %xmm4
  [1,0]     .DeeE-----R    .   vmulps   %xmm0, %xmm1, %xmm2
  [1,1]     . D=eeeE---R   .   vhaddps  %xmm2, %xmm2, %xmm3
  [1,2]     . D====eeeER   .   vhaddps  %xmm3, %xmm3, %xmm4
  [2,0]     .  DeeE-----R  .   vmulps   %xmm0, %xmm1, %xmm2
  [2,1]     .  D====eeeER  .   vhaddps  %xmm2, %xmm2, %xmm3
  [2,2]     .   D======eeeER   vhaddps  %xmm3, %xmm3, %xmm4


  Average Wait times (based on the timeline view):
  [0]: Executions
  [1]: Average time spent waiting in a scheduler's queue
  [2]: Average time spent waiting in a scheduler's queue while ready
  [3]: Average time elapsed from WB until retire stage

        [0]    [1]    [2]    [3]
  0.     3     1.0    1.0    3.3    vmulps   %xmm0, %xmm1, %xmm2
  1.     3     3.3    0.7    1.0    vhaddps  %xmm2, %xmm2, %xmm3
  2.     3     5.7    0.0    0.0    vhaddps  %xmm3, %xmm3, %xmm4
         3     3.3    0.5    1.4    <total>

The timeline view is interesting because it shows instruction state changes
during execution. It also gives an idea of how the tool processes instructions
executed on the target, and how their timing information might be calculated.
The timeline view is structured in two tables. The first table shows
instructions changing state over time (measured in cycles); the second table
(named *Average Wait times*) reports useful timing statistics, which should
help diagnose performance bottlenecks caused by long data dependencies and
sub-optimal usage of hardware resources.

An instruction in the timeline view is identified by a pair of indices, where
the first index identifies an iteration, and the second index is the
instruction index (i.e., where it appears in the code sequence). Since this
example was generated using 3 iterations (``-iterations=3``), the iteration
indices range from 0 to 2 inclusive.

Excluding the first and last column, the remaining columns are in cycles.
Cycles are numbered sequentially starting from 0.

From the example output above, we know the following:

* Instruction [1,0] was dispatched at cycle 1.
* Instruction [1,0] started executing at cycle 2.
* Instruction [1,0] reached the write back stage at cycle 4.
* Instruction [1,0] was retired at cycle 10.

Instruction [1,0] (i.e., vmulps from iteration #1) does not have to wait in the
scheduler's queue for the operands to become available. By the time vmulps is
dispatched, operands are already available, and pipeline JFPU1 is ready to
serve another instruction. So the instruction can be immediately issued on the
JFPU1 pipeline. That is demonstrated by the fact that the instruction only
spent 1cy in the scheduler's queue.

There is a gap of 5 cycles between the write-back stage and the retire event.
That is because instructions must retire in program order, so [1,0] has to wait
for [0,2] to be retired first (i.e., it has to wait until cycle 10).

In the example, all instructions are in a RAW (Read After Write) dependency
chain. Register %xmm2 written by vmulps is immediately used by the first
vhaddps, and register %xmm3 written by the first vhaddps is used by the second
vhaddps. Long data dependencies negatively impact the ILP (Instruction Level
Parallelism).

In the dot-product example, there are anti-dependencies introduced by
instructions from different iterations. However, those dependencies can be
removed at the register renaming stage (at the cost of allocating register
aliases, and therefore consuming physical registers).

Table *Average Wait times* helps diagnose performance issues that are caused by
the presence of long latency instructions and potentially long data dependencies
which may limit the ILP. The last row, ``<total>``, shows a global average over
all instructions measured. Note that :program:`llvm-mca`, by default, assumes at
least 1cy between the dispatch event and the issue event.

When the performance is limited by data dependencies and/or long latency
instructions, the number of cycles spent while in the *ready* state is expected
to be very small when compared with the total number of cycles spent in the
scheduler's queue. The difference between the two counters is a good indicator
of how large an impact data dependencies had on the execution of the
instructions. When performance is mostly limited by the lack of hardware
resources, the delta between the two counters is small. However, the number of
cycles spent in the queue tends to be larger (i.e., more than 1-3cy),
especially when compared to other low latency instructions.

Bottleneck Analysis
^^^^^^^^^^^^^^^^^^^
The ``-bottleneck-analysis`` command line option enables the analysis of
performance bottlenecks.

This analysis is potentially expensive. It attempts to correlate increases in
backend pressure (caused by pipeline resource pressure and data dependencies)
to dynamic dispatch stalls.

Below is an example of ``-bottleneck-analysis`` output generated by
:program:`llvm-mca` for 500 iterations of the dot-product example on btver2.
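The report can be reproduced with a command along these lines, mirroring the
earlier dot-product invocations:

.. code-block:: bash

  $ llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -iterations=500 -bottleneck-analysis dot-product.s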
.. code-block:: none

  Cycles with backend pressure increase [ 48.07% ]
  Throughput Bottlenecks:
    Resource Pressure       [ 47.77% ]
    - JFPA  [ 47.77% ]
    - JFPU0  [ 47.77% ]
    Data Dependencies:      [ 0.30% ]
    - Register Dependencies [ 0.30% ]
    - Memory Dependencies   [ 0.00% ]

  Critical sequence based on the simulation:

                Instruction                          Dependency Information
   +----< 2.    vhaddps  %xmm3, %xmm3, %xmm4
   |
   |    < loop carried >
   |
   |      0.    vmulps   %xmm0, %xmm1, %xmm2
   +----> 1.    vhaddps  %xmm2, %xmm2, %xmm3    ## RESOURCE interference:  JFPA [ probability: 74% ]
   +----> 2.    vhaddps  %xmm3, %xmm3, %xmm4    ## REGISTER dependency:  %xmm3
   |
   |    < loop carried >
   |
   +----> 1.    vhaddps  %xmm2, %xmm2, %xmm3    ## RESOURCE interference:  JFPA [ probability: 74% ]

According to the analysis, throughput is limited by resource pressure and not by
data dependencies. The analysis observed increases in backend pressure during
48.07% of the simulated run. Almost all those pressure increase events were
caused by contention on processor resources JFPA/JFPU0.

The `critical sequence` is the most expensive sequence of instructions according
to the simulation. It is annotated to provide extra information about critical
register dependencies and resource interferences between instructions.

Instructions from the critical sequence are expected to significantly impact
performance. By construction, the accuracy of this analysis is strongly
dependent on the simulation and (as always) on the quality of the processor
model in LLVM.


Extra Statistics to Further Diagnose Performance Issues
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``-all-stats`` command line option enables extra statistics and performance
counters for the dispatch logic, the reorder buffer, the retire control unit,
and the register file.

Below is an example of ``-all-stats`` output generated by :program:`llvm-mca`
for 300 iterations of the dot-product example discussed in the previous
sections.
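Again, an invocation along these lines produces the report below:

.. code-block:: bash

  $ llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -iterations=300 -all-stats dot-product.s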
.. code-block:: none

  Dynamic Dispatch Stall Cycles:
  RAT     - Register unavailable:                      0
  RCU     - Retire tokens unavailable:                 0
  SCHEDQ  - Scheduler full:                            272  (44.6%)
  LQ      - Load queue full:                           0
  SQ      - Store queue full:                          0
  GROUP   - Static restrictions on the dispatch group: 0


  Dispatch Logic - number of cycles where we saw N micro opcodes dispatched:
  [# dispatched], [# cycles]
   0,              24  (3.9%)
   1,              272  (44.6%)
   2,              314  (51.5%)


  Schedulers - number of cycles where we saw N micro opcodes issued:
  [# issued], [# cycles]
   0,          7  (1.1%)
   1,          306  (50.2%)
   2,          297  (48.7%)

  Scheduler's queue usage:
  [1] Resource name.
  [2] Average number of used buffer entries.
  [3] Maximum number of used buffer entries.
  [4] Total number of buffer entries.

   [1]            [2]        [3]        [4]
  JALU01           0          0          20
  JFPU01          17         18          18
  JLSAGU           0          0          12


  Retire Control Unit - number of cycles where we saw N instructions retired:
  [# retired], [# cycles]
   0,           109  (17.9%)
   1,           102  (16.7%)
   2,           399  (65.4%)

  Total ROB Entries:                64
  Max Used ROB Entries:             35  ( 54.7% )
  Average Used ROB Entries per cy:  32  ( 50.0% )


  Register File statistics:
  Total number of mappings created:    900
  Max number of mappings used:         35

  *  Register File #1 -- JFpuPRF:
     Number of physical registers:     72
     Total number of mappings created: 900
     Max number of mappings used:      35

  *  Register File #2 -- JIntegerPRF:
     Number of physical registers:     64
     Total number of mappings created: 0
     Max number of mappings used:      0

If we look at the *Dynamic Dispatch Stall Cycles* table, we see the counter for
SCHEDQ reports 272 cycles. This counter is incremented every time the dispatch
logic is unable to dispatch a full group because the scheduler's queue is full.

Looking at the *Dispatch Logic* table, we see that the pipeline was only able to
dispatch two micro opcodes 51.5% of the time. The dispatch group was limited to
one micro opcode 44.6% of the cycles, which corresponds to 272 cycles. The
dispatch statistics are displayed by either using the command option
``-all-stats`` or ``-dispatch-stats``.

The next table, *Schedulers*, presents a histogram displaying a count,
representing the number of micro opcodes issued on some number of cycles. In
this case, of the 610 simulated cycles, single opcodes were issued 306 times
(50.2%) and there were 7 cycles where no opcodes were issued.

The *Scheduler's queue usage* table shows the average and maximum number of
buffer entries (i.e., scheduler queue entries) used at runtime. Resource JFPU01
reached its maximum (18 of 18 queue entries). Note that AMD Jaguar implements
three schedulers:

* JALU01 - A scheduler for ALU instructions.
* JFPU01 - A scheduler for floating point operations.
* JLSAGU - A scheduler for address generation.

The dot-product is a kernel of three floating point instructions (a vector
multiply followed by two horizontal adds). That explains why only the floating
point scheduler appears to be used.

A full scheduler queue is either caused by data dependency chains or by a
sub-optimal usage of hardware resources. Sometimes, resource pressure can be
mitigated by rewriting the kernel using different instructions that consume
different scheduler resources. Schedulers with a small queue are less resilient
to bottlenecks caused by the presence of long data dependencies. The scheduler
statistics are displayed by using the command option ``-all-stats`` or
``-scheduler-stats``.

The next table, *Retire Control Unit*, presents a histogram displaying a count,
representing the number of instructions retired on some number of cycles. In
this case, of the 610 simulated cycles, two instructions were retired during the
same cycle 399 times (65.4%) and there were 109 cycles where no instructions
were retired. The retire statistics are displayed by using the command option
``-all-stats`` or ``-retire-stats``.
The last table presented is *Register File statistics*. Each physical register
file (PRF) used by the pipeline is presented in this table. In the case of AMD
Jaguar, there are two register files, one for floating-point registers (JFpuPRF)
and one for integer registers (JIntegerPRF). The table shows that of the 900
instructions processed, there were 900 mappings created. Since this dot-product
example utilized only floating point registers, the JFpuPRF was responsible for
creating the 900 mappings. However, we see that the pipeline only used a
maximum of 35 of 72 available register slots at any given time. We can conclude
that the floating point PRF was the only register file used for the example, and
that it was never resource constrained. The register file statistics are
displayed by using the command option ``-all-stats`` or
``-register-file-stats``.

In this example, we can conclude that the IPC is mostly limited by data
dependencies, and not by resource pressure.

Instruction Flow
^^^^^^^^^^^^^^^^
This section describes the instruction flow through the default pipeline of
:program:`llvm-mca`, as well as the functional units involved in the process.

The default pipeline implements the following sequence of stages used to
process instructions.

* Dispatch (Instruction is dispatched to the schedulers).
* Issue (Instruction is issued to the processor pipelines).
* Write Back (Instruction is executed, and results are written back).
* Retire (Instruction is retired; writes are architecturally committed).

The default pipeline only models the out-of-order portion of a processor.
Therefore, the instruction fetch and decode stages are not modeled. Performance
bottlenecks in the frontend are not diagnosed. :program:`llvm-mca` assumes that
instructions have all been decoded and placed into a queue before the simulation
starts. Also, :program:`llvm-mca` does not model branch prediction.

Instruction Dispatch
""""""""""""""""""""
During the dispatch stage, instructions are picked in program order from a
queue of already decoded instructions, and dispatched in groups to the
simulated hardware schedulers.

The size of a dispatch group depends on the availability of the simulated
hardware resources. The processor dispatch width defaults to the value
of the ``IssueWidth`` in LLVM's scheduling model.

An instruction can be dispatched if:

* The size of the dispatch group is smaller than the processor's dispatch
  width.
* There are enough entries in the reorder buffer.
* There are enough physical registers to do register renaming.
* The schedulers are not full.

Scheduling models can optionally specify which register files are available on
the processor. :program:`llvm-mca` uses that information to initialize register
file descriptors. Users can limit the number of physical registers that are
globally available for register renaming by using the command option
``-register-file-size``. A value of zero for this option means *unbounded*. By
knowing how many registers are available for renaming, the tool can predict
dispatch stalls caused by the lack of physical registers.
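For example, the following invocation (illustrative; the cap of 32 physical
registers is arbitrary) globally restricts the registers available for
renaming. If the cap is too small for the code under analysis, the resulting
stalls show up in the RAT counter of the *Dynamic Dispatch Stall Cycles* table:

.. code-block:: bash

  $ llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -register-file-size=32 dot-product.s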
The number of reorder buffer entries consumed by an instruction depends on the
number of micro-opcodes specified for that instruction by the target scheduling
model. The reorder buffer is responsible for tracking the progress of
instructions that are "in-flight", and retiring them in program order. The
number of entries in the reorder buffer defaults to the value specified by field
`MicroOpBufferSize` in the target scheduling model.

Instructions that are dispatched to the schedulers consume scheduler buffer
entries. :program:`llvm-mca` queries the scheduling model to determine the set
of buffered resources consumed by an instruction. Buffered resources are
treated like scheduler resources.

Instruction Issue
"""""""""""""""""
Each processor scheduler implements a buffer of instructions. An instruction
has to wait in the scheduler's buffer until input register operands become
available. Only at that point does the instruction become eligible for
execution and may be issued (potentially out-of-order) for execution.
Instruction latencies are computed by :program:`llvm-mca` with the help of the
scheduling model.

:program:`llvm-mca`'s scheduler is designed to simulate multiple processor
schedulers. The scheduler is responsible for tracking data dependencies, and
dynamically selecting which processor resources are consumed by instructions.
It delegates the management of processor resource units and resource groups to
a resource manager. The resource manager is responsible for selecting resource
units that are consumed by instructions. For example, if an instruction
consumes 1cy of a resource group, the resource manager selects one of the
available units from the group; by default, the resource manager uses a
round-robin selector to guarantee that resource usage is uniformly distributed
between all units of a group.

:program:`llvm-mca`'s scheduler internally groups instructions into three sets:

* WaitSet: a set of instructions whose operands are not ready.
* ReadySet: a set of instructions ready to execute.
* IssuedSet: a set of instructions executing.

Depending on operand availability, instructions that are dispatched to the
scheduler are either placed into the WaitSet or into the ReadySet.

Every cycle, the scheduler checks if instructions can be moved from the WaitSet
to the ReadySet, and if instructions from the ReadySet can be issued to the
underlying pipelines. The algorithm prioritizes older instructions over younger
instructions.

Write-Back and Retire Stage
"""""""""""""""""""""""""""
Issued instructions are moved from the ReadySet to the IssuedSet. There,
instructions wait until they reach the write-back stage. At that point, they
get removed from the queue and the retire control unit is notified.

When instructions are executed, the retire control unit flags the instruction
as "ready to retire."

Instructions are retired in program order. The register file is notified of the
retirement so that it can free the physical registers that were allocated for
the instruction during the register renaming stage.

Load/Store Unit and Memory Consistency Model
""""""""""""""""""""""""""""""""""""""""""""
To simulate an out-of-order execution of memory operations, :program:`llvm-mca`
uses a load/store unit (LSUnit) to model the speculative execution of loads and
stores.

Each load (or store) consumes an entry in the load (or store) queue. Users can
specify flags ``-lqueue`` and ``-squeue`` to limit the number of entries in the
load and store queues respectively. The queues are unbounded by default.
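For instance, small 8-entry load and store queues (hypothetical sizes, chosen
only for illustration) can be simulated as follows; when a queue fills up, the
stalls appear in the LQ and SQ counters of the dispatch statistics:

.. code-block:: bash

  $ llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -lqueue=8 -squeue=8 -all-stats foo.s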
The LSUnit implements a relaxed consistency model for memory loads and stores.
The rules are:

1. A younger load is allowed to pass an older load only if there are no
   intervening stores or barriers between the two loads.
2. A younger load is allowed to pass an older store provided that the load does
   not alias with the store.
3. A younger store is not allowed to pass an older store.
4. A younger store is not allowed to pass an older load.

By default, the LSUnit optimistically assumes that loads do not alias
(`-noalias=true`) store operations. Under this assumption, younger loads are
always allowed to pass older stores. Essentially, the LSUnit does not attempt
to run any alias analysis to predict when loads and stores do not alias with
each other.

Note that, in the case of write-combining memory, rule 3 could be relaxed to
allow reordering of non-aliasing store operations. That being said, at the
moment, there is no way to further relax the memory model (``-noalias`` is the
only option). Essentially, there is no option to specify a different memory
type (e.g., write-back, write-combining, write-through, etc.) and consequently
to weaken, or strengthen, the memory model.

Other limitations are:

* The LSUnit does not know when store-to-load forwarding may occur.
* The LSUnit does not know anything about cache hierarchy and memory types.
* The LSUnit does not know how to identify serializing operations and memory
  fences.

The LSUnit does not attempt to predict if a load or store hits or misses the L1
cache. It only knows if an instruction "MayLoad" and/or "MayStore." For
loads, the scheduling model provides an "optimistic" load-to-use latency (which
usually matches the load-to-use latency for when there is a hit in the L1D).

:program:`llvm-mca` does not know about serializing operations or memory-barrier
like instructions. The LSUnit conservatively assumes that an instruction which
has both "MayLoad" and unmodeled side effects behaves like a "soft"
load-barrier. That means it serializes loads without forcing a flush of the
load queue. Similarly, instructions that "MayStore" and have unmodeled side
effects are treated like store barriers. A full memory barrier is a "MayLoad"
and "MayStore" instruction with unmodeled side effects. This is inaccurate, but
it is the best that we can do at the moment with the current information
available in LLVM.

A load/store barrier consumes one entry of the load/store queue. A load/store
barrier enforces ordering of loads/stores. A younger load cannot pass a load
barrier. Also, a younger store cannot pass a store barrier. A younger load
has to wait for the memory/load barrier to execute. A load/store barrier is
"executed" when it becomes the oldest entry in the load/store queue(s). That
also means, by construction, all of the older loads/stores have been executed.

In conclusion, the full set of load/store consistency rules is:

#. A store may not pass a previous store.
#. A store may not pass a previous load (regardless of ``-noalias``).
#. A store has to wait until an older store barrier is fully executed.
#. A load may pass a previous load.
#. A load may not pass a previous store unless ``-noalias`` is set.
#. A load has to wait until an older load barrier is fully executed.
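As an illustration, the conservative aliasing assumption can be simulated by
turning ``-noalias`` off (the input file name below is hypothetical):

.. code-block:: bash

  # Assume loads may alias stores: per rule 5 above, younger loads must now
  # wait for older stores to execute before passing them.
  $ llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -noalias=false foo.s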