=================================
LLVM Testing Infrastructure Guide
=================================

.. contents::
   :local:

.. toctree::
   :hidden:

   TestSuiteGuide
   TestSuiteMakefileGuide

Overview
========

This document is the reference manual for the LLVM testing
infrastructure. It documents the structure of the LLVM testing
infrastructure, the tools needed to use it, and how to add and run
tests.

Requirements
============

In order to use the LLVM testing infrastructure, you will need all of the
software required to build LLVM, as well as `Python <http://python.org>`_ 2.7 or
later.

LLVM Testing Infrastructure Organization
========================================

The LLVM testing infrastructure contains three major categories of tests:
unit tests, regression tests and whole programs. The unit tests and regression
tests are contained inside the LLVM repository itself under ``llvm/unittests``
and ``llvm/test`` respectively and are expected to always pass -- they should be
run before every commit.

The whole-program tests are referred to as the "LLVM test suite" (or
"test-suite") and are in the ``test-suite`` module in Subversion. For
historical reasons, these tests are also referred to as the "nightly
tests" in places, which is less ambiguous than "test-suite" and remains
in use although we run them much more often than nightly.

Unit tests
----------

Unit tests are written using `Google Test <https://github.com/google/googletest/blob/master/googletest/docs/primer.md>`_
and `Google Mock <https://github.com/google/googletest/blob/master/googlemock/docs/ForDummies.md>`_
and are located in the ``llvm/unittests`` directory.

Regression tests
----------------

The regression tests are small pieces of code that test a specific
feature of LLVM or trigger a specific bug in LLVM. The language they are
written in depends on the part of LLVM being tested. These tests are driven by
the :doc:`Lit <CommandGuide/lit>` testing tool (which is part of LLVM), and
are located in the ``llvm/test`` directory.

Typically when a bug is found in LLVM, a regression test containing just
enough code to reproduce the problem should be written and placed
somewhere underneath this directory. For example, it can be a small
piece of LLVM IR distilled from an actual application or benchmark.

``test-suite``
--------------

The test suite contains whole programs, which are pieces of code that
can be compiled and linked into a stand-alone program that can be
executed. These programs are generally written in high level languages
such as C or C++.

These programs are compiled using a user-specified compiler and set of
flags, and then executed to capture the program output and timing
information. The output of these programs is compared to a reference
output to ensure that the program is being compiled correctly.

In addition to compiling and executing programs, whole program tests
serve as a way of benchmarking LLVM performance, both in terms of the
efficiency of the programs generated as well as the speed with which
LLVM compiles, optimizes, and generates code.

The test-suite is located in the ``test-suite`` Subversion module.

See the :doc:`TestSuiteGuide` for details.

Debugging Information tests
---------------------------

The test suite contains tests to check the quality of debugging information.
The tests are written in C-based languages or in LLVM assembly language.

These tests are compiled and run under a debugger. The debugger output
is checked to validate the debugging information. See README.txt in the
test suite for more information. This test suite is located in the
``debuginfo-tests`` Subversion module.

Quick start
===========

The tests are located in two separate Subversion modules. The unit and
regression tests are in the main "llvm" module under the directories
``llvm/unittests`` and ``llvm/test`` (so you get these tests for free with the
main LLVM tree). Use ``make check-all`` to run the unit and regression tests
after building LLVM.

The ``test-suite`` module contains more comprehensive tests including whole C
and C++ programs. See the :doc:`TestSuiteGuide` for details.

Unit and Regression tests
-------------------------

To run all of the LLVM unit tests, use the ``check-llvm-unit`` target:

.. code-block:: bash

  % make check-llvm-unit

To run all of the LLVM regression tests, use the ``check-llvm`` target:

.. code-block:: bash

  % make check-llvm

In order to get reasonable testing performance, build LLVM and subprojects
in release mode, i.e.

.. code-block:: bash

  % cmake -DCMAKE_BUILD_TYPE="Release" -DLLVM_ENABLE_ASSERTIONS=On

If you have `Clang <http://clang.llvm.org/>`_ checked out and built, you
can run the LLVM and Clang tests simultaneously using:

.. code-block:: bash

  % make check-all

To run the tests with Valgrind (Memcheck by default), use the ``LIT_ARGS`` make
variable to pass the required options to lit. For example, you can use:

.. code-block:: bash

  % make check LIT_ARGS="-v --vg --vg-leak"

to enable testing with Valgrind and with leak checking enabled.

To run individual tests or subsets of tests, you can use the ``llvm-lit``
script which is built as part of LLVM. For example, to run the
``Integer/BitPacked.ll`` test by itself you can run:

.. code-block:: bash

  % llvm-lit ~/llvm/test/Integer/BitPacked.ll

or to run all of the ARM CodeGen tests:

.. code-block:: bash

  % llvm-lit ~/llvm/test/CodeGen/ARM

For more information on using the :program:`lit` tool, see ``llvm-lit --help``
or the :doc:`lit man page <CommandGuide/lit>`.

Debugging Information tests
---------------------------

To run the debugging information tests, simply add the ``debuginfo-tests``
project to your ``LLVM_ENABLE_PROJECTS`` define on the cmake
command-line.

Regression test structure
=========================

The LLVM regression tests are driven by :program:`lit` and are located in the
``llvm/test`` directory.

This directory contains a large array of small tests that exercise
various features of LLVM and ensure that regressions do not occur.
The directory is broken into several sub-directories, each focused on a
particular area of LLVM.

Writing new regression tests
----------------------------

The regression test structure is very simple, but does require some
information to be set. This information is gathered via ``configure``
and is written to a file, ``test/lit.site.cfg``, in the build directory.
The ``llvm/test`` Makefile does this work for you.

In order for the regression tests to work, each directory of tests must
have a ``lit.local.cfg`` file. :program:`lit` looks for this file to determine
how to run the tests. This file is just Python code and thus is very
flexible, but we've standardized it for the LLVM regression tests. If
you're adding a directory of tests, just copy ``lit.local.cfg`` from
another directory to get running. The standard ``lit.local.cfg`` simply
specifies which files to look in for tests. Any directory that contains
only directories does not need the ``lit.local.cfg`` file. Read the :doc:`Lit
documentation <CommandGuide/lit>` for more information.
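
For illustration, a minimal ``lit.local.cfg`` might contain nothing more than
the list of file suffixes to treat as tests (a sketch only; the exact suffixes
depend on the directory):

.. code-block:: python

  # Hypothetical lit.local.cfg: treat .ll and .test files in this
  # directory as test cases.
  config.suffixes = ['.ll', '.test']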

Each test file must contain lines starting with "RUN:" that tell :program:`lit`
how to run it. If there are no RUN lines, :program:`lit` will issue an error
while running a test.

RUN lines are specified in the comments of the test program using the
keyword ``RUN`` followed by a colon, and lastly the command (pipeline)
to execute. Together, these lines form the "script" that :program:`lit`
executes to run the test case. The syntax of the RUN lines is similar to a
shell's syntax for pipelines, including I/O redirection and variable
substitution. However, even though these lines may *look* like a shell
script, they are not. RUN lines are interpreted by :program:`lit`.
Consequently, the syntax differs from shell in a few ways. You can specify
as many RUN lines as needed.

:program:`lit` performs substitution on each RUN line to replace LLVM tool names
with the full paths to the executable built for each tool (in
``$(LLVM_OBJ_ROOT)/$(BuildMode)/bin``). This ensures that :program:`lit` does
not invoke any stray LLVM tools in the user's path during testing.

Each RUN line is executed on its own, distinct from other lines unless
its last character is ``\``. This continuation character causes the RUN
line to be concatenated with the next one. In this way you can build up
long pipelines of commands without making huge line lengths. The lines
ending in ``\`` are concatenated until a RUN line that doesn't end in
``\`` is found. This concatenated set of RUN lines then constitutes one
execution. :program:`lit` will substitute variables and arrange for the pipeline
to be executed. If any process in the pipeline fails, the entire line (and
test case) fails too.

Below is an example of legal RUN lines in a ``.ll`` file:

.. code-block:: llvm

  ; RUN: llvm-as < %s | llvm-dis > %t1
  ; RUN: llvm-dis < %s.bc-13 > %t2
  ; RUN: diff %t1 %t2

As with a Unix shell, the RUN lines permit pipelines and I/O
redirection to be used.

There are some quoting rules that you must pay attention to when writing
your RUN lines. In general nothing needs to be quoted. :program:`lit` won't
strip off any quote characters, so they will get passed to the invoked program.
To avoid this, use curly braces to tell :program:`lit` that it should treat
everything enclosed as one value.

In general, you should strive to keep your RUN lines as simple as possible,
using them only to run tools that generate textual output you can then examine.
The recommended way to examine output to figure out if the test passes is using
the :doc:`FileCheck tool <CommandGuide/FileCheck>`. *[The usage of grep in RUN
lines is deprecated - please do not send or commit patches that use it.]*
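
For illustration, here is a complete test written in that style (a
hypothetical example, not an existing test): the IR is round-tripped through
``llvm-as`` and ``llvm-dis``, and FileCheck verifies the result against the
``CHECK`` lines embedded in the same file.

.. code-block:: llvm

  ; Hypothetical test: round-trip the IR and verify it with FileCheck
  ; instead of grep.
  ; RUN: llvm-as < %s | llvm-dis | FileCheck %s

  ; CHECK-LABEL: @simple
  ; CHECK: ret i32 0
  define i32 @simple() {
    ret i32 0
  }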

Put related tests into a single file rather than having a separate file per
test. Check if there are files already covering your feature and consider
adding your code there instead of creating a new file.

Extra files
-----------

If your test requires extra files besides the file containing the ``RUN:``
lines, the idiomatic place to put them is in a subdirectory ``Inputs``.
You can then refer to the extra files as ``%S/Inputs/foo.bar``.

For example, consider ``test/Linker/ident.ll``. The directory structure is
as follows::

  test/
    Linker/
      ident.ll
      Inputs/
        ident.a.ll
        ident.b.ll

For convenience, these are the contents:

.. code-block:: llvm

  ;;;;; ident.ll:

  ; RUN: llvm-link %S/Inputs/ident.a.ll %S/Inputs/ident.b.ll -S | FileCheck %s

  ; Verify that multiple input llvm.ident metadata are linked together.

  ; CHECK-DAG: !llvm.ident = !{!0, !1, !2}
  ; CHECK-DAG: "Compiler V1"
  ; CHECK-DAG: "Compiler V2"
  ; CHECK-DAG: "Compiler V3"

  ;;;;; Inputs/ident.a.ll:

  !llvm.ident = !{!0, !1}
  !0 = metadata !{metadata !"Compiler V1"}
  !1 = metadata !{metadata !"Compiler V2"}

  ;;;;; Inputs/ident.b.ll:

  !llvm.ident = !{!0}
  !0 = metadata !{metadata !"Compiler V3"}

For symmetry reasons, ``ident.ll`` is just a dummy file that doesn't
actually participate in the test besides holding the ``RUN:`` lines.

.. note::

  Some existing tests use ``RUN: true`` in extra files instead of just
  putting the extra files in an ``Inputs/`` directory. This pattern is
  deprecated.

Fragile tests
-------------

It is easy to write a fragile test that would fail spuriously if the tool being
tested outputs a full path to the input file. For example, :program:`opt` by
default outputs a ``ModuleID``:

.. code-block:: console

  $ cat example.ll
  define i32 @main() nounwind {
      ret i32 0
  }

  $ opt -S /path/to/example.ll
  ; ModuleID = '/path/to/example.ll'

  define i32 @main() nounwind {
      ret i32 0
  }

``ModuleID`` can unexpectedly match against ``CHECK`` lines. For example:

.. code-block:: llvm

  ; RUN: opt -S %s | FileCheck %s

  define i32 @main() nounwind {
      ; CHECK-NOT: load
      ret i32 0
  }

This test will fail if placed into a ``download`` directory.

To make your tests robust, always use ``opt ... < %s`` in the RUN line.
:program:`opt` does not output a ``ModuleID`` when input comes from stdin.
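
For illustration, the fragile example above could be rewritten as follows (a
sketch of the same hypothetical test). Because the input now arrives on stdin,
no ``ModuleID`` comment is emitted for the ``CHECK-NOT`` line to trip over:

.. code-block:: llvm

  ; RUN: opt -S < %s | FileCheck %s

  define i32 @main() nounwind {
      ; CHECK-NOT: load
      ret i32 0
  }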

Platform-Specific Tests
-----------------------

Whenever you add tests that require knowledge of a specific platform,
whether related to code generation, specific output or back-end features,
you must make sure to isolate those features, so that buildbots that
run on different architectures (and don't even compile all back-ends)
don't fail.

The first problem is checking for target-specific output, such as sizes
of structures, paths and architecture names. For example:

* Tests containing Windows paths will fail on Linux and vice-versa.
* Tests that check for ``x86_64`` somewhere in the text will fail anywhere else.
* Tests where the debug information calculates the size of types and structures.

Also, if the test relies on any behaviour that is coded in any back-end, it must
go in its own directory. So, for instance, code generator tests for ARM go
into ``test/CodeGen/ARM`` and so on. Those directories contain a special
``lit`` configuration file that ensures all tests in that directory will
only run if a specific back-end is compiled and available.

For instance, in ``test/CodeGen/ARM``, the ``lit.local.cfg`` is:

.. code-block:: python

  config.suffixes = ['.ll', '.c', '.cpp', '.test']
  if 'ARM' not in config.root.targets:
      config.unsupported = True

Other platform-specific tests are those that depend on a specific feature
of a specific sub-architecture, for example only Intel chips that support ``AVX2``.

For instance, ``test/CodeGen/X86/psubus.ll`` tests three sub-architecture
variants:

.. code-block:: llvm

  ; RUN: llc -mcpu=core2 < %s | FileCheck %s -check-prefix=SSE2
  ; RUN: llc -mcpu=corei7-avx < %s | FileCheck %s -check-prefix=AVX1
  ; RUN: llc -mcpu=core-avx2 < %s | FileCheck %s -check-prefix=AVX2

And the checks are different:

.. code-block:: llvm

  ; SSE2: @test1
  ; SSE2: psubusw LCPI0_0(%rip), %xmm0
  ; AVX1: @test1
  ; AVX1: vpsubusw LCPI0_0(%rip), %xmm0, %xmm0
  ; AVX2: @test1
  ; AVX2: vpsubusw LCPI0_0(%rip), %xmm0, %xmm0

So, if you're testing for a behaviour that you know is platform-specific or
depends on special features of sub-architectures, you must add the specific
triple, test with the specific FileCheck prefix, and put the test into the
specific directory that will filter out all other architectures.


Constraining test execution
---------------------------

Some tests can be run only in specific configurations, such as
with debug builds or on particular platforms. Use ``REQUIRES``
and ``UNSUPPORTED`` to control when the test is enabled.

Some tests are expected to fail. For example, there may be a known bug
that the test detects. Use ``XFAIL`` to mark a test as an expected failure.
An ``XFAIL`` test will be successful if its execution fails, and
will be a failure if its execution succeeds.

.. code-block:: llvm

  ; This test will only be enabled in a build with asserts.
  ; REQUIRES: asserts
  ; This test is disabled on Linux.
  ; UNSUPPORTED: -linux-
  ; This test is expected to fail on PowerPC.
  ; XFAIL: powerpc

``REQUIRES``, ``UNSUPPORTED`` and ``XFAIL`` all accept a comma-separated
list of boolean expressions. The values in each expression may be:

- Features added to ``config.available_features`` by configuration files
  such as ``lit.cfg`` (see the sketch at the end of this section).
- Substrings of the target triple (``UNSUPPORTED`` and ``XFAIL`` only).

| ``REQUIRES`` enables the test if all expressions are true.
| ``UNSUPPORTED`` disables the test if any expression is true.
| ``XFAIL`` expects the test to fail if any expression is true.

As a special case, ``XFAIL: *`` is expected to fail everywhere.

.. code-block:: llvm

  ; This test is disabled on Windows,
  ; and is disabled on Linux, except for Android Linux.
  ; UNSUPPORTED: windows, linux && !android
  ; This test is expected to fail on both PowerPC and ARM.
  ; XFAIL: powerpc || arm
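
The feature names tested by ``REQUIRES`` and ``UNSUPPORTED`` are plain strings
registered by the lit configuration. As a sketch (the ``my-checker`` feature
name is hypothetical), a configuration file can expose one like this:

.. code-block:: python

  # Hypothetical lit.cfg fragment: register a feature name that tests can
  # reference with "REQUIRES: my-checker" or "UNSUPPORTED: my-checker".
  config.available_features.add('my-checker')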

Substitutions
-------------

Besides replacing LLVM tool names, the following substitutions are performed in
RUN lines:

``%%``
   Replaced by a single ``%``. This allows escaping other substitutions.

``%s``
   File path to the test case's source. This is suitable for passing on the
   command line as the input to an LLVM tool.

   Example: ``/home/user/llvm/test/MC/ELF/foo_test.s``

``%S``
   Directory path to the test case's source.

   Example: ``/home/user/llvm/test/MC/ELF``

``%t``
   File path to a temporary file name that could be used for this test case.
   The file name won't conflict with other test cases. You can append to it
   if you need multiple temporaries. This is useful as the destination of
   some redirected output.

   Example: ``/home/user/llvm.build/test/MC/ELF/Output/foo_test.s.tmp``

``%T``
   Directory of ``%t``. Deprecated. Shouldn't be used, because it can be easily
   misused and cause race conditions between tests.

   Use ``rm -rf %t && mkdir %t`` instead if a temporary directory is necessary.

   Example: ``/home/user/llvm.build/test/MC/ELF/Output``

``%{pathsep}``
   Expands to the path separator, i.e. ``:`` (or ``;`` on Windows).

``%/s, %/S, %/t, %/T``
   Act like the corresponding substitution above but replace any ``\``
   character with a ``/``. This is useful to normalize path separators.

   Example: ``%s: C:\Desktop Files/foo_test.s.tmp``

   Example: ``%/s: C:/Desktop Files/foo_test.s.tmp``

``%:s, %:S, %:t, %:T``
   Act like the corresponding substitution above but remove colons at
   the beginning of Windows paths. This is useful to allow concatenation
   of absolute paths on Windows to produce a legal path.

   Example: ``%s: C:\Desktop Files\foo_test.s.tmp``

   Example: ``%:s: C\Desktop Files\foo_test.s.tmp``


**LLVM-specific substitutions:**

``%shlibext``
   The suffix for the host platform's shared library files. This includes the
   period as the first character.

   Example: ``.so`` (Linux), ``.dylib`` (macOS), ``.dll`` (Windows)

``%exeext``
   The suffix for the host platform's executable files. This includes the
   period as the first character.

   Example: ``.exe`` (Windows), empty on Linux.

``%(line)``, ``%(line+<number>)``, ``%(line-<number>)``
   The number of the line where this substitution is used, with an optional
   integer offset. This can be used in tests with multiple RUN lines, which
   reference the test file's line numbers.


**Clang-specific substitutions:**

``%clang``
   Invokes the Clang driver.

``%clang_cpp``
   Invokes the Clang driver for C++.

``%clang_cl``
   Invokes the CL-compatible Clang driver.

``%clangxx``
   Invokes the G++-compatible Clang driver.

``%clang_cc1``
   Invokes the Clang frontend.

``%itanium_abi_triple``, ``%ms_abi_triple``
   These substitutions can be used to get the current target triple adjusted to
   the desired ABI. For example, if the test suite is running with the
   ``i686-pc-win32`` target, ``%itanium_abi_triple`` will expand to
   ``i686-pc-mingw32``. This allows a test to run with a specific ABI without
   constraining it to a specific triple.

**FileCheck-specific substitutions:**

``%ProtectFileCheckOutput``
   This should precede a ``FileCheck`` call if and only if the call's textual
   output affects test results. It's usually easy to tell: just look for
   redirection or piping of the ``FileCheck`` call's stdout or stderr.

To add more substitutions, look at ``test/lit.cfg`` or ``lit.local.cfg``.
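
As a sketch of what such an addition looks like (the ``%myopt`` name and its
expansion are hypothetical), a configuration file registers a substitution as
a pattern/replacement pair:

.. code-block:: python

  # Hypothetical lit.cfg / lit.local.cfg fragment: make "%myopt" in RUN
  # lines expand to opt with a fixed set of default flags.
  config.substitutions.append(('%myopt', 'opt -S -verify-each'))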

Options
-------

The llvm lit configuration allows some things to be customized with user
options:

``llc``, ``opt``, ...
   Substitute the respective llvm tool name with a custom command line. This
   allows custom paths and default arguments to be specified for these tools.
   Example:

   % llvm-lit "-Dllc=llc -verify-machineinstrs"

``run_long_tests``
   Enable the execution of long-running tests.

``llvm_site_config``
   Load the specified lit configuration instead of the default one.


Other Features
--------------

To make writing RUN lines easier, there are several helper programs. These
helpers are in the PATH when running tests, so you can just call them by
name. For example:

``not``
   This program runs its arguments and then inverts the result code from it.
   Zero result codes become 1. Non-zero result codes become 0.

To make the output more useful, :program:`lit` will scan
the lines of the test case for ones that contain a pattern that matches
``PR[0-9]+``. This is the syntax for specifying a PR (Problem Report) number
that is related to the test case. The number after "PR" specifies the
LLVM Bugzilla number. When a PR number is specified, it will be used in
the pass/fail reporting. This is useful to quickly get some context when
a test fails.

Finally, any line that contains "END." will cause the special
interpretation of lines to terminate. This is generally done right after
the last RUN: line. This has two side effects:

(a) it prevents special interpretation of lines that are part of the test
    program, not the instructions to the test case, and

(b) it speeds things up for really big test cases by avoiding
    interpretation of the remainder of the file.
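
For illustration, a hypothetical test using ``END.`` might look like this;
:program:`lit` stops scanning for its directives at the ``END.`` line, so
nothing below it is interpreted as part of the test script:

.. code-block:: llvm

  ; RUN: llvm-as < %s | llvm-dis | FileCheck %s
  ; CHECK: @answer
  ; END.

  ; Everything below the END. line is ignored by lit's RUN-line scanner.
  @answer = global i32 42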