Testing
=======

.. contents::
   :local:

Test Suite Structure
--------------------

The LLDB test suite consists of three different kinds of tests:

* **Unit tests**: written in C++ using the googletest unit testing library.
* **Shell tests**: Integration tests that test the debugger through the command
  line. These tests interact with the debugger either through the command line
  driver or through ``lldb-test``, a tool that exposes the internal data
  structures in an easy-to-parse way for testing. Most people will know these
  as *lit tests* in LLVM, although lit is the test driver and ShellTest is the
  test format that uses ``RUN:`` lines. `FileCheck
  <https://llvm.org/docs/CommandGuide/FileCheck.html>`_ is used to verify
  the output.
* **API tests**: Integration tests that interact with the debugger through the
  SB API. These are written in Python and use LLDB's ``dotest.py`` testing
  framework on top of Python's `unittest2
  <https://docs.python.org/2/library/unittest.html>`_.

All three test suites use ``lit`` (the `LLVM Integrated Tester
<https://llvm.org/docs/CommandGuide/lit.html>`_) as the test driver. The test
suites can be run as a whole or separately.


Unit Tests
``````````

Unit tests are located under ``lldb/unittests``. If it's possible to test
something in isolation or as a single unit, you should make it a unit test.

Often you need instances of the core objects, such as a debugger, target or
process, in order to test something meaningful. We already have a handful of
tests with the necessary boilerplate, but this is something we could abstract
away and make more user friendly.

Shell Tests
```````````

Shell tests are located under ``lldb/test/Shell``. These tests are generally
built around checking the output of ``lldb`` (the command line driver) or
``lldb-test`` using ``FileCheck``.
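
To make this concrete, here is a sketch of what such a test can look like: a
single C file whose ``RUN:`` lines build the inferior and drive the debugger,
with ``FileCheck`` verifying the output. The substitutions (``%clang_host``,
``%lldb``, ``%t``) are provided by the lldb lit configuration; the breakpoint
command and the check line are illustrative rather than taken from a real test.

::

   // RUN: %clang_host -g %s -o %t
   // RUN: %lldb %t -b -o 'breakpoint set -n main' | FileCheck %s

   // CHECK: Breakpoint 1: where = {{.*}}main

   int main(void) { return 0; }
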
Shell tests are generally small and fast to write because they require little
boilerplate.

``lldb-test`` is a relatively new addition to the test suite. It was the first
tool added specifically for testing. Since then it has been continuously
extended with new subcommands, improving our test coverage. Among other things
you can use it to query lldb for symbol files, object files and breakpoints.

Obviously shell tests are great for testing the command line driver itself or
the subcomponents already exposed by ``lldb-test``. But when it comes to LLDB's
vast functionality, most things can be tested both through the driver as well
as the Python API. For example, to test setting a breakpoint, you could do it
from the command line driver with ``b main`` or you could use the SB API and do
something like ``target.BreakpointCreateByName`` [#]_.

A good rule of thumb is to prefer shell tests when what is being tested is
relatively simple. Expressivity is limited compared to the API tests, which
means that you have to have a well-defined test scenario that you can easily
match with ``FileCheck``.

Another thing to consider are the binaries being debugged, which we call
inferiors. For shell tests, they have to be relatively simple. The
``dotest.py`` test framework has extensive support for complex build scenarios
and different variants, which is described in more detail below, while shell
tests are limited to single lines of shell commands with compiler and linker
invocations.

On the same topic, another interesting aspect of the shell tests is that you
can often get away with a broken or incomplete binary, whereas the API tests
almost always require a fully functional executable. This enables testing of
(some) aspects of handling binaries with non-native architectures or operating
systems.

Finally, the shell tests always run in batch mode.
You start with some input and the test verifies the output. The debugger can
be sensitive to its environment, such as the platform it runs on, and it can
be hard to express that the same test might behave slightly differently on
macOS and Linux. Additionally, the debugger is an interactive tool, and shell
tests provide no good way of testing interactive aspects, such as tab
completion.

API Tests
`````````

API tests are located under ``lldb/test/API``. They are run with
``dotest.py``. Tests are written in Python and test binaries (inferiors) are
compiled with Make. The majority of API tests are end-to-end tests that
compile programs from source, run them, and debug the processes.

As mentioned before, ``dotest.py`` is LLDB's testing framework. The
implementation is located under ``lldb/packages/Python/lldbsuite``. We have
several extensions and custom test primitives on top of what's offered by
`unittest2 <https://docs.python.org/2/library/unittest.html>`_. Those can be
found in
`lldbtest.py <https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/lldbtest.py>`_.

Below is the directory layout of the `example API test
<https://github.com/llvm/llvm-project/tree/main/lldb/test/API/sample_test>`_.
The test directory will always contain a Python file starting with ``Test``.
Most tests are structured as a binary being debugged, so there will be one or
more source files and a ``Makefile``.

::

   sample_test
   ├── Makefile
   ├── TestSampleTest.py
   └── main.c

Let's start with the Python test file. Every test is its own class and can
have one or more test methods that start with ``test_``. Many tests define
multiple test methods and share a bunch of common code.
For example, for a fictitious test that makes sure we can set breakpoints, we
might have one test method that ensures we can set a breakpoint by address,
one that sets a breakpoint by name, and another that sets the same breakpoint
by file and line number. The setup, teardown and everything else other than
setting the breakpoint could be shared.

Our testing framework also has a bunch of utilities that abstract common
operations, such as creating targets, setting breakpoints, etc. When code is
shared across tests, we extract it into a utility in ``lldbutil``. It's always
worth taking a look at `lldbutil
<https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/lldbutil.py>`_
to see if there's a utility to simplify some of the testing boilerplate.
Because we can't always audit every existing test, this is doubly true when
looking at an existing test for inspiration.

It's possible to skip or `XFAIL
<https://ftp.gnu.org/old-gnu/Manuals/dejagnu-1.3/html_node/dejagnu_6.html>`_
tests using decorators, and you'll see them a lot. The debugger can be
sensitive to things like the architecture, the host and target platform, the
compiler version, etc. LLDB comes with a range of predefined decorators for
these configurations.

::

   @expectedFailureAll(archs=["aarch64"], oslist=["linux"])

Another great thing about these decorators is that they're very easy to
extend; it's even possible to define a function in a test case that determines
whether the test should be run or not.

::

   @expectedFailure(checking_function_name)

In addition to providing a lot more flexibility when it comes to writing the
test, the API tests also allow for much more complex scenarios when it comes
to building inferiors. Every test has its own ``Makefile``, most of them only
a few lines long.
A shared ``Makefile`` (``Makefile.rules``) with about a thousand lines of
rules takes care of most if not all of the boilerplate, while individual
make files can be used to build more advanced tests.

Here's an example of a simple ``Makefile`` used by the example test.

::

   C_SOURCES := main.c
   CFLAGS_EXTRAS := -std=c99

   include Makefile.rules

Finding the right variables to set can be tricky. You can always take a look
at `Makefile.rules <https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/make/Makefile.rules>`_
but often it's easier to find an existing ``Makefile`` that does something
similar to what you want to do.

Another thing this enables is having different variants for the same test
case. By default, we run every test for all three debug info formats: once
with DWARF from the object files, once with gmodules, and finally with a dSYM
on macOS or split DWARF (DWO) on Linux. But there are many more things we can
test that are orthogonal to the test itself. On GreenDragon we have a matrix
bot that runs the test suite under different configurations, with older host
compilers and different DWARF versions.

As you can imagine, this quickly leads to a combinatorial explosion in the
number of variants. It's very tempting to add more variants because it's an
easy way to increase test coverage, but it doesn't scale. It's easy to set
up, but it increases the runtime of the tests and has a large ongoing cost.

The key takeaway is that the different variants don't obviate the need for
focused tests. Relying on the variants to test, say, DWARF5 is a really bad
idea. Instead you should write tests that check the specific DWARF5 feature,
and treat the variant coverage as a nice-to-have.
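
Tying these pieces together, a complete API test for the sample layout shown
earlier might look like the sketch below. The class name, the source comment
used to locate the breakpoint, and the ``exit_code`` variable are assumptions
for illustration, not part of the actual sample test.

::

   import lldb
   from lldbsuite.test import lldbutil
   from lldbsuite.test.lldbtest import TestBase


   class SampleTestCase(TestBase):
       def test_stop_and_inspect(self):
           # Build the inferior using the test's Makefile.
           self.build()
           # lldbutil hides the boilerplate: create a target, set a source
           # breakpoint, launch the process and stop at the breakpoint.
           target, process, thread, bkpt = lldbutil.run_to_source_breakpoint(
               self, "// break here", lldb.SBFileSpec("main.c"))
           # Check a local variable through the expression evaluator.
           self.expect_expr("exit_code", result_value="0")
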

In conclusion, you'll want to opt for an API test when testing the API itself
or when you need the expressivity, either for the test case itself or for the
program being debugged. The fact that the API tests work with different
variants means that more general tests should be API tests, so that they can
be run against the different variants.

Guidelines for API tests
^^^^^^^^^^^^^^^^^^^^^^^^

API tests are expected to be fast, reliable and maintainable. To achieve this
goal, API tests should conform to the following guidelines in addition to
normal good testing practices.

**Don't unnecessarily launch the test executable.**
  Launching a process and running to a breakpoint can often be the most
  expensive part of a test and should be avoided if possible. A large part
  of LLDB's functionality is available directly after creating an `SBTarget`
  of the test executable.

  The part of the SB API that can be tested with just a target includes
  everything that represents information about the executable and its
  debug information (e.g., `SBTarget`, `SBModule`, `SBSymbolContext`,
  `SBFunction`, `SBInstruction`, `SBCompileUnit`, etc.). For test executables
  written in languages with a type system that is mostly defined at compile
  time (e.g., C and C++) there is also usually no process necessary to test
  the `SBType`-related parts of the API. With those languages it's also
  possible to test `SBValue` by running expressions with
  `SBTarget.EvaluateExpression` or the `expect_expr` testing utility.

  Functionality that always requires a running process is everything that
  tests the `SBProcess`, `SBThread`, and `SBFrame` classes. The same is true
  for tests that exercise breakpoints, watchpoints and sanitizers. Languages
  such as Objective-C that have a dependency on a runtime environment also
  always require a running process.
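
As an illustration of the target-only pattern, a test along the following
lines never launches a process: it builds the inferior, creates a target, and
inspects type information through the SB API. The ``Point`` type name is a
hypothetical example, and the decorator skips the per-debug-info-format
variants so the test runs only once.

::

   import lldb
   from lldbsuite.test.decorators import no_debug_info_test
   from lldbsuite.test.lldbtest import TestBase


   class TypeLayoutTestCase(TestBase):
       @no_debug_info_test
       def test_point_type_without_process(self):
           self.build()
           # Creating a target is enough to inspect debug info; no
           # process is ever launched, which keeps the test fast.
           target = self.dbg.CreateTarget(self.getBuildArtifact("a.out"))
           point_type = target.FindFirstType("Point")
           self.assertTrue(point_type.IsValid())
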

**Don't unnecessarily include system headers in test sources.**
  Including external headers slows down the compilation of the test executable
  and makes reproducing test failures on other operating systems or
  configurations harder.

**Avoid specifying test-specific compiler flags when including system headers.**
  If a test requires including a system header (e.g., a test for a libc++
  formatter includes a libc++ header), try to avoid specifying custom compiler
  flags if possible. Certain debug information formats such as ``gmodules``
  use a cache that is shared between all API tests and that contains
  precompiled system headers. If you add or remove a specific compiler flag
  in your test (e.g., adding ``-DFOO`` to the ``Makefile`` or ``self.build``
  arguments), then the test will not use the shared precompiled header cache
  and will expensively recompile all system headers from scratch. If you
  depend on a specific compiler flag for the test, you can avoid this issue
  by either removing all system header includes or decorating the test
  function with ``@no_debug_info_test`` (which will avoid running all debug
  information variants including ``gmodules``).

**Test programs should be kept simple.**
  Test executables should do the minimum amount of work to bring the process
  into the state that is required for the test. Simulating a 'real' program
  that actually tries to do some useful task rarely helps with catching bugs
  and makes the test much harder to debug and maintain. The test programs
  should always be deterministic (i.e., do not generate and check against
  random test values).

**Identifiers in tests should be simple and descriptive.**
  Often test programs need to declare functions and classes which require
  choosing some form of identifier for them. These identifiers should always
  either be kept simple for small tests (e.g., ``A``, ``B``, ...)
  or have some descriptive name (e.g., ``ClassWithTailPadding``,
  ``inlined_func``, ...). Never choose identifiers that are already used
  anywhere else in LLVM or other programs (e.g., don't name a class
  ``VirtualFileSystem``, a function ``llvm_unreachable``, or a namespace
  ``rapidxml``) as this will mislead people ``grep``'ing the LLVM repository
  for those strings.

**Prefer LLDB testing utilities over directly working with the SB API.**
  The ``lldbutil`` module and the ``TestBase`` class come with a large number
  of utility functions that can do common test setup tasks (e.g., starting a
  test executable and running the process to a breakpoint). Using these
  functions not only keeps the test shorter and free of duplicated code, but
  they also follow best test suite practices and usually give much clearer
  error messages if something goes wrong. The test utilities also contain
  custom asserts and checks that should preferably be used (e.g.,
  ``self.assertSuccess``).

**Prefer calling the SB API over checking command output.**
  Avoid writing your tests on top of ``self.expect(...)`` calls that check
  the output of LLDB commands and instead try calling into the SB API. Relying
  on LLDB commands makes changing (and improving) the output/syntax of
  commands harder, and the resulting tests are often prone to accepting
  incorrect test results. In particular, improved error messages that contain
  more information might cause these ``self.expect`` calls to unintentionally
  find the required ``substrs``.
  For example, the following ``self.expect`` check will unexpectedly pass if
  it's run as the first expression in a test:

::

   self.expect("expr 2 + 2", substrs=["0"])

When running the same command in LLDB, the reason for the unexpected success
is that '0' is found in the name of the implicitly created result variable:

::

   (lldb) expr 2 + 2
   (int) $0 = 4
          ^ The '0' substring is found here.

A better way to write the test above would be to use LLDB's testing function
``expect_expr``, which will only pass if the expression produces a value of 4:

::

   self.expect_expr("2 + 2", result_value="4")

**Prefer using specific asserts over the generic assertTrue/assertFalse.**
  The `self.assertTrue`/`self.assertFalse` functions should always be your
  last option as they give non-descriptive error messages. The test class has
  several expressive asserts such as `self.assertIn` that automatically
  generate an explanation of how the received values differ from the expected
  ones. Check the documentation of Python's `unittest` module to see what
  asserts are available. If you can't find a specific assert that fits your
  needs and you fall back to a generic assert, make sure you put useful
  information into the assert's `msg` argument that helps explain the failure.

::

   # Bad. Will print a generic error such as 'False is not True'.
   self.assertTrue(expected_string in list_of_results)
   # Good. Will print expected_string and the contents of list_of_results.
   self.assertIn(expected_string, list_of_results)

Running The Tests
-----------------

.. note::

   On Windows any invocations of python should be replaced with python_d, the
   debug interpreter, when running the test suite against a debug version of
   LLDB.

.. note::

   On NetBSD you must export ``LD_LIBRARY_PATH=$PWD/lib`` in your environment.
   This is due to the lack of the ``$ORIGIN`` linker feature.

Running the Full Test Suite
```````````````````````````

The easiest way to run the LLDB test suite is to use the ``check-lldb`` build
target.

By default, the ``check-lldb`` target builds the test programs with the same
compiler that was used to build LLDB. To build the tests with a different
compiler, you can set the ``LLDB_TEST_COMPILER`` CMake variable.

It is possible to customize the architecture of the test binaries and the
compiler used by appending ``-A`` and ``-C`` options respectively to the
CMake variable ``LLDB_TEST_USER_ARGS``. For example, to test LLDB against
32-bit binaries built with a custom version of clang, do:

::

   $ cmake -DLLDB_TEST_USER_ARGS="-A i386 -C /path/to/custom/clang" -G Ninja
   $ ninja check-lldb

Note that multiple ``-A`` and ``-C`` flags can be specified to
``LLDB_TEST_USER_ARGS``.

Running a Single Test Suite
```````````````````````````

Each test suite can be run separately, similar to running the whole test
suite with ``check-lldb``.

* Use ``check-lldb-unit`` to run just the unit tests.
* Use ``check-lldb-api`` to run just the SB API tests.
* Use ``check-lldb-shell`` to run just the shell tests.

You can run specific subdirectories by appending the directory name to the
target. For example, to run all the tests in ``ObjectFile``, you can use the
target ``check-lldb-shell-objectfile``. However, because the unit tests and
API tests don't actually live under ``lldb/test``, this convenience is only
available for the shell tests.

Running a Single Test
`````````````````````

The recommended way to run a single test is by invoking the lit driver with a
filter. This ensures that the test is run with the same configuration as when
run as part of a test suite.

::

   $ ./bin/llvm-lit -sv tools/lldb/test --filter <test>


Because lit automatically scans a directory for tests, it's also possible to
pass a subdirectory to run a specific subset of the tests.

::

   $ ./bin/llvm-lit -sv tools/lldb/test/Shell/Commands/CommandScriptImmediateOutput


For the SB API tests it is possible to forward arguments to ``dotest.py`` by
passing ``--param`` to lit and setting a value for ``dotest-args``.

::

   $ ./bin/llvm-lit -sv tools/lldb/test --param dotest-args='-C gcc'


Below is an overview of running individual tests in the unit and API test
suites without going through the lit driver.

Running a Specific Test or Set of Tests: API Tests
``````````````````````````````````````````````````

In addition to running all the LLDB test suites with the ``check-lldb`` CMake
target above, it is possible to run individual LLDB tests. If you have a
CMake build you can use the ``lldb-dotest`` binary, which is a wrapper around
``dotest.py`` that passes all the arguments configured by CMake.

Alternatively, you can use ``dotest.py`` directly, if you want to run a test
one-off with a different configuration.

For example, to run the test cases defined in TestInferiorCrashing.py, run:

::

   $ ./bin/lldb-dotest -p TestInferiorCrashing.py

::

   $ cd $lldb/test
   $ python dotest.py --executable <path-to-lldb> -p TestInferiorCrashing.py ../packages/Python/lldbsuite/test

If the test is not specified by name (e.g. if you leave the ``-p`` argument
off), all tests in that directory will be executed:


::

   $ ./bin/lldb-dotest functionalities/data-formatter

::

   $ python dotest.py --executable <path-to-lldb> functionalities/data-formatter

Many more options are available.
To see a list of all of them, run:

::

   $ python dotest.py -h


Running a Specific Test or Set of Tests: Unit Tests
```````````````````````````````````````````````````

The unit tests are simple executables, located in the build directory under
``tools/lldb/unittests``.

To run them, just run the test binary. For example, to run all the Host
tests:

::

   $ ./tools/lldb/unittests/Host/HostTests


To run a specific test, pass a filter. For example:

::

   $ ./tools/lldb/unittests/Host/HostTests --gtest_filter=SocketTest.DomainListenConnectAccept


Running the Test Suite Remotely
```````````````````````````````

Running the test suite remotely is similar to the process of running a local
test suite, but there are two things to keep in mind:

1. You must have the lldb-server running on the remote system, ready to
   accept multiple connections. For more information on how to set up remote
   debugging, see the Remote debugging page.
2. You must tell the test suite how to connect to the remote system. This is
   achieved using the ``--platform-name``, ``--platform-url`` and
   ``--platform-working-dir`` parameters to ``dotest.py``. These parameters
   correspond to the platform select and platform connect LLDB commands. You
   will usually also need to specify the compiler and architecture for the
   remote system.

Currently, running the remote test suite is supported only with ``dotest.py``
(or dosep.py with a single thread), but we expect this issue to be addressed
in the near future.

Running tests in QEMU System Emulation Environment
``````````````````````````````````````````````````

QEMU can be used to test LLDB in an emulation environment in the absence of
actual hardware.
The `QEMU based testing <https://lldb.llvm.org/use/qemu-testing.html>`_
page describes how to set up an emulation environment using the QEMU helper
scripts found under ``llvm-project/lldb/scripts/lldb-test-qemu``. These
scripts currently work with Arm or AArch64, but support for other
architectures can be added easily.

Debugging Test Failures
-----------------------

On non-Windows platforms, you can use the ``-d`` option to ``dotest.py``,
which will cause the script to print out the pid of the test and wait for a
while until a debugger is attached. Then run ``lldb -p <pid>`` to attach.

To instead debug a test's Python source, edit the test and insert
``import pdb; pdb.set_trace()`` at the point where you want to start
debugging. In addition to pdb's debugging facilities, lldb commands can be
executed with the help of a pdb alias, for example ``lldb bt`` and
``lldb v some_var``. Add this line to your ``~/.pdbrc``:

::

   alias lldb self.dbg.HandleCommand("%*")

Debugging Test Failures on Windows
``````````````````````````````````

On Windows, it is strongly recommended to use Python Tools for Visual Studio
for debugging test failures. It can seamlessly step between native and
managed code, which is very helpful when you need to step through the test
itself, and then into the LLDB code that backs the operations the test is
performing.

A quick guide to getting started with PTVS is as follows:

#. Install PTVS
#. Create a Visual Studio Project for the Python code.

   #. Go to File -> New -> Project -> Python -> From Existing Python Code.
   #. Choose llvm/tools/lldb as the directory containing the Python code.
   #. When asked where to save the .pyproj file, choose the folder
      ``llvm/tools/lldb/pyproj``. This is a special folder that is ignored by
      the ``.gitignore`` file, since it is not checked in.

#. Set test/dotest.py as the startup file
#. Make sure there is a Python Environment installed for your distribution.
   For example, if you installed Python to ``C:\Python35``, PTVS needs to
   know that this is the interpreter you want to use for running the test
   suite.

   #. Go to Tools -> Options -> Python Tools -> Environment Options
   #. Click Add Environment, and enter Python 3.5 Debug for the name. Fill
      out the values correctly.

#. Configure the project to use this debug interpreter.

   #. Right click the Project node in Solution Explorer.
   #. In the General tab, make sure Python 3.5 Debug is the selected
      Interpreter.
   #. In Debug/Search Paths, enter the path to your ninja/lib/site-packages
      directory.
   #. In Debug/Environment Variables, enter
      ``VCINSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\``.
   #. If you want to enable mixed mode debugging, check Enable native code
      debugging (this slows down debugging, so enable it only on an as-needed
      basis).

#. Set the command line for the test suite to run.

   #. Right click the project in Solution Explorer and choose the Debug tab.
   #. Enter the arguments to dotest.py.
   #. Example command options:

::

   --arch=i686
   # Path to debug lldb.exe
   --executable D:/src/llvmbuild/ninja/bin/lldb.exe
   # Directory to store log files
   -s D:/src/llvmbuild/ninja/lldb-test-traces
   -u CXXFLAGS -u CFLAGS
   # If a test crashes, show JIT debugging dialog.
   --enable-crash-dialog
   # Path to release clang.exe
   -C d:\src\llvmbuild\ninja_release\bin\clang.exe
   # Path to the particular test you want to debug.
   -p TestPaths.py
   # Root of test tree
   D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test

::

   --arch=i686 --executable D:/src/llvmbuild/ninja/bin/lldb.exe -s D:/src/llvmbuild/ninja/lldb-test-traces -u CXXFLAGS -u CFLAGS --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe -p TestPaths.py D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test --no-multiprocess

.. [#] `https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName <https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName>`_