Testing
=======

.. contents::
   :local:

Test Suite Structure
--------------------

The LLDB test suite consists of three different kinds of test:

* **Unit tests**: written in C++ using the googletest unit testing library.
* **Shell tests**: Integration tests that test the debugger through the command
  line. These tests interact with the debugger either through the command line
  driver or through ``lldb-test`` which is a tool that exposes the internal
  data structures in an easy-to-parse way for testing. Most people will know
  these as *lit tests* in LLVM, although lit is the test driver and ShellTest
  is the test format that uses ``RUN:`` lines. `FileCheck
  <https://llvm.org/docs/CommandGuide/FileCheck.html>`_ is used to verify
  the output.
* **API tests**: Integration tests that interact with the debugger through the
  SB API. These are written in Python and use LLDB's ``dotest.py`` testing
  framework on top of Python's `unittest2
  <https://docs.python.org/2/library/unittest.html>`_.

All three test suites use ``lit`` (`LLVM Integrated Tester
<https://llvm.org/docs/CommandGuide/lit.html>`_ ) as the test driver. The test
suites can be run as a whole or separately.


Unit Tests
``````````

Unit tests are located under ``lldb/unittests``. If it's possible to test
something in isolation or as a single unit, you should make it a unit test.

Often you need instances of the core objects such as a debugger, target or
process in order to test something meaningful. We already have a handful of
tests that have the necessary boilerplate, but this is something we could
abstract away and make more user-friendly.

Shell Tests
```````````

Shell tests are located under ``lldb/test/Shell``. These tests are generally
built around checking the output of ``lldb`` (the command line driver) or
``lldb-test`` using ``FileCheck``. Shell tests are generally small and fast to
write because they require little boilerplate.

``lldb-test`` is a relatively new addition to the test suite. It was the first
tool added specifically for testing. Since then it has been continuously
extended with new subcommands, improving our test coverage. Among other things
you can use it to query lldb about symbol files, object files and breakpoints.

Obviously shell tests are great for testing the command line driver itself or
the subcomponents already exposed by lldb-test. But when it comes to LLDB's
vast functionality, most things can be tested both through the driver as well
as the Python API. For example, to test setting a breakpoint, you could do it
from the command line driver with ``b main`` or you could use the SB API and do
something like ``target.BreakpointCreateByName`` [#]_.

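The SB API version of that breakpoint can be sketched in Python. This is a
rough illustration, not a recipe: it assumes an LLDB build whose Python
bindings are importable, and simply returns ``None`` when they are not.

```python
def set_breakpoint_on_main(executable="a.out"):
    """Rough SB API equivalent of the command-line ``b main``."""
    try:
        import lldb  # provided by an LLDB build, not installable via pip
    except ImportError:
        return None  # no LLDB Python bindings on sys.path
    debugger = lldb.SBDebugger.Create()
    target = debugger.CreateTarget(executable)
    if not target:  # SB objects are falsy when invalid
        return None
    return target.BreakpointCreateByName("main")
```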
A good rule of thumb is to prefer shell tests when what is being tested is
relatively simple. Expressivity is limited compared to the API tests, which
means that you have to have a well-defined test scenario that you can easily
match with ``FileCheck``.

Another thing to consider is the binaries being debugged, which we call
inferiors. For shell tests, they have to be relatively simple. The
``dotest.py`` test framework has extensive support for complex build scenarios
and different variants, which is described in more detail below, while shell
tests are limited to single lines of shell commands with compiler and linker
invocations.

On the same topic, another interesting aspect of the shell tests is that there
you can often get away with a broken or incomplete binary, whereas the API
tests almost always require a fully functional executable. This enables testing
of (some) aspects of handling binaries with non-native architectures or
operating systems.

Finally, the shell tests always run in batch mode. You start with some input
and the test verifies the output. The debugger can be sensitive to its
environment, such as the platform it runs on, and it can be hard to express
that the same test might behave slightly differently on macOS and Linux.
Additionally, the debugger is an interactive tool, and shell tests provide
no good way of testing interactive aspects, such as tab completion.

API Tests
`````````

API tests are located under ``lldb/test/API``. They are run with
``dotest.py``. Tests are written in Python and test binaries (inferiors) are
compiled with Make. The majority of API tests are end-to-end tests that compile
programs from source, run them, and debug the processes.

As mentioned before, ``dotest.py`` is LLDB's testing framework. The
implementation is located under ``lldb/packages/Python/lldbsuite``. We have
several extensions and custom test primitives on top of what's offered by
`unittest2 <https://docs.python.org/2/library/unittest.html>`_. Those can be
found in
`lldbtest.py <https://github.com/llvm/llvm-project/blob/master/lldb/packages/Python/lldbsuite/test/lldbtest.py>`_.

Below is the directory layout of the `example API test
<https://github.com/llvm/llvm-project/tree/master/lldb/test/API/sample_test>`_.
The test directory will always contain a python file, starting with ``Test``.
Most of the tests are structured as a binary being debugged, so there will be
one or more source files and a ``Makefile``.

::

  sample_test
  ├── Makefile
  ├── TestSampleTest.py
  └── main.c

Let's start with the Python test file. Every test is its own class and can have
one or more test methods that start with ``test_``. Many tests define
multiple test methods and share a bunch of common code. For example, for a
hypothetical test that makes sure we can set breakpoints we might have one test
method that ensures we can set a breakpoint by address, one that sets a
breakpoint by name and another that sets the same breakpoint by file and line
number. The setup, teardown and everything else other than setting the
breakpoint could be shared.

Our testing framework also has a bunch of utilities that abstract common
operations, such as creating targets, setting breakpoints etc. When code is
shared across tests, we extract it into a utility in ``lldbutil``. It's always
worth taking a look at `lldbutil
<https://github.com/llvm/llvm-project/blob/master/lldb/packages/Python/lldbsuite/test/lldbutil.py>`_
to see if there's a utility to simplify some of the testing boilerplate.
Because we can't always audit every existing test, this is doubly true when
looking at an existing test for inspiration.

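That shape can be sketched with plain ``unittest``. Real LLDB tests derive
from ``lldbtest.TestBase`` and drive the SB API; the class, method and
attribute names below are made up for illustration only.

```python
import unittest

class TestBreakpoints(unittest.TestCase):
    """One class, several test_ methods, shared setup."""

    def setUp(self):
        # Runs before every test method; a real LLDB test would build
        # the inferior and create a target here.
        self.symbol = "main"
        self.location = ("main.c", 2)

    def test_breakpoint_by_name(self):
        # Real code would call target.BreakpointCreateByName(self.symbol).
        self.assertEqual(self.symbol, "main")

    def test_breakpoint_by_location(self):
        # Real code would call target.BreakpointCreateByLocation(*self.location).
        self.assertEqual(self.location[0], "main.c")
```

``dotest.py`` discovers the methods the same way ``unittest`` does: anything
whose name starts with ``test_`` becomes its own test.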
It's possible to skip or `XFAIL
<https://ftp.gnu.org/old-gnu/Manuals/dejagnu-1.3/html_node/dejagnu_6.html>`_
tests using decorators. You'll see them a lot. The debugger can be sensitive to
things like the architecture, the host and target platform, the compiler
version etc. LLDB comes with a range of predefined decorators for these
configurations.

::

  @expectedFailureAll(archs=["aarch64"], oslist=["linux"])

Another great thing about these decorators is that they're very easy to extend;
it's even possible to define a function in a test case that determines whether
the test should be run or not.

::

  @expectedFailure(checking_function_name)

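A checking function is just a callable that the decorator invokes to decide
the test's fate. The sketch below approximates the idea using only the
standard library; LLDB's real decorators live in ``lldbsuite.test.decorators``,
and ``is_expected_to_fail`` and ``maybe_xfail`` are invented names.

```python
import platform
import unittest

def is_expected_to_fail():
    # Any predicate over the current configuration works here.
    return platform.system() == "ImaginaryOS"

def maybe_xfail(check):
    """Apply unittest.expectedFailure only when check() is true."""
    def decorator(func):
        return unittest.expectedFailure(func) if check() else func
    return decorator

class TestConditional(unittest.TestCase):
    @maybe_xfail(is_expected_to_fail)
    def test_feature(self):
        self.assertTrue(True)
```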
In addition to providing a lot more flexibility when it comes to writing the
test, the API tests also allow for much more complex scenarios when it comes to
building inferiors. Every test has its own ``Makefile``, most of them only a
few lines long. A shared ``Makefile`` (``Makefile.rules``) with about a
thousand lines of rules takes care of most if not all of the boilerplate,
while individual make files can be used to build more advanced tests.

Here's an example of a simple ``Makefile`` used by the example test.

::

  C_SOURCES := main.c
  CFLAGS_EXTRAS := -std=c99

  include Makefile.rules

Finding the right variables to set can be tricky. You can always take a look at
`Makefile.rules <https://github.com/llvm/llvm-project/blob/master/lldb/packages/Python/lldbsuite/test/make/Makefile.rules>`_
but often it's easier to find an existing ``Makefile`` that does something
similar to what you want to do.

Another thing this enables is having different variants for the same test
case. By default, we run every test for all 3 debug info formats, so once with
DWARF from the object files, once with gmodules and finally with a dSYM on
macOS or split DWARF (DWO) on Linux. But there are many more things we can test
that are orthogonal to the test itself. On GreenDragon we have a matrix bot
that runs the test suite under different configurations, with older host
compilers and different DWARF versions.

As you can imagine, this quickly leads to a combinatorial explosion in the
number of variants. It's very tempting to add more variants because it's an
easy way to increase test coverage. It doesn't scale. It's easy to set up, but
it increases the runtime of the tests and has a large ongoing cost.

The key takeaway is that the different variants don't obviate the need for
focused tests. Relying on them to test, say, DWARF5 is a really bad idea.
Instead you should write tests that check the specific DWARF5 feature, and
treat the variant as a nice-to-have.

In conclusion, you'll want to opt for an API test to test the API itself or
when you need the expressivity, either for the test case itself or for the
program being debugged. The fact that the API tests work with different
variants means that more general tests should be API tests, so that they can be
run against the different variants.

Running The Tests
-----------------

.. note::

   On Windows any invocations of python should be replaced with python_d, the
   debug interpreter, when running the test suite against a debug version of
   LLDB.

.. note::

   On NetBSD you must export ``LD_LIBRARY_PATH=$PWD/lib`` in your environment.
   This is due to lack of the ``$ORIGIN`` linker feature.

Running the Full Test Suite
```````````````````````````

The easiest way to run the LLDB test suite is to use the ``check-lldb`` build
target.

By default, the ``check-lldb`` target builds the test programs with the same
compiler that was used to build LLDB. To build the tests with a different
compiler, you can set the ``LLDB_TEST_COMPILER`` CMake variable.

It is possible to customize the architecture of the test binaries and the
compiler used by appending the ``-A`` and ``-C`` options respectively to the
CMake variable ``LLDB_TEST_USER_ARGS``. For example, to test LLDB against
32-bit binaries built with a custom version of clang, do:

::

   $ cmake -DLLDB_TEST_USER_ARGS="-A i386 -C /path/to/custom/clang" -G Ninja
   $ ninja check-lldb

Note that multiple ``-A`` and ``-C`` flags can be specified to
``LLDB_TEST_USER_ARGS``.

Running a Single Test Suite
```````````````````````````

Each test suite can be run separately, similar to running the whole test suite
with ``check-lldb``.

* Use ``check-lldb-unit`` to run just the unit tests.
* Use ``check-lldb-api`` to run just the SB API tests.
* Use ``check-lldb-shell`` to run just the shell tests.

You can run specific subdirectories by appending the directory name to the
target. For example, to run all the tests in ``ObjectFile``, you can use the
target ``check-lldb-shell-objectfile``. However, because the unit tests and API
tests don't actually live under ``lldb/test``, this convenience is only
available for the shell tests.

Running a Single Test
`````````````````````

The recommended way to run a single test is by invoking the lit driver with a
filter. This ensures that the test is run with the same configuration as when
run as part of a test suite.

::

   $ ./bin/llvm-lit -sv tools/lldb/test --filter <test>


Because lit automatically scans a directory for tests, it's also possible to
pass a subdirectory to run a specific subset of the tests.

::

   $ ./bin/llvm-lit -sv tools/lldb/test/Shell/Commands/CommandScriptImmediateOutput


For the SB API tests it is possible to forward arguments to ``dotest.py`` by
passing ``--param`` to lit and setting a value for ``dotest-args``.

::

   $ ./bin/llvm-lit -sv tools/lldb/test --param dotest-args='-C gcc'


Below is an overview of running individual tests in the unit and API test
suites without going through the lit driver.

Running a Specific Test or Set of Tests: API Tests
``````````````````````````````````````````````````

In addition to running all the LLDB test suites with the ``check-lldb`` CMake
target above, it is possible to run individual LLDB tests. If you have a CMake
build you can use the ``lldb-dotest`` binary, which is a wrapper around
``dotest.py`` that passes all the arguments configured by CMake.

Alternatively, you can use ``dotest.py`` directly, if you want to run a test
one-off with a different configuration.

For example, to run the test cases defined in TestInferiorCrashing.py, run:

::

   $ ./bin/lldb-dotest -p TestInferiorCrashing.py

::

   $ cd $lldb/test
   $ python dotest.py --executable <path-to-lldb> -p TestInferiorCrashing.py ../packages/Python/lldbsuite/test

If the test is not specified by name (e.g. if you leave the ``-p`` argument
off), all tests in that directory will be executed:


::

   $ ./bin/lldb-dotest functionalities/data-formatter

::

   $ python dotest.py --executable <path-to-lldb> functionalities/data-formatter

Many more options are available. To see a list of all of them, run:

::

   $ python dotest.py -h

Running a Specific Test or Set of Tests: Unit Tests
```````````````````````````````````````````````````

The unit tests are simple executables, located in the build directory under
``tools/lldb/unittests``.

To run them, just run the test binary, for example, to run all the Host tests:

::

   $ ./tools/lldb/unittests/Host/HostTests


To run a specific test, pass a filter, for example:

::

   $ ./tools/lldb/unittests/Host/HostTests --gtest_filter=SocketTest.DomainListenConnectAccept

Running the Test Suite Remotely
```````````````````````````````

Running the test suite remotely is similar to the process of running a local
test suite, but there are two things to keep in mind:

1. You must have the lldb-server running on the remote system, ready to accept
   multiple connections. For more information on how to set up remote
   debugging see the Remote debugging page.
2. You must tell the test suite how to connect to the remote system. This is
   achieved using the ``--platform-name``, ``--platform-url`` and
   ``--platform-working-dir`` parameters to ``dotest.py``. These parameters
   correspond to the platform select and platform connect LLDB commands. You
   will usually also need to specify the compiler and architecture for the
   remote system.

Currently, running the remote test suite is supported only with ``dotest.py`` (or
dosep.py with a single thread), but we expect this issue to be addressed in the
near future.

Debugging Test Failures
-----------------------

On non-Windows platforms, you can use the ``-d`` option to ``dotest.py`` which
will cause the script to wait for a while until a debugger is attached.

Debugging Test Failures on Windows
``````````````````````````````````

On Windows, it is strongly recommended to use Python Tools for Visual Studio
for debugging test failures. It can seamlessly step between native and managed
code, which is very helpful when you need to step through the test itself, and
then into the LLDB code that backs the operations the test is performing.

A quick guide to getting started with PTVS is as follows:

#. Install PTVS
#. Create a Visual Studio Project for the Python code.
    #. Go to File -> New -> Project -> Python -> From Existing Python Code.
    #. Choose llvm/tools/lldb as the directory containing the Python code.
    #. When asked where to save the .pyproj file, choose the folder ``llvm/tools/lldb/pyproj``. This is a special folder that is ignored by the ``.gitignore`` file, since it is not checked in.
#. Set test/dotest.py as the startup file
#. Make sure there is a Python Environment installed for your distribution. For example, if you installed Python to ``C:\Python35``, PTVS needs to know that this is the interpreter you want to use for running the test suite.
    #. Go to Tools -> Options -> Python Tools -> Environment Options
    #. Click Add Environment, and enter Python 3.5 Debug for the name. Fill out the values correctly.
#. Configure the project to use this debug interpreter.
    #. Right click the Project node in Solution Explorer.
    #. In the General tab, make sure Python 3.5 Debug is the selected Interpreter.
    #. In Debug/Search Paths, enter the path to your ninja/lib/site-packages directory.
    #. In Debug/Environment Variables, enter ``VCINSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\``.
    #. If you want to enable mixed mode debugging, check Enable native code debugging (this slows down debugging, so enable it only on an as-needed basis).
#. Set the command line for the test suite to run.
    #. Right click the project in Solution Explorer and choose the Debug tab.
    #. Enter the arguments to dotest.py.
    #. Example command options:

::

   --arch=i686
   # Path to debug lldb.exe
   --executable D:/src/llvmbuild/ninja/bin/lldb.exe
   # Directory to store log files
   -s D:/src/llvmbuild/ninja/lldb-test-traces
   -u CXXFLAGS -u CFLAGS
   # If a test crashes, show JIT debugging dialog.
   --enable-crash-dialog
   # Path to release clang.exe
   -C d:\src\llvmbuild\ninja_release\bin\clang.exe
   # Path to the particular test you want to debug.
   -p TestPaths.py
   # Root of test tree
   D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test

::

   --arch=i686 --executable D:/src/llvmbuild/ninja/bin/lldb.exe -s D:/src/llvmbuild/ninja/lldb-test-traces -u CXXFLAGS -u CFLAGS --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe -p TestPaths.py D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test --no-multiprocess

.. [#] `https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName <https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName>`_