.. SPDX-License-Identifier: GPL-2.0

============================
Tips For Running KUnit Tests
============================

Using ``kunit.py run`` ("kunit tool")
=====================================

Running from any directory
--------------------------

It can be handy to create a bash function like:

.. code-block:: bash

	function run_kunit() {
	  ( cd "$(git rev-parse --show-toplevel)" && ./tools/testing/kunit/kunit.py run "$@" )
	}

.. note::
	Early versions of ``kunit.py`` (before 5.6) didn't work unless run from
	the kernel root, hence the use of a subshell and ``cd``.

Running a subset of tests
-------------------------

``kunit.py run`` accepts an optional glob argument to filter tests. The format
is ``"<suite_glob>[.test_glob]"``.
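
Conceptually, the glob splits on the first ``.`` into a suite glob and a test
glob, each matched shell-style. The Python sketch below is an illustration
only; ``matches_filter`` is a hypothetical helper, not part of ``kunit.py``
(the real filtering happens inside the kernel):

.. code-block:: python

	from fnmatch import fnmatch

	def matches_filter(filter_glob, suite, test):
		"""Sketch: does "<suite_glob>[.test_glob]" select this test?"""
		if '.' in filter_glob:
			suite_glob, test_glob = filter_glob.split('.', 1)
		else:
			# No test part given: every test in matching suites runs.
			suite_glob, test_glob = filter_glob, '*'
		return fnmatch(suite, suite_glob) and fnmatch(test, test_glob)

Under this model, ``'sysctl*'`` selects every test in suites starting with
"sysctl", while ``'sysctl*.*write*'`` narrows that to tests whose names
contain "write".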

Say we wanted to run the sysctl tests; we could do so via:

.. code-block:: bash

	$ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
	$ ./tools/testing/kunit/kunit.py run 'sysctl*'

We can filter down to just the "write" tests via:

.. code-block:: bash

	$ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
	$ ./tools/testing/kunit/kunit.py run 'sysctl*.*write*'

We're paying the cost of building more tests than we need this way, but it's
easier than fiddling with ``.kunitconfig`` files or commenting out
``kunit_suite`` definitions.

However, if we wanted to define a set of tests in a less ad hoc way, the next
tip is useful.

Defining a set of tests
-----------------------

``kunit.py run`` (along with ``build``, and ``config``) supports a
``--kunitconfig`` flag. So if you have a set of tests that you want to run on a
regular basis (especially if they have other dependencies), you can create a
specific ``.kunitconfig`` for them.
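
A ``.kunitconfig`` is just a kconfig fragment: ``CONFIG_KUNIT=y`` plus
whatever options the tests depend on. For instance, a minimal (hypothetical)
fragment enabling only the example test might contain:

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=y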

E.g. kunit has one for its tests:

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit/.kunitconfig

Alternatively, if you're following the convention of naming your
file ``.kunitconfig``, you can just pass in the dir, e.g.

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit

.. note::
	This is a relatively new feature (5.12+) so we don't have any
	conventions yet about which files should be checked in versus just
	kept around locally. It's up to you and your maintainer to decide if a
	config is useful enough to submit (and therefore have to maintain).

.. note::
	Having ``.kunitconfig`` fragments in a parent and child directory is
	iffy. There's discussion about adding an "import" statement in these
	files to make it possible to have a top-level config run tests from all
	child directories. But that would mean ``.kunitconfig`` files are no
	longer just simple .config fragments.

	One alternative would be to have the kunit tool recursively combine
	configs automagically, but tests could theoretically depend on
	incompatible options, so handling that would be tricky.

Setting kernel commandline parameters
-------------------------------------

You can use ``--kernel_args`` to pass arbitrary kernel arguments, e.g.

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --kernel_args=param=42 --kernel_args=param2=false


Generating code coverage reports under UML
------------------------------------------

.. note::
	TODO(brendanhiggins@google.com): There are various issues with UML and
	versions of gcc 7 and up. You're likely to run into missing ``.gcda``
	files or compile errors.

This is different from the "normal" way of getting coverage information that is
documented in Documentation/dev-tools/gcov.rst.

Instead of enabling ``CONFIG_GCOV_KERNEL=y``, we can set these options:

.. code-block:: none

	CONFIG_DEBUG_KERNEL=y
	CONFIG_DEBUG_INFO=y
	CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
	CONFIG_GCOV=y


Putting it together into a copy-pastable sequence of commands:

.. code-block:: bash

	# Append coverage options to the current config
	$ ./tools/testing/kunit/kunit.py run --kunitconfig=.kunit/ --kunitconfig=tools/testing/kunit/configs/coverage_uml.config
	# Extract the coverage information from the build dir (.kunit/)
	$ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/

	# From here on, it's the same process as with CONFIG_GCOV_KERNEL=y
	# E.g. can generate an HTML report in a tmp dir like so:
	$ genhtml -o /tmp/coverage_html coverage.info


If your installed version of gcc doesn't work, you can tweak the steps:

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --make_options=CC=/usr/bin/gcc-6
	$ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/ --gcov-tool=/usr/bin/gcov-6


Running tests manually
======================

Running tests without using ``kunit.py run`` is also an important use case.
Currently it's your only option if you want to test on architectures other than
UML.

As running the tests under UML is fairly straightforward (configure and compile
the kernel, run the ``./linux`` binary), this section will focus on testing
non-UML architectures.


Running built-in tests
----------------------

When tests are built in (``=y``), they will run as part of boot and print
results to dmesg in TAP format. So you just need to add your tests to your
``.config``, then build and boot your kernel as normal.

For example, if we compiled our kernel with:

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=y

Then we'd see output like this in dmesg signaling the test ran and passed:

.. code-block:: none

	TAP version 14
	1..1
	    # Subtest: example
	    1..1
	    # example_simple_test: initializing
	    ok 1 - example_simple_test
	ok 1 - example
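
Such TAP output can be checked mechanically. As a toy illustration (this is
not ``kunit.py``'s actual parser, which also handles plans, nesting, and
diagnostics), one could tally the result lines like so:

.. code-block:: python

	import re

	def tally_tap(output):
		"""Count passing and failing TAP result lines (toy example)."""
		passed = failed = 0
		for line in output.splitlines():
			line = line.strip()
			if re.match(r'not ok \d+', line):
				failed += 1
			elif re.match(r'ok \d+', line):
				passed += 1
		return passed, failed

	dmesg_excerpt = """\
	TAP version 14
	1..1
	    # Subtest: example
	    1..1
	    # example_simple_test: initializing
	    ok 1 - example_simple_test
	ok 1 - example
	"""

Running ``tally_tap(dmesg_excerpt)`` on the excerpt above yields ``(2, 0)``:
both the subtest result and the suite-level result passed.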

Running tests as modules
------------------------

Depending on the tests, you can build them as loadable modules.

For example, we'd change the config options from before to

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=m

Then after booting into our kernel, we can run the test via

.. code-block:: none

	$ modprobe kunit-example-test

This will then cause it to print TAP output to the kernel log.

.. note::
	The ``modprobe`` will *not* have a non-zero exit code if any test
	failed (as of 5.13). But ``kunit.py parse`` would; see below.

.. note::
	You can set ``CONFIG_KUNIT=m`` as well, however, some features will not
	work and thus some tests might break. Ideally tests would specify they
	depend on ``KUNIT=y`` in their ``Kconfig`` files, but this is an edge
	case most test authors won't think about.
	As of 5.13, the only difference is that ``current->kunit_test`` will
	not exist.

Pretty-printing results
-----------------------

You can use ``kunit.py parse`` to parse dmesg for test output and print out
results in the same familiar format that ``kunit.py run`` does.

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py parse /var/log/dmesg


Retrieving per suite results
----------------------------

Regardless of how you're running your tests, you can enable
``CONFIG_KUNIT_DEBUGFS`` to expose per-suite TAP-formatted results:

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=m
	CONFIG_KUNIT_DEBUGFS=y

The results for each suite will be exposed under
``/sys/kernel/debug/kunit/<suite>/results``.
So using our example config:

.. code-block:: bash

	$ modprobe kunit-example-test > /dev/null
	$ cat /sys/kernel/debug/kunit/example/results
	... <TAP output> ...

	# After removing the module, the corresponding files will go away
	$ modprobe -r kunit-example-test
	$ cat /sys/kernel/debug/kunit/example/results
	/sys/kernel/debug/kunit/example/results: No such file or directory
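
These files are easy to script against. A small Python sketch (purely
illustrative; ``collect_results`` is a hypothetical helper, and the default
path assumes debugfs is mounted at ``/sys/kernel/debug``) that gathers every
suite's raw TAP text:

.. code-block:: python

	from pathlib import Path

	def collect_results(base='/sys/kernel/debug/kunit'):
		"""Map each KUnit suite name to its raw TAP results text."""
		results = {}
		base_path = Path(base)
		if not base_path.is_dir():
			# debugfs not mounted, or CONFIG_KUNIT_DEBUGFS not set
			return results
		for suite_dir in sorted(base_path.iterdir()):
			results_file = suite_dir / 'results'
			if results_file.is_file():
				results[suite_dir.name] = results_file.read_text()
		return results

The returned text could then be fed to ``kunit.py parse`` or any other TAP
consumer.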

Generating code coverage reports
--------------------------------

See Documentation/dev-tools/gcov.rst for details on how to do this.

The only vaguely KUnit-specific advice here is that you probably want to build
your tests as modules. That way you can isolate the coverage from your tests
from the coverage of other code executed during boot, e.g.

.. code-block:: bash

	# Reset coverage counters before running the test.
	$ echo 0 > /sys/kernel/debug/gcov/reset
	$ modprobe kunit-example-test


Test Attributes and Filtering
=============================

Test suites and cases can be marked with test attributes, such as the speed of
the test. These attributes will later be printed in test output and can be used
to filter test execution.

Marking Test Attributes
-----------------------

Tests are marked with an attribute by including a ``kunit_attributes`` object
in the test definition.

Test cases can be marked using the ``KUNIT_CASE_ATTR(test_name, attributes)``
macro to define the test case instead of ``KUNIT_CASE(test_name)``.

.. code-block:: c

	static const struct kunit_attributes example_attr = {
		.speed = KUNIT_SPEED_VERY_SLOW,
	};

	static struct kunit_case example_test_cases[] = {
		KUNIT_CASE_ATTR(example_test, example_attr),
	};

.. note::
	To mark a test case as slow, you can also use ``KUNIT_CASE_SLOW(test_name)``.
	This is a helpful macro as the slow attribute is the most commonly used.

Test suites can be marked with an attribute by setting the "attr" field in the
suite definition.

.. code-block:: c

	static const struct kunit_attributes example_attr = {
		.speed = KUNIT_SPEED_VERY_SLOW,
	};

	static struct kunit_suite example_test_suite = {
		...,
		.attr = example_attr,
	};

.. note::
	Not all attributes need to be set in a ``kunit_attributes`` object. Unset
	attributes will remain uninitialized and act as though the attribute is set
	to 0 or NULL. Thus, if an attribute is set to 0, it is treated as unset.
	These unset attributes will not be reported and may act as a default value
	for filtering purposes.

Reporting Attributes
--------------------

When a user runs tests, attributes will be present in the raw kernel output (in
KTAP format). Note that attributes will be hidden by default in kunit.py output
for all passing tests, but the raw kernel output can be accessed using the
``--raw_output`` flag. This is an example of how test attributes for test cases
will be formatted in kernel output:

.. code-block:: none

	# example_test.speed: slow
	ok 1 example_test

This is an example of how test attributes for test suites will be formatted in
kernel output:

.. code-block:: none

	  KTAP version 2
	  # Subtest: example_suite
	  # module: kunit_example_test
	  1..3
	  ...
	ok 1 example_suite

Additionally, users can output a full attribute report of tests with their
attributes, using the command line flag ``--list_tests_attr``:

.. code-block:: bash

	kunit.py run "example" --list_tests_attr

.. note::
	This report can be accessed when running KUnit manually by passing in the
	module_param ``kunit.action=list_attr``.

Filtering
---------

Users can filter tests using the ``--filter`` command line flag when running
tests. As an example:

.. code-block:: bash

	kunit.py run --filter speed=slow


You can also use the following operations on filters: "<", ">", "<=", ">=",
"!=", and "=". Example:

.. code-block:: bash

	kunit.py run --filter "speed>slow"

This example will run all tests with speeds faster than slow. Note that the
characters < and > are often interpreted by the shell, so they may need to be
quoted or escaped, as above.
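
One way to picture the comparison is as a ranking of the speed values, where a
greater rank means a faster test. The Python sketch below is purely
illustrative (the rank numbers and ``speed_matches`` helper are assumptions
for the example, not KUnit's internal encoding):

.. code-block:: python

	import operator

	# Assumed ranks for illustration: greater means faster.
	SPEED_RANK = {'very_slow': 0, 'slow': 1, 'normal': 2}

	OPS = {'<': operator.lt, '<=': operator.le, '>': operator.gt,
	       '>=': operator.ge, '=': operator.eq, '!=': operator.ne}

	def speed_matches(test_speed, op, wanted):
		"""Would a test with test_speed survive a speed filter?"""
		return OPS[op](SPEED_RANK[test_speed], SPEED_RANK[wanted])

With these ranks, a "normal" test survives ``--filter "speed>slow"`` while a
"very_slow" one does not.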

Additionally, you can use multiple filters at once. Simply separate filters
using commas. Example:

.. code-block:: bash

	kunit.py run --filter "speed>slow, module=kunit_example_test"

.. note::
	You can use this filtering feature when running KUnit manually by passing
	the filter as a module param: ``kunit.filter="speed>slow, speed<=normal"``.

Filtered tests will not run or show up in the test output. You can use the
``--filter_action=skip`` flag to skip filtered tests instead. These tests will
be shown in the test output but will not run. To use this feature when running
KUnit manually, use the module param ``kunit.filter_action=skip``.

Rules of Filtering Procedure
----------------------------

Since both suites and test cases can have attributes, there may be conflicts
between attributes during filtering. The process of filtering follows these
rules:

- Filtering always operates at a per-test level.

- If a test has an attribute set, then the test's value is filtered on.

- Otherwise, the value falls back to the suite's value.

- If neither are set, the attribute has a global "default" value, which is used.
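
The fallback above can be sketched in a few lines (a toy illustration, not
KUnit's in-kernel logic; ``None`` stands in for an unset attribute):

.. code-block:: python

	def effective_attr(test_value, suite_value, default):
		"""Resolve one attribute: test value, then suite value, then default."""
		if test_value is not None:
			return test_value
		if suite_value is not None:
			return suite_value
		return default

E.g. a test with no ``speed`` of its own in a suite marked "slow" filters as
"slow"; if the suite is unmarked too, it filters as the default, "normal".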

List of Current Attributes
--------------------------

``speed``

This attribute indicates the speed of a test's execution (how slow or fast the
test is).

This attribute is saved as an enum with the following categories: "normal",
"slow", or "very_slow". The assumed default speed for tests is "normal". This
indicates that the test takes a relatively trivial amount of time (less than
1 second), regardless of the machine it is running on. Any test slower than
this could be marked as "slow" or "very_slow".

The macro ``KUNIT_CASE_SLOW(test_name)`` can be easily used to set the speed
of a test case to "slow".

``module``

This attribute indicates the name of the module associated with the test.

This attribute is automatically saved as a string and is printed for each suite.
Tests can also be filtered using this attribute.