1===============
2Testing in QEMU
3===============
4
5This document describes the testing infrastructure in QEMU.
6
7Testing with "make check"
8=========================
9
10The "make check" testing family includes most of the C based tests in QEMU. For
11a quick help, run ``make check-help`` from the source tree.
12
13The usual way to run these tests is:
14
15.. code::
16
17  make check
18
19which includes QAPI schema tests, unit tests, QTests and some iotests.
20Different sub-types of "make check" tests will be explained below.
21
22Before running tests, it is best to build QEMU programs first. Some tests
23expect the executables to exist and will fail with obscure messages if they
24cannot find them.
25
26Unit tests
27----------
28
29Unit tests, which can be invoked with ``make check-unit``, are simple C tests
30that typically link to individual QEMU object files and exercise them by
31calling exported functions.
32
33If you are writing new code in QEMU, consider adding a unit test, especially
34for utility modules that are relatively stateless or have few dependencies. To
35add a new unit test:
36
371. Create a new source file. For example, ``tests/unit/foo-test.c``.
38
392. Write the test. Normally you would include the header file which exports
40   the module API, then verify the interface behaves as expected from your
41   test. The test code should be organized with the glib testing framework.
42   Copying and modifying an existing test is usually a good idea.
43
443. Add the test to ``tests/unit/meson.build``. The unit tests are listed in a
45   dictionary called ``tests``.  The values are any additional sources and
46   dependencies to be linked with the test.  For a simple test whose source
47   is in ``tests/unit/foo-test.c``, it is enough to add an entry like::
48
49     {
50       ...
51       'foo-test': [],
52       ...
53     }
54
Since unit tests don't require environment variables, the simplest way to debug
a unit test failure is often to invoke it directly, or even to run it under
``gdb``. However, there can still be differences in behavior between ``make``
invocations and your manual run, due to the ``$MALLOC_PERTURB_`` environment
variable (which affects memory reclamation and catches invalid pointers better)
and gtester options. If necessary, you can run
61
62.. code::
63
64  make check-unit V=1
65
66and copy the actual command line which executes the unit test, then run
67it from the command line.
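
Alternatively, the test binary can be invoked directly, optionally under
``gdb``. This is a sketch, assuming the hypothetical ``foo-test`` from the
steps above and a separate build directory (the ``/foo/some-case`` test path
is likewise hypothetical):

.. code::

  cd build
  # run the whole binary, or a single glib test case with -p
  ./tests/unit/foo-test
  ./tests/unit/foo-test -p /foo/some-case
  # or debug it interactively
  MALLOC_PERTURB_=0 gdb --args ./tests/unit/foo-test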
68
69QTest
70-----
71
QTest is a device emulation testing framework.  It can be very useful for
testing device models; it can also control certain aspects of QEMU (such as
virtual clock stepping) through a special-purpose "qtest" protocol.  Refer to
:doc:`qtest` for more details.
76
77QTest cases can be executed with
78
79.. code::
80
81   make check-qtest
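
Individual qtest binaries can also be run by hand; they find the emulator to
test through the ``QTEST_QEMU_BINARY`` environment variable. A sketch, assuming
an x86_64 build and a hypothetical ``foo-test`` qtest binary, run from the
build directory:

.. code::

   QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/foo-test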
82
83QAPI schema tests
84-----------------
85
86The QAPI schema tests validate the QAPI parser used by QMP, by feeding
87predefined input to the parser and comparing the result with the reference
88output.
89
90The input/output data is managed under the ``tests/qapi-schema`` directory.
91Each test case includes four files that have a common base name:
92
93  * ``${casename}.json`` - the file contains the JSON input for feeding the
94    parser
95  * ``${casename}.out`` - the file contains the expected stdout from the parser
96  * ``${casename}.err`` - the file contains the expected stderr from the parser
97  * ``${casename}.exit`` - the expected error code
98
Consider adding a new QAPI schema test when you are making a change to the QAPI
parser (either fixing a bug or extending/modifying the syntax). To do this:
101
1021. Add four files for the new case as explained above. For example:
103
104  ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``.
105
1062. Add the new test in ``tests/Makefile.include``. For example:
107
108  ``qapi-schema += foo.json``
109
110check-block
111-----------
112
113``make check-block`` runs a subset of the block layer iotests (the tests that
114are in the "auto" group).
115See the "QEMU iotests" section below for more information.
116
117GCC gcov support
118----------------
119
120``gcov`` is a GCC tool to analyze the testing coverage by
121instrumenting the tested code. To use it, configure QEMU with
122``--enable-gcov`` option and build. Then run ``make check`` as usual.
123
If you want to gather coverage information for a single test, the ``make
clean-gcda`` target can be used to delete any existing coverage data before
running that test.
127
You can generate an HTML coverage report by executing ``make coverage-html``,
which will create ``meson-logs/coveragereport/index.html``.
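
Putting this together, a typical single-test coverage run from a separate build
directory might look like the following sketch (the configure options and test
target are illustrative):

.. code::

  ../configure --enable-gcov
  make
  make clean-gcda      # drop any stale coverage data
  make check-unit      # or any other test target of interest
  make coverage-html   # report in meson-logs/coveragereport/index.html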
131
132Further analysis can be conducted by running the ``gcov`` command
133directly on the various .gcda output files. Please read the ``gcov``
134documentation for more information.
135
136QEMU iotests
137============
138
QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
framework widely used to test block layer related features. It is higher level
than the "make check" tests and 99% of the code is written in bash or Python
scripts.  The testing success criterion is golden output comparison, and the
test files are named with numbers.
144
145To run iotests, make sure QEMU is built successfully, then switch to the
146``tests/qemu-iotests`` directory under the build directory, and run ``./check``
147with desired arguments from there.
148
By default, the "raw" format and the "file" protocol are used; all tests will
be executed, except the unsupported ones. You can override the format and
protocol with arguments:
152
153.. code::
154
155  # test with qcow2 format
156  ./check -qcow2
157  # or test a different protocol
158  ./check -nbd
159
160It's also possible to list test numbers explicitly:
161
162.. code::
163
164  # run selected cases with qcow2 format
165  ./check -qcow2 001 030 153
166
Cache mode can be selected with the "-c" option, which may help reveal bugs
that are specific to certain cache modes.
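
For example (the test number is illustrative):

.. code::

  ./check -qcow2 -c none 030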
169
170More options are supported by the ``./check`` script, run ``./check -h`` for
171help.
172
173Writing a new test case
174-----------------------
175
Consider writing a test case when you are making any changes to the block
layer. An iotest case is usually the right choice for that. There are already
many test cases, so it is possible that extending one of them may achieve the
goal and save the boilerplate of creating a new one.  (Unfortunately, there
isn't a 100% reliable way to find a related one out of hundreds of tests.  One
approach is using ``git grep``.)
182
Usually an iotest case consists of two files. One is an executable that
produces output on stdout and stderr, the other is the expected reference
output. They are given the same number in their file names, e.g. test script
``055`` and reference output ``055.out``.
187
In rare cases, when the output differs between cache mode ``none`` and others,
a ``.out.nocache`` file is added. In other cases, when the output differs
between image formats, more than one ``.out`` file is created, ending with the
respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.
192
193There isn't a hard rule about how to write a test script, but a new test is
194usually a (copy and) modification of an existing case.  There are a few
195commonly used ways to create a test:
196
* A Bash script. It will make use of several environment variables related
  to the testing procedure, and can source a group of ``common.*`` libraries
  for common helper routines. (A minimal sketch follows this list.)
200
* A Python unittest script. Import ``iotests`` and create a subclass of
  ``iotests.QMPTestCase``, then call the ``iotests.main`` method. The downside
  of this approach is that the output is rather sparse, and the script is
  considered harder to debug.
205
* A simple Python script without using the unittest module. This could also
  import ``iotests`` for launching QEMU, utilities etc., but it doesn't inherit
  from ``iotests.QMPTestCase`` and therefore doesn't use the Python unittest
  execution. This is a combination of the first two approaches.
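
A rough sketch of the shape a Bash test case usually takes is shown below; the
helper names come from the ``common.*`` libraries, and an existing test should
be consulted for the exact conventions expected by ``./check``:

.. code::

  #!/usr/bin/env bash
  # group: rw quick

  seq=$(basename "$0")
  echo "QA output created by $seq"

  status=1        # failure is the default
  _cleanup()
  {
      _cleanup_test_img
  }
  trap "_cleanup; exit \$status" 0 1 2 3 15

  # get standard environment, filters and checks
  . ./common.rc
  . ./common.filter

  # declare what this test supports
  _supported_fmt qcow2
  _supported_proto file

  _make_test_img 64M
  $QEMU_IO -c 'write -P 0xa5 0 64k' "$TEST_IMG" | _filter_qemu_io

  # success, all done
  echo "*** done"
  status=0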
210
211Pick the language per your preference since both Bash and Python have
212comparable library support for invoking and interacting with QEMU programs. If
213you opt for Python, it is strongly recommended to write Python 3 compatible
214code.
215
Both the Python and Bash frameworks in iotests provide helpers to manage test
images. They can be used to create and clean up images under the test
directory. If no I/O or protocol-specific feature is needed, it is often
more convenient to use the pseudo block driver, ``null-co://``, as the test
image, which doesn't require image creation or cleanup. Avoid system-wide
devices or files whenever possible, such as ``/dev/null`` or ``/dev/zero``.
Otherwise, image locking implications have to be considered.  For example,
another application on the host may have locked the file, possibly leading to a
test failure.  If using such devices is explicitly desired, consider adding
the ``locking=off`` option to disable image locking.
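
For instance, the null block driver can be inspected and exercised without any
image file at all. A sketch, using the ``$QEMU_IMG`` and ``$QEMU_IO`` wrappers
that the Bash test environment provides:

.. code::

  $QEMU_IMG info "null-co://"
  $QEMU_IO -f raw -c 'read 0 64k' "null-co://"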
226
Debugging a test case
-----------------------

The following options to the ``check`` script can be useful when debugging
a failing test:
231
232* ``-gdb`` wraps every QEMU invocation in a ``gdbserver``, which waits for a
233  connection from a gdb client.  The options given to ``gdbserver`` (e.g. the
234  address on which to listen for connections) are taken from the ``$GDB_OPTIONS``
235  environment variable.  By default (if ``$GDB_OPTIONS`` is empty), it listens on
236  ``localhost:12345``.
  It is possible to connect to it for example with
  ``gdb -iex "target remote $addr"``, where ``$addr`` is the address
  ``gdbserver`` listens on (a complete invocation example follows this list).
240  If the ``-gdb`` option is not used, ``$GDB_OPTIONS`` is ignored,
241  regardless of whether it is set or not.
242
243* ``-valgrind`` attaches a valgrind instance to QEMU. If it detects
244  warnings, it will print and save the log in
245  ``$TEST_DIR/<valgrind_pid>.valgrind``.
  The final command line will be ``valgrind --log-file=$TEST_DIR/<valgrind_pid>.valgrind
  --error-exitcode=99 $QEMU ...``
248
249* ``-d`` (debug) just increases the logging verbosity, showing
250  for example the QMP commands and answers.
251
252* ``-p`` (print) redirects QEMU’s stdout and stderr to the test output,
253  instead of saving it into a log file in
254  ``$TEST_DIR/qemu-machine-<random_string>``.
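
As an illustrative example, a single failing case could be debugged like this
(the test number and port are hypothetical):

.. code::

  # terminal 1: run the test with every QEMU wrapped in gdbserver
  GDB_OPTIONS='localhost:12345' ./check -qcow2 -gdb 040

  # terminal 2: attach to the paused QEMU
  gdb -iex 'target remote localhost:12345'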
255
256Test case groups
257----------------
258
259"Tests may belong to one or more test groups, which are defined in the form
260of a comment in the test source file. By convention, test groups are listed
261in the second line of the test file, after the "#!/..." line, like this:
262
263.. code::
264
265  #!/usr/bin/env python3
266  # group: auto quick
267  #
268  ...
269
Another way of defining groups is to create a ``tests/qemu-iotests/group.local``
file. This should be used only for downstream purposes (the file should never
appear upstream). It may be used for defining some downstream test groups
or for temporarily disabling tests, like this:
274
275.. code::
276
277  # groups for some company downstream process
278  #
279  # ci - tests to run on build
280  # down - our downstream tests, not for upstream
281  #
282  # Format of each line is:
283  # TEST_NAME TEST_GROUP [TEST_GROUP ]...
284
285  013 ci
286  210 disabled
287  215 disabled
288  our-ugly-workaround-test down ci
289
290Note that the following group names have a special meaning:
291
292- quick: Tests in this group should finish within a few seconds.
293
294- auto: Tests in this group are used during "make check" and should be
295  runnable in any case. That means they should run with every QEMU binary
296  (also non-x86), with every QEMU configuration (i.e. must not fail if
297  an optional feature is not compiled in - but reporting a "skip" is ok),
  work at least with the qcow2 file format, work with all kinds of host
299  filesystems and users (e.g. "nobody" or "root") and must not take too
300  much memory and disk space (since CI pipelines tend to fail otherwise).
301
302- disabled: Tests in this group are disabled and ignored by check.
303
304.. _container-ref:
305
306Container based tests
307=====================
308
309Introduction
310------------
311
The container testing framework in QEMU utilizes public images to
build and test QEMU in predefined and widely accessible Linux
environments. This makes it possible to expand the test coverage
across distros, toolchain flavors and library versions. The support
was originally written for Docker, although we also support Podman as
an alternative container runtime. Although many of the target
names and scripts are prefixed with "docker", the system will
automatically run on whichever runtime is configured.
320
321The container images are also used to augment the generation of tests
322for testing TCG. See :ref:`checktcg-ref` for more details.
323
324Docker Prerequisites
325--------------------
326
Install "docker" with the system package manager and start the Docker service
on your development machine, then make sure you have the privileges to run
Docker commands. Typically this means setting up a passwordless ``sudo docker``
command or logging in as root. For example:
331
332.. code::
333
334  $ sudo yum install docker
335  $ # or `apt-get install docker` for Ubuntu, etc.
336  $ sudo systemctl start docker
337  $ sudo docker ps
338
339The last command should print an empty table, to verify the system is ready.
340
341An alternative method to set up permissions is by adding the current user to
342"docker" group and making the docker daemon socket file (by default
343``/var/run/docker.sock``) accessible to the group:
344
345.. code::
346
347  $ sudo groupadd docker
348  $ sudo usermod $USER -a -G docker
349  $ sudo chown :docker /var/run/docker.sock
350
Note that any one of the above configurations makes it possible for the user to
exploit the whole host with Docker bind mounting or other privileged
operations.  So only do this on development machines.
354
355Podman Prerequisites
356--------------------
357
358Install "podman" with the system package manager.
359
360.. code::
361
362  $ sudo dnf install podman
363  $ podman ps
364
365The last command should print an empty table, to verify the system is ready.
366
367Quickstart
368----------
369
370From source tree, type ``make docker-help`` to see the help. Testing
371can be started without configuring or building QEMU (``configure`` and
372``make`` are done in the container, with parameters defined by the
373make target):
374
375.. code::
376
377  make docker-test-build@centos8
378
379This will create a container instance using the ``centos8`` image (the image
380is downloaded and initialized automatically), in which the ``test-build`` job
381is executed.
382
383Registry
384--------
385
386The QEMU project has a container registry hosted by GitLab at
387``registry.gitlab.com/qemu-project/qemu`` which will automatically be
388used to pull in pre-built layers. This avoids unnecessary strain on
389the distro archives created by multiple developers running the same
390container build steps over and over again. This can be overridden
391locally by using the ``NOCACHE`` build option:
392
393.. code::
394
395   make docker-image-debian10 NOCACHE=1
396
397Images
398------
399
Along with many other images, the ``centos8`` image is defined in a Dockerfile
in ``tests/docker/dockerfiles/``, called ``centos8.docker``. The
``make docker-help`` command will list all the available images.
403
404To add a new image, simply create a new ``.docker`` file under the
405``tests/docker/dockerfiles/`` directory.
406
407A ``.pre`` script can be added beside the ``.docker`` file, which will be
408executed before building the image under the build context directory. This is
409mainly used to do necessary host side setup. One such setup is ``binfmt_misc``,
410for example, to make qemu-user powered cross build containers work.
411
412Tests
413-----
414
415Different tests are added to cover various configurations to build and test
416QEMU.  Docker tests are the executables under ``tests/docker`` named
417``test-*``. They are typically shell scripts and are built on top of a shell
418library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
419source and build it.
420
421The full list of tests is printed in the ``make docker-help`` help.
422
423Debugging a Docker test failure
424-------------------------------
425
When CI tasks, maintainers or yourself report a Docker test failure, follow
these steps to debug it:

1. Locally reproduce the failure with the reported command line. E.g. run
   ``make docker-test-mingw@fedora J=8``.
2. Add "V=1" to the command line and try again, to see the verbose output.
3. Further add "DEBUG=1" to the command line. This will pause at a shell prompt
   in the container right before testing starts. You could either manually
   build QEMU and run tests from there, or press Ctrl-D to let the Docker
   testing continue.
4. If you press Ctrl-D, the same building and testing procedure will begin, and
   will hopefully run into the error again. After that, you will be dropped back
   to the prompt for debugging.
439
440Options
441-------
442
443Various options can be used to affect how Docker tests are done. The full
444list is in the ``make docker`` help text. The frequently used ones are:
445
446* ``V=1``: the same as in top level ``make``. It will be propagated to the
447  container and enable verbose output.
448* ``J=$N``: the number of parallel tasks in make commands in the container,
449  similar to the ``-j $N`` option in top level ``make``. (The ``-j`` option in
450  top level ``make`` will not be propagated into the container.)
451* ``DEBUG=1``: enables debug. See the previous "Debugging a Docker test
452  failure" section.
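
These options can be combined; for example, reusing the target from the
quickstart above:

.. code::

  make docker-test-build@centos8 V=1 J=8 DEBUG=1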
453
454Thread Sanitizer
455================
456
457Thread Sanitizer (TSan) is a tool which can detect data races.  QEMU supports
458building and testing with this tool.
459
460For more information on TSan:
461
462https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual
463
Thread Sanitizer in Docker
---------------------------

TSan is currently supported in the ubuntu2004 docker image.

The test-tsan test will build using TSan and then run ``make check``.
469
470.. code::
471
472  make docker-test-tsan@ubuntu2004
473
TSan warnings under docker are placed in files located at ``build/tsan/``.

We recommend using ``DEBUG=1`` to allow launching the test from inside the
container, and to allow review of the warnings generated by TSan.
478
479Building and Testing with TSan
480------------------------------
481
482It is possible to build and test with TSan, with a few additional steps.
483These steps are normally done automatically in the docker.
484
485There is a one time patch needed in clang-9 or clang-10 at this time:
486
487.. code::
488
489  sed -i 's/^const/static const/g' \
490      /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h
491
492To configure the build for TSan:
493
494.. code::
495
496  ../configure --enable-tsan --cc=clang-10 --cxx=clang++-10 \
497               --disable-werror --extra-cflags="-O0"
498
The runtime behavior of TSan is controlled by the ``TSAN_OPTIONS`` environment
variable.

More information on ``TSAN_OPTIONS`` can be found here:
503
504https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags
505
506For example:
507
508.. code::
509
510  export TSAN_OPTIONS=suppressions=<path to qemu>/tests/tsan/suppressions.tsan \
511                      detect_deadlocks=false history_size=7 exitcode=0 \
512                      log_path=<build path>/tsan/tsan_warning
513
The above ``exitcode=0`` has TSan continue without error if any warnings are
found. This allows for running the test and then checking the warnings
afterwards. If you want TSan to stop and exit with an error on warnings, use
``exitcode=66``.
517
TSan Suppressions
-----------------

Keep in mind that for any data race warning, although there might be a data race
521detected by TSan, there might be no actual bug here.  TSan provides several
522different mechanisms for suppressing warnings.  In general it is recommended
523to fix the code if possible to eliminate the data race rather than suppress
524the warning.
525
526A few important files for suppressing warnings are:
527
``tests/tsan/suppressions.tsan`` - Has TSan warnings we wish to suppress at runtime.
529The comment on each suppression will typically indicate why we are
530suppressing it.  More information on the file format can be found here:
531
532https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions
533
``tests/tsan/blacklist.tsan`` - Has TSan warnings we wish to disable
at compile time for test or debug.
Add flags to configure to enable:

``--extra-cflags=-fsanitize-blacklist=<src path>/tests/tsan/blacklist.tsan``
539
540More information on the file format can be found here under "Blacklist Format":
541
542https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags
543
TSan Annotations
----------------

``include/qemu/tsan.h`` defines annotations.  See this file for more descriptions
547of the annotations themselves.  Annotations can be used to suppress
548TSan warnings or give TSan more information so that it can detect proper
549relationships between accesses of data.
550
551Annotation examples can be found here:
552
553https://github.com/llvm/llvm-project/tree/master/compiler-rt/test/tsan/
554
555Good files to start with are: annotate_happens_before.cpp and ignore_race.cpp
556
557The full set of annotations can be found here:
558
559https://github.com/llvm/llvm-project/blob/master/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp
560
561VM testing
562==========
563
This test suite contains scripts that bootstrap various guest images that have
the necessary packages to build QEMU. The basic usage is documented in the
``Makefile`` help, which is displayed with ``make vm-help``.
567
568Quickstart
569----------
570
Run ``make vm-help`` to list available make targets. Invoke a specific make
command to run a build test in an image. For example, ``make vm-build-freebsd``
573will build the source tree in the FreeBSD image. The command can be executed
574from either the source tree or the build dir; if the former, ``./configure`` is
575not needed. The command will then generate the test image in ``./tests/vm/``
576under the working directory.
577
578Note: images created by the scripts accept a well-known RSA key pair for SSH
579access, so they SHOULD NOT be exposed to external interfaces if you are
580concerned about attackers taking control of the guest and potentially
581exploiting a QEMU security bug to compromise the host.
582
583QEMU binaries
584-------------
585
By default, ``qemu-system-x86_64`` is searched for in ``$PATH`` to run the
guest. If there isn't one, or if it is older than 2.10, the test won't work. In
this case, provide the QEMU binary in an environment variable:
``QEMU=/path/to/qemu-2.10+``.

Likewise, the path to qemu-img can be set in the ``QEMU_IMG`` environment
variable.
591
592Make jobs
593---------
594
The ``-j$X`` option in the make command line is not propagated into the VM;
specify ``J=$X`` to control the make jobs in the guest.
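
For example (the binary path is illustrative):

.. code::

  make vm-build-freebsd J=8 QEMU=/path/to/qemu-system-x86_64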
597
598Debugging
599---------
600
601Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
602debugging and verbose output. If this is not enough, see the next section.
603``V=1`` will be propagated down into the make jobs in the guest.
604
605Manual invocation
606-----------------
607
608Each guest script is an executable script with the same command line options.
609For example to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:
610
611.. code::
612
613    $ cd $QEMU_SRC/tests/vm
614
615    # To bootstrap the image
616    $ ./netbsd --build-image --image /var/tmp/netbsd.img
617    <...>
618
619    # To run an arbitrary command in guest (the output will not be echoed unless
620    # --debug is added)
621    $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a
622
623    # To build QEMU in guest
624    $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC
625
626    # To get to an interactive shell
627    $ ./netbsd --interactive --image /var/tmp/netbsd.img sh
628
629Adding new guests
630-----------------
631
632Please look at existing guest scripts for how to add new guests.
633
Most importantly, create a subclass of BaseVM, implement the ``build_image()``
method and define ``BUILD_SCRIPT``, then finally call ``basevm.main()`` from
the script's ``main()``.
637
638* Usually in ``build_image()``, a template image is downloaded from a
639  predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache and
640  the checksum, so consider using it.
641
642* Once the image is downloaded, users, SSH server and QEMU build deps should
643  be set up:
644
645  - Root password set to ``BaseVM.ROOT_PASS``
646  - User ``BaseVM.GUEST_USER`` is created, and password set to
647    ``BaseVM.GUEST_PASS``
648  - SSH service is enabled and started on boot,
649    ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
650    file of both root and the normal user
651  - DHCP client service is enabled and started on boot, so that it can
652    automatically configure the virtio-net-pci NIC and communicate with QEMU
653    user net (10.0.2.2)
654  - Necessary packages are installed to untar the source tarball and build
655    QEMU
656
* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script that
  untars the QEMU source tree (passed to the guest as a tarball on a raw
  virtio-blk block device), then configures and builds it. Running "make check"
  is also recommended.
661
662Image fuzzer testing
663====================
664
665An image fuzzer was added to exercise format drivers. Currently only qcow2 is
666supported. To start the fuzzer, run
667
668.. code::
669
670  tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2
671
Alternatively, a command other than "qemu-img info" can be tested by changing
the ``-c`` option.
674
675Acceptance tests using the Avocado Framework
676============================================
677
678The ``tests/acceptance`` directory hosts functional tests, also known
679as acceptance level tests.  They're usually higher level tests, and
680may interact with external resources and with various guest operating
681systems.
682
These tests are written using the Avocado Testing Framework (which must
be installed separately) in conjunction with the ``avocado_qemu.Test``
class, implemented at ``tests/acceptance/avocado_qemu``.
686
687Tests based on ``avocado_qemu.Test`` can easily:
688
689 * Customize the command line arguments given to the convenience
690   ``self.vm`` attribute (a QEMUMachine instance)
691
692 * Interact with the QEMU monitor, send QMP commands and check
693   their results
694
695 * Interact with the guest OS, using the convenience console device
696   (which may be useful to assert the effectiveness and correctness of
697   command line arguments or QMP commands)
698
699 * Interact with external data files that accompany the test itself
700   (see ``self.get_data()``)
701
702 * Download (and cache) remote data files, such as firmware and kernel
703   images
704
705 * Have access to a library of guest OS images (by means of the
706   ``avocado.utils.vmimage`` library)
707
708 * Make use of various other test related utilities available at the
709   test class itself and at the utility library:
710
711   - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
712   - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html
713
714Running tests
715-------------
716
717You can run the acceptance tests simply by executing:
718
719.. code::
720
721  make check-acceptance
722
This involves the automatic creation of a Python virtual environment
within the build tree (at ``tests/venv``), which will have all the
right dependencies, and will also save test results within the
build tree (at ``tests/results``).
727
728Note: the build environment must be using a Python 3 stack, and have
729the ``venv`` and ``pip`` packages installed.  If necessary, make sure
730``configure`` is called with ``--python=`` and that those modules are
731available.  On Debian and Ubuntu based systems, depending on the
732specific version, they may be on packages named ``python3-venv`` and
733``python3-pip``.
734
735It is also possible to run tests based on tags using the
736``make check-acceptance`` command and the ``AVOCADO_TAGS`` environment
737variable:
738
739.. code::
740
741   make check-acceptance AVOCADO_TAGS=quick
742
743Note that tags separated with commas have an AND behavior, while tags
744separated by spaces have an OR behavior. For more information on Avocado
745tags, see:
746
747 https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/tags.html
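
For example (the tags are illustrative):

.. code::

   # AND: only tests tagged both "quick" and "arch:x86_64"
   make check-acceptance AVOCADO_TAGS=quick,arch:x86_64

   # OR: tests tagged either "arch:x86_64" or "arch:aarch64"
   make check-acceptance AVOCADO_TAGS='arch:x86_64 arch:aarch64'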
748
749To run a single test file, a couple of them, or a test within a file
750using the ``make check-acceptance`` command, set the ``AVOCADO_TESTS``
751environment variable with the test files or test names. To run all
752tests from a single file, use:
753
754 .. code::
755
756  make check-acceptance AVOCADO_TESTS=$FILEPATH
757
758The same is valid to run tests from multiple test files:
759
760 .. code::
761
762  make check-acceptance AVOCADO_TESTS='$FILEPATH1 $FILEPATH2'
763
764To run a single test within a file, use:
765
766 .. code::
767
768  make check-acceptance AVOCADO_TESTS=$FILEPATH:$TESTCLASS.$TESTNAME
769
770The same is valid to run single tests from multiple test files:
771
772 .. code::
773
774  make check-acceptance AVOCADO_TESTS='$FILEPATH1:$TESTCLASS1.$TESTNAME1 $FILEPATH2:$TESTCLASS2.$TESTNAME2'
775
776The scripts installed inside the virtual environment may be used
777without an "activation".  For instance, the Avocado test runner
778may be invoked by running:
779
780 .. code::
781
782  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/acceptance/
783
Note that if ``make check-acceptance`` was not executed before, it is
possible to create the Python virtual environment with the dependencies
needed by running:
787
788 .. code::
789
790  make check-venv
791
792It is also possible to run tests from a single file or a single test within
793a test file. To run tests from a single file within the build tree, use:
794
795 .. code::
796
797  tests/venv/bin/avocado run tests/acceptance/$TESTFILE
798
799To run a single test within a test file, use:
800
801 .. code::
802
803  tests/venv/bin/avocado run tests/acceptance/$TESTFILE:$TESTCLASS.$TESTNAME
804
805Valid test names are visible in the output from any previous execution
806of Avocado or ``make check-acceptance``, and can also be queried using:
807
808 .. code::
809
810  tests/venv/bin/avocado list tests/acceptance
811
812Manual Installation
813-------------------
814
815To manually install Avocado and its dependencies, run:
816
817.. code::
818
819  pip install --user avocado-framework
820
821Alternatively, follow the instructions on this link:
822
823  https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/installing.html
824
825Overview
826--------
827
828The ``tests/acceptance/avocado_qemu`` directory provides the
829``avocado_qemu`` Python module, containing the ``avocado_qemu.Test``
830class.  Here's a simple usage example:
831
832.. code::
833
834  from avocado_qemu import Test
835
836
837  class Version(Test):
838      """
839      :avocado: tags=quick
840      """
841      def test_qmp_human_info_version(self):
842          self.vm.launch()
843          res = self.vm.command('human-monitor-command',
844                                command_line='info version')
845          self.assertRegexpMatches(res, r'^(\d+\.\d+\.\d)')
846
847To execute your test, run:
848
849.. code::
850
851  avocado run version.py
852
853Tests may be classified according to a convention by using docstring
854directives such as ``:avocado: tags=TAG1,TAG2``.  To run all tests
855in the current directory, tagged as "quick", run:
856
857.. code::
858
859  avocado run -t quick .
860
861The ``avocado_qemu.Test`` base test class
862-----------------------------------------
863
The ``avocado_qemu.Test`` class has a number of characteristics that
are worth mentioning right away.
866
867First of all, it attempts to give each test a ready to use QEMUMachine
868instance, available at ``self.vm``.  Because many tests will tweak the
869QEMU command line, launching the QEMUMachine (by using ``self.vm.launch()``)
870is left to the test writer.
871
The base test class also has support for tests with more than one
QEMUMachine. The way to get machines is through the ``self.get_vm()``
method, which will return a QEMUMachine instance. The ``self.get_vm()``
method accepts arguments that will be passed to the QEMUMachine creation,
and also an optional ``name`` attribute so you can identify a specific
machine and get it more than once through the test's methods. A simple
and hypothetical example follows:
879
880.. code::
881
882  from avocado_qemu import Test
883
884
885  class MultipleMachines(Test):
886      def test_multiple_machines(self):
887          first_machine = self.get_vm()
888          second_machine = self.get_vm()
889          self.get_vm(name='third_machine').launch()
890
891          first_machine.launch()
892          second_machine.launch()
893
894          first_res = first_machine.command(
895              'human-monitor-command',
896              command_line='info version')
897
898          second_res = second_machine.command(
899              'human-monitor-command',
900              command_line='info version')
901
902          third_res = self.get_vm(name='third_machine').command(
903              'human-monitor-command',
904              command_line='info version')
          self.assertEqual(first_res, second_res)
          self.assertEqual(second_res, third_res)
906          self.assertEquals(first_res, second_res, third_res)
907
At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all the
QEMUMachines.
910
911The ``avocado_qemu.LinuxTest`` base test class
912~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
913
The ``avocado_qemu.LinuxTest`` is a further specialization of the
``avocado_qemu.Test`` class, so it contains all the characteristics of
the latter plus some extra features.
917
918First of all, this base class is intended for tests that need to
919interact with a fully booted and operational Linux guest.  At this
920time, it uses a Fedora 31 guest image.  The most basic example looks
921like this:
922
923.. code::
924
925  from avocado_qemu import LinuxTest
926
927
928  class SomeTest(LinuxTest):
929
930      def test(self):
931          self.launch_and_wait()
932          self.ssh_command('some_command_to_be_run_in_the_guest')
933
934Please refer to tests that use ``avocado_qemu.LinuxTest`` under
935``tests/acceptance`` for more examples.
936
937QEMUMachine
938~~~~~~~~~~~
939
940The QEMUMachine API is already widely used in the Python iotests,
941device-crash-test and other Python scripts.  It's a wrapper around the
942execution of a QEMU binary, giving its users:
943
944 * the ability to set command line arguments to be given to the QEMU
945   binary
946
947 * a ready to use QMP connection and interface, which can be used to
948   send commands and inspect its results, as well as asynchronous
949   events
950
951 * convenience methods to set commonly used command line arguments in
952   a more succinct and intuitive way
953
954QEMU binary selection
955~~~~~~~~~~~~~~~~~~~~~
956
The QEMU binary used for the ``self.vm`` QEMUMachine instance will
primarily depend on the value of the ``qemu_bin`` parameter.  If it's
not explicitly set, its default value will be the result of a dynamic
probe in the same source tree.  A suitable binary will be one that
targets the architecture matching the host machine.
962
963Based on this description, test writers will usually rely on one of
964the following approaches:
965
9661) Set ``qemu_bin``, and use the given binary
967
9682) Do not set ``qemu_bin``, and use a QEMU binary named like
969   "qemu-system-${arch}", either in the current
970   working directory, or in the current source tree.
971
972The resulting ``qemu_bin`` value will be preserved in the
973``avocado_qemu.Test`` as an attribute with the same name.
974
975Attribute reference
976-------------------
977
978Besides the attributes and methods that are part of the base
979``avocado.Test`` class, the following attributes are available on any
980``avocado_qemu.Test`` instance.
981
982vm
983~~
984
985A QEMUMachine instance, initially configured according to the given
986``qemu_bin`` parameter.
987
988arch
989~~~~
990
991The architecture can be used on different levels of the stack, e.g. by
992the framework or by the test itself.  At the framework level, it will
993currently influence the selection of a QEMU binary (when one is not
994explicitly given).
995
996Tests are also free to use this attribute value, for their own needs.
997A test may, for instance, use the same value when selecting the
998architecture of a kernel or disk image to boot a VM with.
999
1000The ``arch`` attribute will be set to the test parameter of the same
1001name.  If one is not given explicitly, it will either be set to
1002``None``, or, if the test is tagged with one (and only one)
1003``:avocado: tags=arch:VALUE`` tag, it will be set to ``VALUE``.
1004
1005cpu
1006~~~
1007
1008The cpu model that will be set to all QEMUMachine instances created
1009by the test.
1010
1011The ``cpu`` attribute will be set to the test parameter of the same
1012name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
1014``:avocado: tags=cpu:VALUE`` tag, it will be set to ``VALUE``.
1015
1016machine
1017~~~~~~~
1018
1019The machine type that will be set to all QEMUMachine instances created
1020by the test.
1021
1022The ``machine`` attribute will be set to the test parameter of the same
1023name.  If one is not given explicitly, it will either be set to
1024``None``, or, if the test is tagged with one (and only one)
1025``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.
1026
1027qemu_bin
1028~~~~~~~~
1029
1030The preserved value of the ``qemu_bin`` parameter or the result of the
1031dynamic probe for a QEMU binary in the current working directory or
1032source tree.
1033
1034LinuxTest
1035~~~~~~~~~
1036
1037Besides the attributes present on the ``avocado_qemu.Test`` base
1038class, the ``avocado_qemu.LinuxTest`` adds the following attributes:
1039
1040distro
1041......
1042
1043The name of the Linux distribution used as the guest image for the
1044test.  The name should match the **Provider** column on the list
1045of images supported by the avocado.utils.vmimage library:
1046
1047https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
1048
1049distro_version
1050..............
1051
The version of the Linux distribution used as the guest image for the
1053test.  The name should match the **Version** column on the list
1054of images supported by the avocado.utils.vmimage library:
1055
1056https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
1057
1058distro_checksum
1059...............
1060
1061The sha256 hash of the guest image file used for the test.
1062
1063If this value is not set in the code or by a test parameter (with the
1064same name), no validation on the integrity of the image will be
1065performed.
1066
1067Parameter reference
1068-------------------
1069
1070To understand how Avocado parameters are accessed by tests, and how
1071they can be passed to tests, please refer to::
1072
1073  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#accessing-test-parameters
1074
1075Parameter values can be easily seen in the log files, and will look
1076like the following:
1077
1078.. code::
1079
  PARAMS (key=qemu_bin, path=*, default=./qemu-system-x86_64) => './qemu-system-x86_64'
1081
1082arch
1083~~~~
1084
1085The architecture that will influence the selection of a QEMU binary
1086(when one is not explicitly given).
1087
1088Tests are also free to use this parameter value, for their own needs.
1089A test may, for instance, use the same value when selecting the
1090architecture of a kernel or disk image to boot a VM with.
1091
1092This parameter has a direct relation with the ``arch`` attribute.  If
1093not given, it will default to None.
1094
1095cpu
1096~~~
1097
1098The cpu model that will be set to all QEMUMachine instances created
1099by the test.
1100
1101machine
1102~~~~~~~
1103
1104The machine type that will be set to all QEMUMachine instances created
1105by the test.
1106
1107
1108qemu_bin
1109~~~~~~~~
1110
1111The exact QEMU binary to be used on QEMUMachine.
1112
1113LinuxTest
1114~~~~~~~~~
1115
1116Besides the parameters present on the ``avocado_qemu.Test`` base
1117class, the ``avocado_qemu.LinuxTest`` adds the following parameters:
1118
1119distro
1120......
1121
1122The name of the Linux distribution used as the guest image for the
1123test.  The name should match the **Provider** column on the list
1124of images supported by the avocado.utils.vmimage library:
1125
1126https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
1127
1128distro_version
1129..............
1130
1131The version of the Linux distribution as the guest image for the
1132test.  The name should match the **Version** column on the list
1133of images supported by the avocado.utils.vmimage library:
1134
1135https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
1136
1137distro_checksum
1138...............
1139
1140The sha256 hash of the guest image file used for the test.
1141
1142If this value is not set in the code or by this parameter no
1143validation on the integrity of the image will be performed.
1144
Skipping tests
--------------

The Avocado framework provides Python decorators which make it easy to skip
tests running under certain conditions, for example, when a binary is missing
on the test system or when the running environment is a CI system. For further
information about those decorators, please refer to::
1151
1152  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#skipping-tests
1153
While the conditions for skipping tests are often specific to each one, there
are recurring scenarios identified by the QEMU developers, and the use of
environment variables has become a kind of standard way to enable/disable tests.
1157
1158Here is a list of the most used variables:
1159
AVOCADO_ALLOW_LARGE_STORAGE
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tests which are going to fetch or produce assets considered *large* are not
going to run unless ``AVOCADO_ALLOW_LARGE_STORAGE=1`` is exported in
the environment.

The definition of *large* is a bit arbitrary here, but it usually means an
asset which occupies at least 1GB on disk when uncompressed.
1168
AVOCADO_ALLOW_UNTRUSTED_CODE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are tests which will boot a kernel image or firmware that can be
considered not safe to run on the developer's workstation, thus they are
skipped by default. The definition of *not safe* is also arbitrary, but
usually it means a blob whose source or build process is not publicly
available.

You should export ``AVOCADO_ALLOW_UNTRUSTED_CODE=1`` in the environment in
order to allow tests which make use of those kinds of assets.
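
For example, to opt in to both categories of tests described above in a single
run:

.. code::

  AVOCADO_ALLOW_LARGE_STORAGE=1 AVOCADO_ALLOW_UNTRUSTED_CODE=1 make check-acceptance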
1179
AVOCADO_TIMEOUT_EXPECTED
~~~~~~~~~~~~~~~~~~~~~~~~

The Avocado framework has a timeout mechanism which interrupts tests to avoid
the test suite getting stuck. The timeout value can be set via a test parameter
or a property defined in the test class; for further details see::
1185
1186  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#setting-a-test-timeout
1187
1188Even though the timeout can be set by the test developer, there are some tests
1189that may not have a well-defined limit of time to finish under certain
1190conditions. For example, tests that take longer to execute when QEMU is
1191compiled with debug flags. Therefore, the ``AVOCADO_TIMEOUT_EXPECTED`` variable
1192has been used to determine whether those tests should run or not.
1193
GITLAB_CI
~~~~~~~~~

A number of tests are flagged to not run on the GitLab CI, usually because
they proved to be flaky or there are constraints in the CI environment which
would make them fail. If you encounter a similar situation, then use that
variable as shown in the code snippet below to skip the test:
1200
1201.. code::
1202
1203  @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
1204  def test(self):
1205      do_something()
1206
1207Uninstalling Avocado
1208--------------------
1209
1210If you've followed the manual installation instructions above, you can
1211easily uninstall Avocado.  Start by listing the packages you have
1212installed::
1213
1214  pip list --user
1215
1216And remove any package you want with::
1217
1218  pip uninstall <package_name>
1219
1220If you've used ``make check-acceptance``, the Python virtual environment where
1221Avocado is installed will be cleaned up as part of ``make check-clean``.
1222
1223.. _checktcg-ref:
1224
1225Testing with "make check-tcg"
1226=============================
1227
1228The check-tcg tests are intended for simple smoke tests of both
linux-user and softmmu TCG functionality. However, to build test
programs for guest targets you need to have cross compilers available.
1231If your distribution supports cross compilers you can do something as
1232simple as::
1233
1234  apt install gcc-aarch64-linux-gnu
1235
The configure script will automatically pick up their presence.
Sometimes compilers have slightly odd names, so you can point configure
at them by passing in the appropriate option for the architecture in
question, for example::
1240
1241  $(configure) --cross-cc-aarch64=aarch64-cc
1242
1243There is also a ``--cross-cc-flags-ARCH`` flag in case additional
1244compiler flags are needed to build for a given target.
1245
1246If you have the ability to run containers as the user the build system
1247will automatically use them where no system compiler is available. For
1248architectures where we also support building QEMU we will generally
1249use the same container to build tests. However there are a number of
1250additional containers defined that have a minimal cross-build
1251environment that is only suitable for building test cases. Sometimes
1252we may use a bleeding edge distribution for compiler features needed
1253for test cases that aren't yet in the LTS distros we support for QEMU
1254itself.
1255
1256See :ref:`container-ref` for more details.
1257
1258Running subset of tests
1259-----------------------
1260
1261You can build the tests for one architecture::
1262
1263  make build-tcg-tests-$TARGET
1264
1265And run with::
1266
1267  make run-tcg-tests-$TARGET
1268
Adding ``V=1`` to the invocation will show the details of how to
invoke QEMU for the test, which is useful for debugging tests.
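
For example (the target name is illustrative)::

  make build-tcg-tests-aarch64-linux-user
  make run-tcg-tests-aarch64-linux-user V=1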
1271
1272TCG test dependencies
1273---------------------
1274
1275The TCG tests are deliberately very light on dependencies and are
1276either totally bare with minimal gcc lib support (for softmmu tests)
1277or just glibc (for linux-user tests). This is because getting a cross
1278compiler to work with additional libraries can be challenging.
1279
1280Other TCG Tests
1281---------------
1282
1283There are a number of out-of-tree test suites that are used for more
1284extensive testing of processor features.
1285
1286KVM Unit Tests
1287~~~~~~~~~~~~~~
1288
The KVM unit tests are designed to run as a guest OS under KVM, but
there is no reason why they can't exercise the TCG as well. The suite
provides a minimal OS kernel with hooks for enabling the MMU as well
as reporting test results via a special device::
1293
1294  https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
1295
1296Linux Test Project
1297~~~~~~~~~~~~~~~~~~
1298
1299The LTP is focused on exercising the syscall interface of a Linux
1300kernel. It checks that syscalls behave as documented and strives to
1301exercise as many corner cases as possible. It is a useful test suite
1302to run to exercise QEMU's linux-user code::
1303
1304  https://linux-test-project.github.io/
1305