1===============
2Testing in QEMU
3===============
4
5This document describes the testing infrastructure in QEMU.
6
7Testing with "make check"
8=========================
9
10The "make check" testing family includes most of the C based tests in QEMU. For
11a quick help, run ``make check-help`` from the source tree.
12
13The usual way to run these tests is:
14
15.. code::
16
17  make check
18
19which includes QAPI schema tests, unit tests, QTests and some iotests.
20Different sub-types of "make check" tests will be explained below.
21
22Before running tests, it is best to build QEMU programs first. Some tests
23expect the executables to exist and will fail with obscure messages if they
24cannot find them.
25
26Unit tests
27----------
28
29Unit tests, which can be invoked with ``make check-unit``, are simple C tests
30that typically link to individual QEMU object files and exercise them by
31calling exported functions.
32
33If you are writing new code in QEMU, consider adding a unit test, especially
34for utility modules that are relatively stateless or have few dependencies. To
35add a new unit test:
36
371. Create a new source file. For example, ``tests/unit/foo-test.c``.
38
392. Write the test. Normally you would include the header file which exports
40   the module API, then verify the interface behaves as expected from your
41   test. The test code should be organized with the glib testing framework.
42   Copying and modifying an existing test is usually a good idea.
43
443. Add the test to ``tests/unit/meson.build``. The unit tests are listed in a
45   dictionary called ``tests``.  The values are any additional sources and
46   dependencies to be linked with the test.  For a simple test whose source
47   is in ``tests/unit/foo-test.c``, it is enough to add an entry like::
48
49     {
50       ...
51       'foo-test': [],
52       ...
53     }
54
55Since unit tests don't require environment variables, the simplest way to debug
56a unit test failure is often directly invoking it or even running it under
``gdb``. However, there can still be differences in behavior between ``make``
invocations and your manual run, due to the ``$MALLOC_PERTURB_`` environment
variable (which affects memory reclamation and catches invalid pointers better)
and gtester options. If necessary, you can run
61
62.. code::
63
64  make check-unit V=1
65
66and copy the actual command line which executes the unit test, then run
67it from the command line.
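
For example, if the hypothetical ``foo-test`` from the steps above fails, the
command printed by the verbose run can be re-run directly from the build
directory, optionally under ``gdb`` (a sketch; the exact path and any extra
options in the printed command line may differ):

.. code::

  gdb --args ./tests/unit/foo-test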
68
69QTest
70-----
71
QTest is a device emulation testing framework.  It can be very useful to test
device models; it can also control certain aspects of QEMU (such as virtual
clock stepping) using a special purpose "qtest" protocol.  Refer to
:doc:`qtest` for more details.
76
77QTest cases can be executed with
78
79.. code::
80
81   make check-qtest
82
83QAPI schema tests
84-----------------
85
86The QAPI schema tests validate the QAPI parser used by QMP, by feeding
87predefined input to the parser and comparing the result with the reference
88output.
89
90The input/output data is managed under the ``tests/qapi-schema`` directory.
91Each test case includes four files that have a common base name:
92
93  * ``${casename}.json`` - the file contains the JSON input for feeding the
94    parser
95  * ``${casename}.out`` - the file contains the expected stdout from the parser
96  * ``${casename}.err`` - the file contains the expected stderr from the parser
97  * ``${casename}.exit`` - the expected error code
98
Consider adding a new QAPI schema test when you are making a change to the QAPI
parser (either fixing a bug or extending/modifying the syntax). To do this:
101
1021. Add four files for the new case as explained above. For example:
103
104  ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``.
105
1062. Add the new test in ``tests/Makefile.include``. For example:
107
108  ``qapi-schema += foo.json``
109
110check-block
111-----------
112
113``make check-block`` runs a subset of the block layer iotests (the tests that
114are in the "auto" group).
115See the "QEMU iotests" section below for more information.
116
117GCC gcov support
118----------------
119
``gcov`` is a GCC tool to analyze test coverage by
instrumenting the tested code. To use it, configure QEMU with
the ``--enable-gcov`` option and build. Then run ``make check`` as usual.
123
If you want to gather coverage information on a single test, the ``make
clean-gcda`` target can be used to delete any existing coverage
information before running that test.
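
For example, a coverage run focused on the x86-64 qtests might look something
like this (the exact targets available depend on your build configuration):

.. code::

  make clean-gcda
  make check-qtest-x86_64
  make coverage-html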
127
You can generate an HTML coverage report by executing ``make
coverage-html``, which will create
``meson-logs/coveragereport/index.html``.
131
132Further analysis can be conducted by running the ``gcov`` command
133directly on the various .gcda output files. Please read the ``gcov``
134documentation for more information.
135
136QEMU iotests
137============
138
QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
framework widely used to test block layer related features. It is higher level
than the "make check" tests, and almost all of the code is written as bash or
Python scripts.  The success criterion is comparison against golden reference
output, and the test files are named with numbers.
144
145To run iotests, make sure QEMU is built successfully, then switch to the
146``tests/qemu-iotests`` directory under the build directory, and run ``./check``
147with desired arguments from there.
148
By default, the "raw" format and the "file" protocol are used; all tests will be
executed, except the unsupported ones. You can override the format and protocol
with arguments:
152
153.. code::
154
155  # test with qcow2 format
156  ./check -qcow2
157  # or test a different protocol
158  ./check -nbd
159
160It's also possible to list test numbers explicitly:
161
162.. code::
163
164  # run selected cases with qcow2 format
165  ./check -qcow2 001 030 153
166
The cache mode can be selected with the "-c" option, which may help reveal bugs
that are specific to a certain cache mode.
169
170More options are supported by the ``./check`` script, run ``./check -h`` for
171help.
172
173Writing a new test case
174-----------------------
175
Consider writing a test case when you are making any changes to the block
layer. An iotest case is usually the right choice for that. There are already
many test cases, so it is possible that extending one of them may achieve the
goal and save the boilerplate of creating a new one.  (Unfortunately, there
isn't a 100% reliable way to find a related one out of hundreds of tests.  One
approach is using ``git grep``.)
182
Usually an iotest case consists of two files. One is an executable that
produces output to stdout and stderr, the other is the expected reference
output. They are given the same number in their file names, e.g. test script
``055`` and reference output ``055.out``.
187
In rare cases, when outputs differ between cache mode ``none`` and others, a
``.out.nocache`` file is added. In other cases, when outputs differ between
image formats, more than one ``.out`` file is created, ending with the
respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.
192
193There isn't a hard rule about how to write a test script, but a new test is
194usually a (copy and) modification of an existing case.  There are a few
195commonly used ways to create a test:
196
* A Bash script. It will make use of several environment variables related
  to the testing procedure, and can source a group of ``common.*`` libraries
  for some common helper routines.
200
* A Python unittest script. Import ``iotests`` and create a subclass of
  ``iotests.QMPTestCase``, then call the ``iotests.main`` method. The downside
  of this approach is that the output is sparse, and such scripts are
  considered harder to debug.
205
* A simple Python script without using the unittest module. This could also
  import ``iotests`` for launching QEMU, utilities etc., but it doesn't inherit
  from ``iotests.QMPTestCase`` and therefore doesn't use the Python unittest
  execution. This is a combination of the first two approaches.
210
211Pick the language per your preference since both Bash and Python have
212comparable library support for invoking and interacting with QEMU programs. If
213you opt for Python, it is strongly recommended to write Python 3 compatible
214code.
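
For instance, a minimal unittest-style Python case might be sketched as below.
The helper names come from ``tests/qemu-iotests/iotests.py`` and may vary
between QEMU versions, so treat this as an outline rather than a ready-made
test:

.. code::

  #!/usr/bin/env python3
  # group: quick
  #
  # Sketch of a unittest-style iotest case.

  import iotests

  class TestExample(iotests.QMPTestCase):
      def setUp(self):
          # Launch a VM; a test image could be created with qemu_img()
          # and attached with self.vm.add_drive() before launching.
          self.vm = iotests.VM()
          self.vm.launch()

      def tearDown(self):
          self.vm.shutdown()

      def test_query_block(self):
          result = self.vm.qmp('query-block')
          self.assert_qmp(result, 'return', [])

  if __name__ == '__main__':
      iotests.main(supported_fmts=['raw', 'qcow2'])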
215
216Both Python and Bash frameworks in iotests provide helpers to manage test
217images. They can be used to create and clean up images under the test
218directory. If no I/O or any protocol specific feature is needed, it is often
219more convenient to use the pseudo block driver, ``null-co://``, as the test
220image, which doesn't require image creation or cleaning up. Avoid system-wide
221devices or files whenever possible, such as ``/dev/null`` or ``/dev/zero``.
222Otherwise, image locking implications have to be considered.  For example,
223another application on the host may have locked the file, possibly leading to a
test failure.  If using such devices is explicitly desired, consider adding
the ``locking=off`` option to disable image locking.
226
227Debugging a test case
228-----------------------
229The following options to the ``check`` script can be useful when debugging
230a failing test:
231
232* ``-gdb`` wraps every QEMU invocation in a ``gdbserver``, which waits for a
233  connection from a gdb client.  The options given to ``gdbserver`` (e.g. the
234  address on which to listen for connections) are taken from the ``$GDB_OPTIONS``
235  environment variable.  By default (if ``$GDB_OPTIONS`` is empty), it listens on
236  ``localhost:12345``.
237  It is possible to connect to it for example with
238  ``gdb -iex "target remote $addr"``, where ``$addr`` is the address
239  ``gdbserver`` listens on.
  If the ``-gdb`` option is not used, ``$GDB_OPTIONS`` is ignored,
  regardless of whether it is set or not. An example invocation is shown
  after this list.
242
243* ``-valgrind`` attaches a valgrind instance to QEMU. If it detects
244  warnings, it will print and save the log in
245  ``$TEST_DIR/<valgrind_pid>.valgrind``.
246  The final command line will be ``valgrind --log-file=$TEST_DIR/
247  <valgrind_pid>.valgrind --error-exitcode=99 $QEMU ...``
248
249* ``-d`` (debug) just increases the logging verbosity, showing
250  for example the QMP commands and answers.
251
252* ``-p`` (print) redirects QEMU’s stdout and stderr to the test output,
253  instead of saving it into a log file in
254  ``$TEST_DIR/qemu-machine-<random_string>``.
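
As an example of the ``-gdb`` option described above, a debugging session for a
single test could be started like this (the address and test number are only
placeholders):

.. code::

  # in one terminal: every QEMU started by the test waits for a gdb connection
  GDB_OPTIONS=localhost:12345 ./check -qcow2 -gdb 040

  # in another terminal: attach to the waiting QEMU
  gdb -iex "target remote localhost:12345"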
255
256Test case groups
257----------------
258
259"Tests may belong to one or more test groups, which are defined in the form
260of a comment in the test source file. By convention, test groups are listed
261in the second line of the test file, after the "#!/..." line, like this:
262
263.. code::
264
265  #!/usr/bin/env python3
266  # group: auto quick
267  #
268  ...
269
Another way of defining groups is creating the ``tests/qemu-iotests/group.local``
file. This should be used only for downstream purposes (the file should never
appear upstream). It may be used for defining some downstream test groups
or for temporarily disabling tests, like this:
274
275.. code::
276
277  # groups for some company downstream process
278  #
279  # ci - tests to run on build
280  # down - our downstream tests, not for upstream
281  #
282  # Format of each line is:
283  # TEST_NAME TEST_GROUP [TEST_GROUP ]...
284
285  013 ci
286  210 disabled
287  215 disabled
288  our-ugly-workaround-test down ci
289
290Note that the following group names have a special meaning:
291
292- quick: Tests in this group should finish within a few seconds.
293
294- auto: Tests in this group are used during "make check" and should be
295  runnable in any case. That means they should run with every QEMU binary
296  (also non-x86), with every QEMU configuration (i.e. must not fail if
297  an optional feature is not compiled in - but reporting a "skip" is ok),
work at least with the qcow2 file format, work with all kinds of host
299  filesystems and users (e.g. "nobody" or "root") and must not take too
300  much memory and disk space (since CI pipelines tend to fail otherwise).
301
302- disabled: Tests in this group are disabled and ignored by check.
303
304.. _container-ref:
305
306Container based tests
307=====================
308
309Introduction
310------------
311
312The container testing framework in QEMU utilizes public images to
313build and test QEMU in predefined and widely accessible Linux
314environments. This makes it possible to expand the test coverage
315across distros, toolchain flavors and library versions. The support
316was originally written for Docker although we also support Podman as
an alternative container runtime. Although many of the target
names and scripts are prefixed with "docker", the system will
automatically run on whichever is configured.
320
321The container images are also used to augment the generation of tests
322for testing TCG. See :ref:`checktcg-ref` for more details.
323
324Docker Prerequisites
325--------------------
326
327Install "docker" with the system package manager and start the Docker service
328on your development machine, then make sure you have the privilege to run
Docker commands. Typically this means setting up a passwordless ``sudo docker``
command or logging in as root. For example:
331
332.. code::
333
334  $ sudo yum install docker
335  $ # or `apt-get install docker` for Ubuntu, etc.
336  $ sudo systemctl start docker
337  $ sudo docker ps
338
339The last command should print an empty table, to verify the system is ready.
340
341An alternative method to set up permissions is by adding the current user to
342"docker" group and making the docker daemon socket file (by default
343``/var/run/docker.sock``) accessible to the group:
344
345.. code::
346
347  $ sudo groupadd docker
348  $ sudo usermod $USER -a -G docker
349  $ sudo chown :docker /var/run/docker.sock
350
Note that any one of the above configurations makes it possible for the user to
352exploit the whole host with Docker bind mounting or other privileged
353operations.  So only do it on development machines.
354
355Podman Prerequisites
356--------------------
357
358Install "podman" with the system package manager.
359
360.. code::
361
362  $ sudo dnf install podman
363  $ podman ps
364
365The last command should print an empty table, to verify the system is ready.
366
367Quickstart
368----------
369
From the source tree, type ``make docker-help`` to see the help. Testing
371can be started without configuring or building QEMU (``configure`` and
372``make`` are done in the container, with parameters defined by the
373make target):
374
375.. code::
376
377  make docker-test-build@centos8
378
379This will create a container instance using the ``centos8`` image (the image
380is downloaded and initialized automatically), in which the ``test-build`` job
381is executed.
382
383Registry
384--------
385
386The QEMU project has a container registry hosted by GitLab at
387``registry.gitlab.com/qemu-project/qemu`` which will automatically be
used to pull in pre-built layers. This avoids the unnecessary strain on
the distro archives caused by multiple developers running the same
container build steps over and over again. This can be overridden
391locally by using the ``NOCACHE`` build option:
392
393.. code::
394
395   make docker-image-debian10 NOCACHE=1
396
397Images
398------
399
Along with many other images, the ``centos8`` image is defined in a Dockerfile
in ``tests/docker/dockerfiles/``, called ``centos8.docker``. The ``make docker-help``
command will list all the available images.
403
404To add a new image, simply create a new ``.docker`` file under the
405``tests/docker/dockerfiles/`` directory.
406
407A ``.pre`` script can be added beside the ``.docker`` file, which will be
408executed before building the image under the build context directory. This is
409mainly used to do necessary host side setup. One such setup is ``binfmt_misc``,
410for example, to make qemu-user powered cross build containers work.
411
412Tests
413-----
414
415Different tests are added to cover various configurations to build and test
416QEMU.  Docker tests are the executables under ``tests/docker`` named
417``test-*``. They are typically shell scripts and are built on top of a shell
418library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
419source and build it.
420
421The full list of tests is printed in the ``make docker-help`` help.
422
423Debugging a Docker test failure
424-------------------------------
425
When CI tasks, maintainers or you yourself report a Docker test failure, follow
the steps below to debug it:
428
4291. Locally reproduce the failure with the reported command line. E.g. run
430   ``make docker-test-mingw@fedora J=8``.
4312. Add "V=1" to the command line, try again, to see the verbose output.
3. Further add "DEBUG=1" to the command line. This will pause at a shell prompt
   in the container right before testing starts. You could either manually
   build QEMU and run tests from there, or press Ctrl-D to let the Docker
   testing continue.
4. If you press Ctrl-D, the same building and testing procedure will begin, and
   will hopefully run into the error again. After that, you will be dropped back
   to the prompt for debugging.
439
440Options
441-------
442
443Various options can be used to affect how Docker tests are done. The full
444list is in the ``make docker`` help text. The frequently used ones are:
445
446* ``V=1``: the same as in top level ``make``. It will be propagated to the
447  container and enable verbose output.
448* ``J=$N``: the number of parallel tasks in make commands in the container,
449  similar to the ``-j $N`` option in top level ``make``. (The ``-j`` option in
450  top level ``make`` will not be propagated into the container.)
451* ``DEBUG=1``: enables debug. See the previous "Debugging a Docker test
452  failure" section.
453
454Thread Sanitizer
455================
456
457Thread Sanitizer (TSan) is a tool which can detect data races.  QEMU supports
458building and testing with this tool.
459
460For more information on TSan:
461
462https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual
463
464Thread Sanitizer in Docker
465---------------------------
TSan is currently supported in the ubuntu2004 docker image.

The ``test-tsan`` test will build QEMU using TSan and then run ``make check``.
469
470.. code::
471
472  make docker-test-tsan@ubuntu2004
473
TSan warnings under docker are placed in files located at ``build/tsan/``.

We recommend using ``DEBUG=1`` to allow launching the test from inside the
container, and to allow review of the warnings generated by TSan.
478
479Building and Testing with TSan
480------------------------------
481
482It is possible to build and test with TSan, with a few additional steps.
483These steps are normally done automatically in the docker.
484
There is a one-time patch needed in clang-9 or clang-10 at this time:
486
487.. code::
488
489  sed -i 's/^const/static const/g' \
490      /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h
491
492To configure the build for TSan:
493
494.. code::
495
496  ../configure --enable-tsan --cc=clang-10 --cxx=clang++-10 \
497               --disable-werror --extra-cflags="-O0"
498
499The runtime behavior of TSAN is controlled by the TSAN_OPTIONS environment
500variable.
501
502More information on the TSAN_OPTIONS can be found here:
503
504https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags
505
506For example:
507
508.. code::
509
510  export TSAN_OPTIONS=suppressions=<path to qemu>/tests/tsan/suppressions.tsan \
511                      detect_deadlocks=false history_size=7 exitcode=0 \
512                      log_path=<build path>/tsan/tsan_warning
513
514The above exitcode=0 has TSan continue without error if any warnings are found.
515This allows for running the test and then checking the warnings afterwards.
516If you want TSan to stop and exit with error on warnings, use exitcode=66.
517
518TSan Suppressions
519-----------------
Keep in mind that even when TSan reports a data race, there might be no actual
bug.  TSan provides several
different mechanisms for suppressing warnings.  In general it is recommended
to fix the code, if possible, to eliminate the data race rather than suppress
the warning.
525
526A few important files for suppressing warnings are:
527
528tests/tsan/suppressions.tsan - Has TSan warnings we wish to suppress at runtime.
529The comment on each suppression will typically indicate why we are
530suppressing it.  More information on the file format can be found here:
531
532https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions
533
534tests/tsan/blacklist.tsan - Has TSan warnings we wish to disable
535at compile time for test or debug.
536Add flags to configure to enable:
537
538"--extra-cflags=-fsanitize-blacklist=<src path>/tests/tsan/blacklist.tsan"
539
540More information on the file format can be found here under "Blacklist Format":
541
542https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags
543
544TSan Annotations
545----------------
546include/qemu/tsan.h defines annotations.  See this file for more descriptions
547of the annotations themselves.  Annotations can be used to suppress
548TSan warnings or give TSan more information so that it can detect proper
549relationships between accesses of data.
550
551Annotation examples can be found here:
552
553https://github.com/llvm/llvm-project/tree/master/compiler-rt/test/tsan/
554
555Good files to start with are: annotate_happens_before.cpp and ignore_race.cpp
556
557The full set of annotations can be found here:
558
559https://github.com/llvm/llvm-project/blob/master/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp
560
561VM testing
562==========
563
This test suite contains scripts that bootstrap various guest images that have
the necessary packages to build QEMU. The basic usage is documented in the
``Makefile`` help, which is displayed with ``make vm-help``.
567
568Quickstart
569----------
570
Run ``make vm-help`` to list available make targets. Invoke a specific make
command to run a build test in an image. For example, ``make vm-build-freebsd``
573will build the source tree in the FreeBSD image. The command can be executed
574from either the source tree or the build dir; if the former, ``./configure`` is
575not needed. The command will then generate the test image in ``./tests/vm/``
576under the working directory.
577
578Note: images created by the scripts accept a well-known RSA key pair for SSH
579access, so they SHOULD NOT be exposed to external interfaces if you are
580concerned about attackers taking control of the guest and potentially
581exploiting a QEMU security bug to compromise the host.
582
583QEMU binaries
584-------------
585
By default, ``qemu-system-x86_64`` is searched for in ``$PATH`` to run the
guest. If there isn't one, or if it is older than 2.10, the test won't work. In
this case, provide the QEMU binary in an environment variable:
``QEMU=/path/to/qemu-2.10+``.

Likewise, the path to ``qemu-img`` can be set in the ``QEMU_IMG`` environment
variable.
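
For example (the paths are placeholders for your own builds):

.. code::

  export QEMU=/path/to/qemu-system-x86_64
  export QEMU_IMG=/path/to/qemu-img
  make vm-build-freebsd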
591
592Make jobs
593---------
594
The ``-j$X`` option in the make command line is not propagated into the VM;
specify ``J=$X`` to control the make jobs in the guest.
597
598Debugging
599---------
600
601Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
602debugging and verbose output. If this is not enough, see the next section.
603``V=1`` will be propagated down into the make jobs in the guest.
604
605Manual invocation
606-----------------
607
608Each guest script is an executable script with the same command line options.
609For example to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:
610
611.. code::
612
613    $ cd $QEMU_SRC/tests/vm
614
615    # To bootstrap the image
616    $ ./netbsd --build-image --image /var/tmp/netbsd.img
617    <...>
618
619    # To run an arbitrary command in guest (the output will not be echoed unless
620    # --debug is added)
621    $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a
622
623    # To build QEMU in guest
624    $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC
625
626    # To get to an interactive shell
627    $ ./netbsd --interactive --image /var/tmp/netbsd.img sh
628
629Adding new guests
630-----------------
631
632Please look at existing guest scripts for how to add new guests.
633
Most importantly, create a subclass of ``BaseVM``, implement the ``build_image()``
method and define ``BUILD_SCRIPT``, then finally call ``basevm.main()`` from
the script's ``main()``. A skeleton is sketched after the list below.
637
638* Usually in ``build_image()``, a template image is downloaded from a
639  predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache and
640  the checksum, so consider using it.
641
* Once the image is downloaded, users, an SSH server and the QEMU build
  dependencies should be set up:
644
645  - Root password set to ``BaseVM.ROOT_PASS``
646  - User ``BaseVM.GUEST_USER`` is created, and password set to
647    ``BaseVM.GUEST_PASS``
648  - SSH service is enabled and started on boot,
649    ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
650    file of both root and the normal user
651  - DHCP client service is enabled and started on boot, so that it can
652    automatically configure the virtio-net-pci NIC and communicate with QEMU
653    user net (10.0.2.2)
654  - Necessary packages are installed to untar the source tarball and build
655    QEMU
656
* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script that
  untars the QEMU source tarball, presented to the guest as a raw virtio-blk
  block device, then configures and builds it. Running "make check" is also
  recommended.
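
As a rough skeleton (all names, URLs and checksums below are hypothetical;
see an existing script such as ``tests/vm/netbsd`` for a complete, working
example):

.. code::

  #!/usr/bin/env python3
  #
  # Sketch of a new "foo" guest script.

  import sys
  import basevm

  class FooVM(basevm.BaseVM):
      name = "foo"
      arch = "x86_64"
      # The QEMU source tarball is presented to the guest as a raw
      # virtio-blk device; the placeholders are filled in by basevm.
      BUILD_SCRIPT = """
          set -e;
          cd $(mktemp -d);
          tar -xf /dev/vdb;
          ./configure {configure_opts};
          make --output-sync -j{jobs} {verbose};
          make --output-sync check -j{jobs} {verbose};
      """

      def build_image(self, img):
          # Download a template image (cached and checksum-verified), then
          # customize it: create users, enable sshd and DHCP, install the
          # build dependencies, and finally save the result as 'img'.
          cimg = self._download_with_cache(
              "https://example.org/images/foo-base.img.xz",  # hypothetical
              sha256sum="0123456789abcdef")                  # hypothetical

  if __name__ == "__main__":
      sys.exit(basevm.main(FooVM))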
661
662Image fuzzer testing
663====================
664
665An image fuzzer was added to exercise format drivers. Currently only qcow2 is
666supported. To start the fuzzer, run
667
668.. code::
669
670  tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2
671
Alternatively, a command other than "qemu-img info" can be tested by
changing the ``-c`` option.
674
675Acceptance tests using the Avocado Framework
676============================================
677
678The ``tests/acceptance`` directory hosts functional tests, also known
679as acceptance level tests.  They're usually higher level tests, and
680may interact with external resources and with various guest operating
681systems.
682
These tests are written using the Avocado Testing Framework (which must
be installed separately) in conjunction with the ``avocado_qemu.Test``
class, implemented at ``tests/acceptance/avocado_qemu``.
686
687Tests based on ``avocado_qemu.Test`` can easily:
688
689 * Customize the command line arguments given to the convenience
690   ``self.vm`` attribute (a QEMUMachine instance)
691
692 * Interact with the QEMU monitor, send QMP commands and check
693   their results
694
695 * Interact with the guest OS, using the convenience console device
696   (which may be useful to assert the effectiveness and correctness of
697   command line arguments or QMP commands)
698
699 * Interact with external data files that accompany the test itself
700   (see ``self.get_data()``)
701
702 * Download (and cache) remote data files, such as firmware and kernel
703   images
704
705 * Have access to a library of guest OS images (by means of the
706   ``avocado.utils.vmimage`` library)
707
708 * Make use of various other test related utilities available at the
709   test class itself and at the utility library:
710
711   - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
712   - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html
713
714Running tests
715-------------
716
717You can run the acceptance tests simply by executing:
718
719.. code::
720
721  make check-acceptance
722
This involves the automatic creation of a Python virtual environment
within the build tree (at ``tests/venv``) which will have all the
right dependencies, and will save test results also within the
build tree (at ``tests/results``).
727
728Note: the build environment must be using a Python 3 stack, and have
729the ``venv`` and ``pip`` packages installed.  If necessary, make sure
730``configure`` is called with ``--python=`` and that those modules are
731available.  On Debian and Ubuntu based systems, depending on the
732specific version, they may be on packages named ``python3-venv`` and
733``python3-pip``.
734
735The scripts installed inside the virtual environment may be used
736without an "activation".  For instance, the Avocado test runner
737may be invoked by running:
738
739 .. code::
740
741  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/acceptance/
742
743Manual Installation
744-------------------
745
746To manually install Avocado and its dependencies, run:
747
748.. code::
749
750  pip install --user avocado-framework
751
752Alternatively, follow the instructions on this link:
753
754  https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/installing.html
755
756Overview
757--------
758
759The ``tests/acceptance/avocado_qemu`` directory provides the
760``avocado_qemu`` Python module, containing the ``avocado_qemu.Test``
761class.  Here's a simple usage example:
762
763.. code::
764
765  from avocado_qemu import Test
766
767
768  class Version(Test):
769      """
770      :avocado: tags=quick
771      """
772      def test_qmp_human_info_version(self):
773          self.vm.launch()
774          res = self.vm.command('human-monitor-command',
775                                command_line='info version')
776          self.assertRegexpMatches(res, r'^(\d+\.\d+\.\d)')
777
778To execute your test, run:
779
780.. code::
781
782  avocado run version.py
783
784Tests may be classified according to a convention by using docstring
785directives such as ``:avocado: tags=TAG1,TAG2``.  To run all tests
786in the current directory, tagged as "quick", run:
787
788.. code::
789
790  avocado run -t quick .
791
792The ``avocado_qemu.Test`` base test class
793-----------------------------------------
794
The ``avocado_qemu.Test`` class has a number of characteristics that
are worth mentioning right away.
797
798First of all, it attempts to give each test a ready to use QEMUMachine
799instance, available at ``self.vm``.  Because many tests will tweak the
800QEMU command line, launching the QEMUMachine (by using ``self.vm.launch()``)
801is left to the test writer.
802
The base test class also has support for tests with more than one
QEMUMachine. The way to get machines is through the ``self.get_vm()``
method, which will return a QEMUMachine instance. The ``self.get_vm()``
method accepts arguments that will be passed to the QEMUMachine creation,
and also an optional ``name`` attribute so you can identify a specific
machine and get it more than once through the test's methods. A simple
and hypothetical example follows:
810
811.. code::
812
813  from avocado_qemu import Test
814
815
816  class MultipleMachines(Test):
817      def test_multiple_machines(self):
818          first_machine = self.get_vm()
819          second_machine = self.get_vm()
820          self.get_vm(name='third_machine').launch()
821
822          first_machine.launch()
823          second_machine.launch()
824
825          first_res = first_machine.command(
826              'human-monitor-command',
827              command_line='info version')
828
829          second_res = second_machine.command(
830              'human-monitor-command',
831              command_line='info version')
832
833          third_res = self.get_vm(name='third_machine').command(
834              'human-monitor-command',
835              command_line='info version')
836
          self.assertEqual(first_res, second_res)
          self.assertEqual(first_res, third_res)
838
At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all the
QEMUMachines.
841
842The ``avocado_qemu.LinuxTest`` base test class
843~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
844
The ``avocado_qemu.LinuxTest`` is a further specialization of the
``avocado_qemu.Test`` class, so it contains all the characteristics of
the latter plus some extra features.
848
849First of all, this base class is intended for tests that need to
850interact with a fully booted and operational Linux guest.  At this
851time, it uses a Fedora 31 guest image.  The most basic example looks
852like this:
853
854.. code::
855
856  from avocado_qemu import LinuxTest
857
858
859  class SomeTest(LinuxTest):
860
861      def test(self):
862          self.launch_and_wait()
863          self.ssh_command('some_command_to_be_run_in_the_guest')
864
865Please refer to tests that use ``avocado_qemu.LinuxTest`` under
866``tests/acceptance`` for more examples.
867
868QEMUMachine
869~~~~~~~~~~~
870
871The QEMUMachine API is already widely used in the Python iotests,
872device-crash-test and other Python scripts.  It's a wrapper around the
873execution of a QEMU binary, giving its users:
874
875 * the ability to set command line arguments to be given to the QEMU
876   binary
877
878 * a ready to use QMP connection and interface, which can be used to
879   send commands and inspect its results, as well as asynchronous
880   events
881
882 * convenience methods to set commonly used command line arguments in
883   a more succinct and intuitive way
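
As a rough sketch of direct usage (the class lives under ``python/qemu`` in the
QEMU source tree; the binary path and arguments below are only placeholders):

.. code::

  from qemu.machine import QEMUMachine

  # Wrap a QEMU binary; nothing is started until launch() is called.
  vm = QEMUMachine('/path/to/qemu-system-x86_64')
  vm.add_args('-nodefaults', '-machine', 'none')
  vm.launch()
  # command() sends a QMP command and returns the value of its 'return' key.
  print(vm.command('query-status'))
  vm.shutdown()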
884
885QEMU binary selection
886~~~~~~~~~~~~~~~~~~~~~
887
888The QEMU binary used for the ``self.vm`` QEMUMachine instance will
889primarily depend on the value of the ``qemu_bin`` parameter.  If it's
890not explicitly set, its default value will be the result of a dynamic
probe in the same source tree.  A suitable binary will be one that
targets the architecture matching the host machine.
893
894Based on this description, test writers will usually rely on one of
895the following approaches:
896
8971) Set ``qemu_bin``, and use the given binary
898
8992) Do not set ``qemu_bin``, and use a QEMU binary named like
900   "qemu-system-${arch}", either in the current
901   working directory, or in the current source tree.
902
903The resulting ``qemu_bin`` value will be preserved in the
904``avocado_qemu.Test`` as an attribute with the same name.
905
906Attribute reference
907-------------------
908
909Besides the attributes and methods that are part of the base
910``avocado.Test`` class, the following attributes are available on any
911``avocado_qemu.Test`` instance.
912
913vm
914~~
915
916A QEMUMachine instance, initially configured according to the given
917``qemu_bin`` parameter.
918
919arch
920~~~~
921
922The architecture can be used on different levels of the stack, e.g. by
923the framework or by the test itself.  At the framework level, it will
924currently influence the selection of a QEMU binary (when one is not
925explicitly given).
926
927Tests are also free to use this attribute value, for their own needs.
928A test may, for instance, use the same value when selecting the
929architecture of a kernel or disk image to boot a VM with.
930
931The ``arch`` attribute will be set to the test parameter of the same
932name.  If one is not given explicitly, it will either be set to
933``None``, or, if the test is tagged with one (and only one)
934``:avocado: tags=arch:VALUE`` tag, it will be set to ``VALUE``.
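
For example, a test meant to run on an aarch64 guest could be tagged like this
(a sketch; with no explicit ``qemu_bin``, the framework would then look for a
``qemu-system-aarch64`` binary as described above).  The ``cpu`` and
``machine`` tags described below work the same way:

.. code::

  from avocado_qemu import Test

  class Aarch64Machine(Test):
      """
      :avocado: tags=arch:aarch64
      """
      def test_boot(self):
          # self.arch is 'aarch64' here, taken from the tag above.
          self.vm.launch()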
935
936cpu
937~~~
938
939The cpu model that will be set to all QEMUMachine instances created
940by the test.
941
942The ``cpu`` attribute will be set to the test parameter of the same
943name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
945``:avocado: tags=cpu:VALUE`` tag, it will be set to ``VALUE``.
946
947machine
948~~~~~~~
949
950The machine type that will be set to all QEMUMachine instances created
951by the test.
952
953The ``machine`` attribute will be set to the test parameter of the same
954name.  If one is not given explicitly, it will either be set to
955``None``, or, if the test is tagged with one (and only one)
956``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.
957
958qemu_bin
959~~~~~~~~
960
961The preserved value of the ``qemu_bin`` parameter or the result of the
962dynamic probe for a QEMU binary in the current working directory or
963source tree.
964
965LinuxTest
966~~~~~~~~~
967
968Besides the attributes present on the ``avocado_qemu.Test`` base
969class, the ``avocado_qemu.LinuxTest`` adds the following attributes:
970
971distro
972......
973
974The name of the Linux distribution used as the guest image for the
975test.  The name should match the **Provider** column on the list
976of images supported by the avocado.utils.vmimage library:
977
978https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
979
980distro_version
981..............
982
The version of the Linux distribution used as the guest image for the
984test.  The name should match the **Version** column on the list
985of images supported by the avocado.utils.vmimage library:
986
987https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
988
989distro_checksum
990...............
991
992The sha256 hash of the guest image file used for the test.
993
994If this value is not set in the code or by a test parameter (with the
995same name), no validation on the integrity of the image will be
996performed.
997
998Parameter reference
999-------------------
1000
1001To understand how Avocado parameters are accessed by tests, and how
1002they can be passed to tests, please refer to::
1003
1004  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#accessing-test-parameters
1005
1006Parameter values can be easily seen in the log files, and will look
1007like the following:
1008
1009.. code::
1010
  PARAMS (key=qemu_bin, path=*, default=./qemu-system-x86_64) => './qemu-system-x86_64'
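
One way to set a parameter from the command line is Avocado's ``-p``
(``--test-parameter``) option, for example (a sketch; the binary path is a
placeholder):

.. code::

  tests/venv/bin/avocado run -p qemu_bin=/path/to/qemu-system-x86_64 tests/acceptance/version.py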
1012
1013arch
1014~~~~
1015
1016The architecture that will influence the selection of a QEMU binary
1017(when one is not explicitly given).
1018
1019Tests are also free to use this parameter value, for their own needs.
1020A test may, for instance, use the same value when selecting the
1021architecture of a kernel or disk image to boot a VM with.
1022
1023This parameter has a direct relation with the ``arch`` attribute.  If
1024not given, it will default to None.
1025
1026cpu
1027~~~
1028
1029The cpu model that will be set to all QEMUMachine instances created
1030by the test.
1031
1032machine
1033~~~~~~~
1034
1035The machine type that will be set to all QEMUMachine instances created
1036by the test.
1037
1038
1039qemu_bin
1040~~~~~~~~
1041
1042The exact QEMU binary to be used on QEMUMachine.
1043
1044LinuxTest
1045~~~~~~~~~
1046
1047Besides the parameters present on the ``avocado_qemu.Test`` base
1048class, the ``avocado_qemu.LinuxTest`` adds the following parameters:
1049
1050distro
1051......
1052
1053The name of the Linux distribution used as the guest image for the
1054test.  The name should match the **Provider** column on the list
1055of images supported by the avocado.utils.vmimage library:
1056
1057https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
1058
1059distro_version
1060..............
1061
The version of the Linux distribution used as the guest image for the
1063test.  The name should match the **Version** column on the list
1064of images supported by the avocado.utils.vmimage library:
1065
1066https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
1067
1068distro_checksum
1069...............
1070
1071The sha256 hash of the guest image file used for the test.
1072
If this value is not set in the code or by this parameter, no
validation on the integrity of the image will be performed.
1075
1076Skipping tests
1077--------------
The Avocado framework provides Python decorators which allow for easily skipping
tests under certain conditions, for example, when a binary is missing on
the test system or when the running environment is a CI system. For further
information about those decorators, please refer to::
1082
1083  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#skipping-tests
1084
While the conditions for skipping tests are often specific to each test, there
are recurring scenarios identified by the QEMU developers, and the use of
environment variables has become a kind of standard way to enable/disable tests.

Here is a list of the most commonly used variables:
1090
1091AVOCADO_ALLOW_LARGE_STORAGE
1092~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tests which are going to fetch or produce assets considered *large* are not
going to run unless ``AVOCADO_ALLOW_LARGE_STORAGE=1`` is exported in
the environment.

The definition of *large* is a bit arbitrary here, but it usually means an
asset which occupies at least 1GB of disk space when uncompressed.
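
A test can then guard itself with something like the following sketch (the
class and test names are hypothetical; the pattern mirrors the ``GITLAB_CI``
example further below):

.. code::

  import os

  from avocado import skipUnless
  from avocado_qemu import Test

  class BigImageTest(Test):

      @skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'), 'storage limited')
      def test_full_boot(self):
          # fetch and boot a multi-gigabyte guest image here
          self.vm.launch()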
1099
1100AVOCADO_ALLOW_UNTRUSTED_CODE
1101~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are tests which will boot a kernel image or firmware that can be
considered not safe to run on the developer's workstation, thus they are
skipped by default. The definition of *not safe* is also arbitrary, but
usually it means a blob whose source or build process is not publicly
available.

You should export ``AVOCADO_ALLOW_UNTRUSTED_CODE=1`` in the environment in
order to allow tests which make use of those kinds of assets.
1110
1111AVOCADO_TIMEOUT_EXPECTED
1112~~~~~~~~~~~~~~~~~~~~~~~~
The Avocado framework has a timeout mechanism which interrupts tests to avoid the
test suite getting stuck. The timeout value can be set via a test parameter or a
property defined in the test class; for further details see::
1116
1117  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#setting-a-test-timeout
1118
1119Even though the timeout can be set by the test developer, there are some tests
1120that may not have a well-defined limit of time to finish under certain
1121conditions. For example, tests that take longer to execute when QEMU is
1122compiled with debug flags. Therefore, the ``AVOCADO_TIMEOUT_EXPECTED`` variable
1123has been used to determine whether those tests should run or not.
1124
1125GITLAB_CI
1126~~~~~~~~~
A number of tests are flagged to not run on the GitLab CI, usually because
they proved to be flaky or there are constraints on the CI environment which
would make them fail. If you encounter a similar situation then use that
variable as shown in the code snippet below to skip the test:
1131
1132.. code::
1133
1134  @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
1135  def test(self):
1136      do_something()
1137
1138Uninstalling Avocado
1139--------------------
1140
1141If you've followed the manual installation instructions above, you can
1142easily uninstall Avocado.  Start by listing the packages you have
1143installed::
1144
1145  pip list --user
1146
1147And remove any package you want with::
1148
1149  pip uninstall <package_name>
1150
1151If you've used ``make check-acceptance``, the Python virtual environment where
1152Avocado is installed will be cleaned up as part of ``make check-clean``.
1153
1154.. _checktcg-ref:
1155
1156Testing with "make check-tcg"
1157=============================
1158
1159The check-tcg tests are intended for simple smoke tests of both
linux-user and softmmu TCG functionality. However, to build test
1161programs for guest targets you need to have cross compilers available.
1162If your distribution supports cross compilers you can do something as
1163simple as::
1164
1165  apt install gcc-aarch64-linux-gnu
1166
1167The configure script will automatically pick up their presence.
Sometimes compilers have slightly odd names, so you can point configure
at them by passing in the appropriate option
for the architecture in question, for example::
1171
1172  $(configure) --cross-cc-aarch64=aarch64-cc
1173
1174There is also a ``--cross-cc-flags-ARCH`` flag in case additional
1175compiler flags are needed to build for a given target.
1176
If you have the ability to run containers as the user, the build system
1178will automatically use them where no system compiler is available. For
1179architectures where we also support building QEMU we will generally
1180use the same container to build tests. However there are a number of
1181additional containers defined that have a minimal cross-build
1182environment that is only suitable for building test cases. Sometimes
1183we may use a bleeding edge distribution for compiler features needed
1184for test cases that aren't yet in the LTS distros we support for QEMU
1185itself.
1186
1187See :ref:`container-ref` for more details.
1188
1189Running subset of tests
1190-----------------------
1191
1192You can build the tests for one architecture::
1193
1194  make build-tcg-tests-$TARGET
1195
1196And run with::
1197
1198  make run-tcg-tests-$TARGET
1199
Adding ``V=1`` to the invocation will show the details of how to
invoke QEMU for the test, which is useful for debugging tests.
1202
1203TCG test dependencies
1204---------------------
1205
1206The TCG tests are deliberately very light on dependencies and are
1207either totally bare with minimal gcc lib support (for softmmu tests)
1208or just glibc (for linux-user tests). This is because getting a cross
1209compiler to work with additional libraries can be challenging.
1210
1211Other TCG Tests
1212---------------
1213
1214There are a number of out-of-tree test suites that are used for more
1215extensive testing of processor features.
1216
1217KVM Unit Tests
1218~~~~~~~~~~~~~~
1219
The KVM unit tests are designed to run as a Guest OS under KVM but
there is no reason why they can't exercise the TCG as well. The suite
provides a minimal OS kernel with hooks for enabling the MMU as well
as reporting test results via a special device::
1224
1225  https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
1226
1227Linux Test Project
1228~~~~~~~~~~~~~~~~~~
1229
1230The LTP is focused on exercising the syscall interface of a Linux
1231kernel. It checks that syscalls behave as documented and strives to
1232exercise as many corner cases as possible. It is a useful test suite
1233to run to exercise QEMU's linux-user code::
1234
1235  https://linux-test-project.github.io/
1236