===============
Testing in QEMU
===============

This document describes the testing infrastructure in QEMU.

Testing with "make check"
=========================

The "make check" testing family includes most of the C based tests in QEMU. For
quick help, run ``make check-help`` from the source tree.

The usual way to run these tests is:

.. code::

  make check

which includes QAPI schema tests, unit tests, QTests and some iotests.
Different sub-types of "make check" tests will be explained below.

Before running tests, it is best to build QEMU programs first. Some tests
expect the executables to exist and will fail with obscure messages if they
cannot find them.

Unit tests
----------

Unit tests, which can be invoked with ``make check-unit``, are simple C tests
that typically link to individual QEMU object files and exercise them by
calling exported functions.

If you are writing new code in QEMU, consider adding a unit test, especially
for utility modules that are relatively stateless or have few dependencies. To
add a new unit test:

1. Create a new source file. For example, ``tests/foo-test.c``.

2. Write the test. Normally you would include the header file which exports
   the module API, then verify the interface behaves as expected from your
   test. The test code should be organized with the glib testing framework.
   Copying and modifying an existing test is usually a good idea.

3. Add the test to ``tests/meson.build``. The unit tests are listed in a
   dictionary called ``tests``.  The values are any additional sources and
   dependencies to be linked with the test.  For a simple test whose source
   is in ``tests/foo-test.c``, it is enough to add an entry like::

     {
       ...
       'foo-test': [],
       ...
     }

Since unit tests don't require environment variables, the simplest way to debug
a unit test failure is often directly invoking it or even running it under
``gdb``. However, there can still be differences in behavior between ``make``
invocations and your manual run, due to the ``$MALLOC_PERTURB_`` environment
variable (which affects memory reclamation and catches invalid pointers better)
and gtester options. If necessary, you can run

.. code::

  make check-unit V=1

and copy the actual command line which executes the unit test, then run
it from the command line.

QTest
-----

QTest is a device emulation testing framework.  It can be very useful to test
device models; it can also control certain aspects of QEMU (such as virtual
clock stepping) via a special-purpose "qtest" protocol.  Refer to
:doc:`qtest` for more details.

QTest cases can be executed with

.. code::

   make check-qtest

QAPI schema tests
-----------------

The QAPI schema tests validate the QAPI parser used by QMP, by feeding
predefined input to the parser and comparing the result with the reference
output.

The input/output data is managed under the ``tests/qapi-schema`` directory.
Each test case includes four files that have a common base name:

  * ``${casename}.json`` - the file contains the JSON input for feeding the
    parser
  * ``${casename}.out`` - the file contains the expected stdout from the parser
  * ``${casename}.err`` - the file contains the expected stderr from the parser
  * ``${casename}.exit`` - the expected exit code

Consider adding a new QAPI schema test when you are making a change to the QAPI
parser (either fixing a bug or extending/modifying the syntax). To do this:

1. Add four files for the new case as explained above. For example:

  ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``.

2. Add the new test in ``tests/Makefile.include``. For example:

  ``qapi-schema += foo.json``

check-block
-----------

``make check-block`` runs a subset of the block layer iotests (the tests that
are in the "auto" group).
See the "QEMU iotests" section below for more information.

GCC gcov support
----------------

``gcov`` is a GCC tool to analyze the testing coverage by
instrumenting the tested code. To use it, configure QEMU with the
``--enable-gcov`` option and build. Then run ``make check`` as usual.

If you want to gather coverage information on a single test, the ``make
clean-gcda`` target can be used to delete any existing coverage
information before running that test.

You can generate an HTML coverage report by executing ``make
coverage-html``, which will create
``meson-logs/coveragereport/index.html``.

Further analysis can be conducted by running the ``gcov`` command
directly on the various ``.gcda`` output files. Please read the ``gcov``
documentation for more information.

QEMU iotests
============

QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
framework widely used to test block layer related features. It is higher level
than "make check" tests and 99% of the code is written in bash or Python
scripts.  The success criterion is golden output comparison, and the
test files are named with numbers.

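The golden output rule can be sketched in a few lines of Python. This is a
simplified illustration of the pass/fail criterion, not code from the harness
itself, and the file names are hypothetical:

```python
# Simplified illustration of the golden-output criterion used by ./check;
# not code from the harness itself.  File names are hypothetical.
import difflib

def passes(actual: str, reference: str) -> bool:
    """A test passes exactly when its output matches the reference."""
    return actual == reference

def failure_diff(actual: str, reference: str) -> str:
    """A unified diff similar to what ./check prints on a mismatch."""
    return "".join(difflib.unified_diff(
        reference.splitlines(keepends=True),
        actual.splitlines(keepends=True),
        fromfile="055.out", tofile="055.out.bad"))
```
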
To run iotests, make sure QEMU is built successfully, then switch to the
``tests/qemu-iotests`` directory under the build directory, and run ``./check``
with desired arguments from there.

By default, the "raw" format and the "file" protocol are used; all tests will
be executed, except the unsupported ones. You can override the format and
protocol with arguments:

.. code::

  # test with qcow2 format
  ./check -qcow2
  # or test a different protocol
  ./check -nbd

It's also possible to list test numbers explicitly:

.. code::

  # run selected cases with qcow2 format
  ./check -qcow2 001 030 153

Cache mode can be selected with the "-c" option, which may help reveal bugs
that are specific to certain cache modes.

More options are supported by the ``./check`` script; run ``./check -h`` for
help.

Writing a new test case
-----------------------

Consider writing a test case when you are making any changes to the block
layer. An iotest case is usually the choice for that. There are already many
test cases, so it is possible that extending one of them may achieve the goal
and save the boilerplate to create one.  (Unfortunately, there isn't a 100%
reliable way to find a related one out of hundreds of tests.  One approach is
using ``git grep``.)

Usually an iotest case consists of two files. One is an executable that
produces output to stdout and stderr, the other is the expected reference
output. They are given the same number in file names, e.g. test script ``055``
and reference output ``055.out``.

In rare cases, when outputs differ between cache mode ``none`` and others, a
``.out.nocache`` file is added. In other cases, when outputs differ between
image formats, more than one ``.out`` file is created, ending with the
respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.

There isn't a hard rule about how to write a test script, but a new test is
usually a (copy and) modification of an existing case.  There are a few
commonly used ways to create a test:

* A Bash script. It will make use of several environment variables related
  to the testing procedure, and could source a group of ``common.*`` libraries
  for some common helper routines.

* A Python unittest script. Import ``iotests`` and create a subclass of
  ``iotests.QMPTestCase``, then call the ``iotests.main`` method. The downside
  of this approach is that the output is quite sparse, and the script is
  considered harder to debug.

* A simple Python script without using the unittest module. This could also
  import ``iotests`` for launching QEMU and utilities etc., but it doesn't
  inherit from ``iotests.QMPTestCase`` and therefore doesn't use the Python
  unittest execution. This is a combination of the above two approaches.

Pick the language per your preference, since both Bash and Python have
comparable library support for invoking and interacting with QEMU programs. If
you opt for Python, it is strongly recommended to write Python 3 compatible
code.
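
As a sketch, the skeleton of a unittest-style script looks like the following.
Plain ``unittest.TestCase`` stands in for ``iotests.QMPTestCase`` so the
sketch is self-contained; the real in-tree calls are shown in comments:

```python
# Hedged skeleton of a unittest-style iotest.  Real scripts import the
# in-tree ``iotests`` module; plain unittest stands in here so the
# sketch runs on its own.
import unittest

class TestExample(unittest.TestCase):        # real tests: iotests.QMPTestCase
    def setUp(self):
        # Real tests would create a scratch image and launch a VM here,
        # e.g. qemu_img('create', '-f', iotests.imgfmt, test_img, '1G').
        self.image_size = 1 * 1024 * 1024 * 1024

    def test_image_size(self):
        self.assertEqual(self.image_size, 1 << 30)

if __name__ == '__main__':
    unittest.main()                          # real tests: iotests.main()
```
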

Both the Python and Bash frameworks in iotests provide helpers to manage test
images. They can be used to create and clean up images under the test
directory. If no I/O or protocol specific feature is needed, it is often
more convenient to use the pseudo block driver, ``null-co://``, as the test
image, which doesn't require image creation or cleaning up. Avoid system-wide
devices or files whenever possible, such as ``/dev/null`` or ``/dev/zero``.
Otherwise, image locking implications have to be considered.  For example,
another application on the host may have locked the file, possibly leading to a
test failure.  If using such devices is explicitly desired, consider adding
the ``locking=off`` option to disable image locking.

Test case groups
----------------

Tests may belong to one or more test groups, which are defined in the form
of a comment in the test source file. By convention, test groups are listed
on the second line of the test file, after the "#!/..." line, like this:

.. code::

  #!/usr/bin/env python3
  # group: auto quick
  #
  ...

Another way of defining groups is creating the
``tests/qemu-iotests/group.local`` file. This should be used only for
downstream purposes (the file should never appear upstream). It may be used
for defining some downstream test groups or for temporarily disabling tests,
like this:

.. code::

  # groups for some company downstream process
  #
  # ci - tests to run on build
  # down - our downstream tests, not for upstream
  #
  # Format of each line is:
  # TEST_NAME TEST_GROUP [TEST_GROUP ]...

  013 ci
  210 disabled
  215 disabled
  our-ugly-workaround-test down ci

Note that the following group names have a special meaning:

- quick: Tests in this group should finish within a few seconds.

- auto: Tests in this group are used during "make check" and should be
  runnable in any case. That means they should run with every QEMU binary
  (also non-x86), with every QEMU configuration (i.e. must not fail if
  an optional feature is not compiled in - but reporting a "skip" is ok),
  work at least with the qcow2 file format, work with all kinds of host
  filesystems and users (e.g. "nobody" or "root") and must not take too
  much memory and disk space (since CI pipelines tend to fail otherwise).

- disabled: Tests in this group are disabled and ignored by check.

.. _docker-ref:

Docker based tests
==================

Introduction
------------

The Docker testing framework in QEMU utilizes public Docker images to build and
test QEMU in predefined and widely accessible Linux environments.  This makes
it possible to expand the test coverage across distros, toolchain flavors and
library versions.

Prerequisites
-------------

Install "docker" with the system package manager and start the Docker service
on your development machine, then make sure you have the privilege to run
Docker commands. Typically this means setting up passwordless ``sudo docker``
or logging in as root. For example:

.. code::

  $ sudo yum install docker
  $ # or `apt-get install docker` for Ubuntu, etc.
  $ sudo systemctl start docker
  $ sudo docker ps

The last command should print an empty table, verifying the system is ready.

An alternative method to set up permissions is by adding the current user to
the "docker" group and making the docker daemon socket file (by default
``/var/run/docker.sock``) accessible to the group:

.. code::

  $ sudo groupadd docker
  $ sudo usermod $USER -a -G docker
  $ sudo chown :docker /var/run/docker.sock

Note that either of the above configurations makes it possible for the user to
exploit the whole host with Docker bind mounting or other privileged
operations.  So only do it on development machines.

Quickstart
----------

From the source tree, type ``make docker`` to see the help. Testing can be
started without configuring or building QEMU (``configure`` and ``make`` are
done in the container, with parameters defined by the make target):

.. code::

  make docker-test-build@min-glib

This will create a container instance using the ``min-glib`` image (the image
is downloaded and initialized automatically), in which the ``test-build`` job
is executed.

Images
------

Along with many other images, the ``min-glib`` image is defined in a Dockerfile
in ``tests/docker/dockerfiles/``, called ``min-glib.docker``. The ``make
docker`` command will list all the available images.

To add a new image, simply create a new ``.docker`` file under the
``tests/docker/dockerfiles/`` directory.

A ``.pre`` script can be added beside the ``.docker`` file, which will be
executed before building the image under the build context directory. This is
mainly used to do necessary host side setup. One such setup is ``binfmt_misc``
registration, for example, to make qemu-user powered cross build containers
work.

Tests
-----

Different tests are added to cover various configurations to build and test
QEMU.  Docker tests are the executables under ``tests/docker`` named
``test-*``. They are typically shell scripts and are built on top of a shell
library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
source and build it.

The full list of tests is printed in the ``make docker`` help.

Tools
-----

Tools are executables created to run in a specific Docker environment.
This makes it easy to write scripts that have heavy or special dependencies,
but are still very easy to use.

Currently the only tool is ``travis``, which mimics the Travis-CI tests in a
container. It runs in the ``travis`` image:

.. code::

  make docker-travis@travis

Debugging a Docker test failure
-------------------------------

When a CI task, a maintainer, or you yourself report a Docker test failure,
follow the steps below to debug it:

1. Locally reproduce the failure with the reported command line. E.g. run
   ``make docker-test-mingw@fedora J=8``.
2. Add ``V=1`` to the command line and try again, to see the verbose output.
3. Further add ``DEBUG=1`` to the command line. This will pause in a shell
   prompt in the container right before testing starts. You could either
   manually build QEMU and run tests from there, or press Ctrl-D to let the
   Docker testing continue.
4. If you press Ctrl-D, the same building and testing procedure will begin, and
   will hopefully run into the error again. After that, you will be dropped to
   the prompt to debug.

Options
-------

Various options can be used to affect how Docker tests are done. The full
list is in the ``make docker`` help text. The frequently used ones are:

* ``V=1``: the same as in top level ``make``. It will be propagated to the
  container and enable verbose output.
* ``J=$N``: the number of parallel tasks in make commands in the container,
  similar to the ``-j $N`` option in top level ``make``. (The ``-j`` option in
  top level ``make`` will not be propagated into the container.)
* ``DEBUG=1``: enables debug. See the previous "Debugging a Docker test
  failure" section.

Thread Sanitizer
================

Thread Sanitizer (TSan) is a tool which can detect data races.  QEMU supports
building and testing with this tool.

For more information on TSan:

https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual

Thread Sanitizer in Docker
--------------------------
TSan is currently supported in the ubuntu2004 docker image.

The ``test-tsan`` test will build using TSan and then run ``make check``.

.. code::

  make docker-test-tsan@ubuntu2004

TSan warnings under docker are placed in files located at ``build/tsan/``.

We recommend using ``DEBUG=1`` to allow launching the test from inside the
container, and to allow review of the warnings generated by TSan.

Building and Testing with TSan
------------------------------

It is possible to build and test with TSan, with a few additional steps.
These steps are normally done automatically in the docker.

At this time, a one-time patch is needed for clang-9 or clang-10:

.. code::

  sed -i 's/^const/static const/g' \
      /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h

To configure the build for TSan:

.. code::

  ../configure --enable-tsan --cc=clang-10 --cxx=clang++-10 \
               --disable-werror --extra-cflags="-O0"

The runtime behavior of TSan is controlled by the ``TSAN_OPTIONS`` environment
variable.

More information on ``TSAN_OPTIONS`` can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

For example:

.. code::

  export TSAN_OPTIONS=suppressions=<path to qemu>/tests/tsan/suppressions.tsan \
                      detect_deadlocks=false history_size=7 exitcode=0 \
                      log_path=<build path>/tsan/tsan_warning

The above ``exitcode=0`` has TSan continue without error if any warnings are
found. This allows for running the test and then checking the warnings
afterwards. If you want TSan to stop and exit with error on warnings, use
``exitcode=66``.

TSan Suppressions
-----------------
Keep in mind that for any data race warning, although there might be a data
race detected by TSan, there might be no actual bug.  TSan provides several
different mechanisms for suppressing warnings.  In general it is recommended
to fix the code if possible to eliminate the data race rather than suppress
the warning.

A few important files for suppressing warnings are:

``tests/tsan/suppressions.tsan`` - Has TSan warnings we wish to suppress at
runtime.  The comment on each suppression will typically indicate why we are
suppressing it.  More information on the file format can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions

``tests/tsan/blacklist.tsan`` - Has TSan warnings we wish to disable
at compile time for test or debug.  Add flags to configure to enable:

``--extra-cflags=-fsanitize-blacklist=<src path>/tests/tsan/blacklist.tsan``

More information on the file format can be found here under "Blacklist Format":

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

TSan Annotations
----------------
``include/qemu/tsan.h`` defines annotations.  See this file for more
descriptions of the annotations themselves.  Annotations can be used to
suppress TSan warnings or give TSan more information so that it can detect
proper relationships between accesses of data.

Annotation examples can be found here:

https://github.com/llvm/llvm-project/tree/master/compiler-rt/test/tsan/

Good files to start with are ``annotate_happens_before.cpp`` and
``ignore_race.cpp``.

The full set of annotations can be found here:

https://github.com/llvm/llvm-project/blob/master/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp

VM testing
==========

This test suite contains scripts that bootstrap various guest images that have
the necessary packages to build QEMU. The basic usage is documented in the
``Makefile`` help, which is displayed with ``make vm-help``.

Quickstart
----------

Run ``make vm-help`` to list available make targets. Invoke a specific make
command to run a build test in an image. For example, ``make vm-build-freebsd``
will build the source tree in the FreeBSD image. The command can be executed
from either the source tree or the build dir; if the former, ``./configure`` is
not needed. The command will then generate the test image in ``./tests/vm/``
under the working directory.

Note: images created by the scripts accept a well-known RSA key pair for SSH
access, so they SHOULD NOT be exposed to external interfaces if you are
concerned about attackers taking control of the guest and potentially
exploiting a QEMU security bug to compromise the host.

QEMU binaries
-------------

By default, ``qemu-system-x86_64`` is searched for in ``$PATH`` to run the
guest. If there isn't one, or if it is older than 2.10, the test won't work. In
this case, provide the QEMU binary in the ``QEMU`` environment variable:
``QEMU=/path/to/qemu-2.10+``.

Likewise, the path to qemu-img can be set in the ``QEMU_IMG`` environment
variable.

Make jobs
---------

The ``-j$X`` option in the make command line is not propagated into the VM;
specify ``J=$X`` to control the make jobs in the guest.

Debugging
---------

Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
debugging and verbose output. If this is not enough, see the next section.
``V=1`` will be propagated down into the make jobs in the guest.

Manual invocation
-----------------

Each guest script is an executable script with the same command line options.
For example, to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:

.. code::

    $ cd $QEMU_SRC/tests/vm

    # To bootstrap the image
    $ ./netbsd --build-image --image /var/tmp/netbsd.img
    <...>

    # To run an arbitrary command in guest (the output will not be echoed unless
    # --debug is added)
    $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a

    # To build QEMU in guest
    $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC

    # To get to an interactive shell
    $ ./netbsd --interactive --image /var/tmp/netbsd.img sh

Adding new guests
-----------------

Please look at existing guest scripts for how to add new guests.

Most importantly, create a subclass of BaseVM, implement the ``build_image()``
method and define ``BUILD_SCRIPT``, then finally call ``basevm.main()`` from
the script's ``main()``.

* Usually in ``build_image()``, a template image is downloaded from a
  predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache and
  the checksum, so consider using it.

* Once the image is downloaded, users, the SSH server and QEMU build deps
  should be set up:

  - Root password set to ``BaseVM.ROOT_PASS``
  - User ``BaseVM.GUEST_USER`` is created, and password set to
    ``BaseVM.GUEST_PASS``
  - SSH service is enabled and started on boot,
    ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
    file of both root and the normal user
  - DHCP client service is enabled and started on boot, so that it can
    automatically configure the virtio-net-pci NIC and communicate with QEMU
    user net (10.0.2.2)
  - Necessary packages are installed to untar the source tarball and build
    QEMU

* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script that
  untars the QEMU source tarball, exposed to the guest as a raw virtio-blk
  block device, then configures and builds it. Running "make check" is also
  recommended.

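Put together, a new guest script could be sketched as follows. ``BaseVM``
normally comes from ``tests/vm/basevm.py``; a minimal stand-in is defined here
so the sketch is self-contained, and the guest name, URL and device path are
hypothetical:

```python
# Hedged sketch of a new guest script.  BaseVM below is a minimal
# stand-in for basevm.BaseVM; guest name, URL and device path are
# hypothetical.

class BaseVM:                                  # stand-in for basevm.BaseVM
    GUEST_USER = "qemu"

    def _download_with_cache(self, url, sha256sum=None):
        # The real method downloads the file, caches it, and verifies the
        # checksum; this stand-in just returns a placeholder path.
        return "/var/tmp/cached-template.img"

class MyGuestVM(BaseVM):
    name = "myguest"
    BUILD_SCRIPT = """
        set -e
        cd $(mktemp -d)
        tar -xf /dev/vdb          # the source tarball, as a raw blk device
        ./configure {configure_opts}
        make -j{jobs}
        make check
    """

    def build_image(self, img):
        # 1. Fetch a template image (cached and checksum-verified).
        template = self._download_with_cache("https://example.org/img.xz")
        # 2. Boot it to set up users, sshd, DHCP and build dependencies,
        #    then install the result at ``img`` (omitted in this sketch).
        return template
```
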
Image fuzzer testing
====================

An image fuzzer was added to exercise format drivers. Currently only qcow2 is
supported. To start the fuzzer, run

.. code::

  tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2

Alternatively, a command other than "qemu-img info" can be tested by
changing the ``-c`` option.

Acceptance tests using the Avocado Framework
============================================

The ``tests/acceptance`` directory hosts functional tests, also known
as acceptance level tests.  They're usually higher level tests, and
may interact with external resources and with various guest operating
systems.

These tests are written using the Avocado Testing Framework (which must
be installed separately) in conjunction with the ``avocado_qemu.Test``
class, implemented at ``tests/acceptance/avocado_qemu``.

Tests based on ``avocado_qemu.Test`` can easily:

 * Customize the command line arguments given to the convenience
   ``self.vm`` attribute (a QEMUMachine instance)

 * Interact with the QEMU monitor, send QMP commands and check
   their results

 * Interact with the guest OS, using the convenience console device
   (which may be useful to assert the effectiveness and correctness of
   command line arguments or QMP commands)

 * Interact with external data files that accompany the test itself
   (see ``self.get_data()``)

 * Download (and cache) remote data files, such as firmware and kernel
   images

 * Have access to a library of guest OS images (by means of the
   ``avocado.utils.vmimage`` library)

 * Make use of various other test related utilities available at the
   test class itself and at the utility library:

   - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
   - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html

Running tests
-------------

You can run the acceptance tests simply by executing:

.. code::

  make check-acceptance

This involves the automatic creation of a Python virtual environment
within the build tree (at ``tests/venv``) which will have all the
right dependencies, and will save test results also within the
build tree (at ``tests/results``).

Note: the build environment must be using a Python 3 stack, and have
the ``venv`` and ``pip`` packages installed.  If necessary, make sure
``configure`` is called with ``--python=`` and that those modules are
available.  On Debian and Ubuntu based systems, depending on the
specific version, they may be in packages named ``python3-venv`` and
``python3-pip``.

The scripts installed inside the virtual environment may be used
without an "activation".  For instance, the Avocado test runner
may be invoked by running:

.. code::

  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/acceptance/

Manual Installation
-------------------

To manually install Avocado and its dependencies, run:

.. code::

  pip install --user avocado-framework

Alternatively, follow the instructions on this link:

  https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/installing.html

Overview
--------

The ``tests/acceptance/avocado_qemu`` directory provides the
``avocado_qemu`` Python module, containing the ``avocado_qemu.Test``
class.  Here's a simple usage example:

.. code::

  from avocado_qemu import Test


  class Version(Test):
      """
      :avocado: tags=quick
      """
      def test_qmp_human_info_version(self):
          self.vm.launch()
          res = self.vm.command('human-monitor-command',
                                command_line='info version')
          self.assertRegex(res, r'^(\d+\.\d+\.\d)')

To execute your test, run:

.. code::

  avocado run version.py

Tests may be classified according to a convention by using docstring
directives such as ``:avocado: tags=TAG1,TAG2``.  To run all tests
in the current directory tagged as "quick", run:

.. code::

  avocado run -t quick .

The ``avocado_qemu.Test`` base test class
-----------------------------------------

The ``avocado_qemu.Test`` class has a number of characteristics that
are worth mentioning right away.

First of all, it attempts to give each test a ready to use QEMUMachine
instance, available at ``self.vm``.  Because many tests will tweak the
QEMU command line, launching the QEMUMachine (by using ``self.vm.launch()``)
is left to the test writer.

The base test class also has support for tests with more than one
QEMUMachine. The way to get machines is through the ``self.get_vm()``
method, which will return a QEMUMachine instance. The ``self.get_vm()``
method accepts arguments that will be passed to the QEMUMachine creation,
and also an optional ``name`` attribute so you can identify a specific
machine and get it more than once through the test methods. A simple
and hypothetical example follows:

.. code::

  from avocado_qemu import Test


  class MultipleMachines(Test):
      """
      :avocado: enable
      """
      def test_multiple_machines(self):
          first_machine = self.get_vm()
          second_machine = self.get_vm()
          self.get_vm(name='third_machine').launch()

          first_machine.launch()
          second_machine.launch()

          first_res = first_machine.command(
              'human-monitor-command',
              command_line='info version')

          second_res = second_machine.command(
              'human-monitor-command',
              command_line='info version')

          third_res = self.get_vm(name='third_machine').command(
              'human-monitor-command',
              command_line='info version')

          self.assertEqual(first_res, second_res)
          self.assertEqual(first_res, third_res)

At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all
QEMUMachines.

QEMUMachine
~~~~~~~~~~~

The QEMUMachine API is already widely used in the Python iotests,
device-crash-test and other Python scripts.  It's a wrapper around the
execution of a QEMU binary, giving its users:

 * the ability to set command line arguments to be given to the QEMU
   binary

 * a ready to use QMP connection and interface, which can be used to
   send commands and inspect their results, as well as asynchronous
   events

 * convenience methods to set commonly used command line arguments in
   a more succinct and intuitive way

QEMU binary selection
~~~~~~~~~~~~~~~~~~~~~

The QEMU binary used for the ``self.vm`` QEMUMachine instance will
primarily depend on the value of the ``qemu_bin`` parameter.  If it's
not explicitly set, its default value will be the result of a dynamic
probe in the same source tree.  A suitable binary will be one that
targets the architecture matching the host machine.

Based on this description, test writers will usually rely on one of
the following approaches:

1) Set ``qemu_bin``, and use the given binary

2) Do not set ``qemu_bin``, and use a QEMU binary named like
   "qemu-system-${arch}", either in the current
   working directory, or in the current source tree.

The resulting ``qemu_bin`` value will be preserved in the
``avocado_qemu.Test`` as an attribute with the same name.

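The selection logic can be sketched roughly as follows. This is a simplified
illustration; the actual probe in the ``avocado_qemu`` module may differ, and
the function name is hypothetical:

```python
# Simplified sketch of the qemu_bin selection described above; the real
# probe in the avocado_qemu module may differ.
import os

def pick_qemu_bin(qemu_bin=None, src_root=".", arch=None):
    """Return an explicit qemu_bin, or probe for qemu-system-${arch}."""
    if qemu_bin:                       # 1) an explicit parameter wins
        return qemu_bin
    arch = arch or os.uname().machine  # default to the host architecture
    candidate = os.path.join(src_root, "qemu-system-%s" % arch)
    return candidate if os.path.exists(candidate) else None
```
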
834Attribute reference
835-------------------
836
837Besides the attributes and methods that are part of the base
838``avocado.Test`` class, the following attributes are available on any
839``avocado_qemu.Test`` instance.
840
841vm
842~~
843
844A QEMUMachine instance, initially configured according to the given
845``qemu_bin`` parameter.
846
847arch
848~~~~
849
The architecture can be used at different levels of the stack, e.g. by
851the framework or by the test itself.  At the framework level, it will
852currently influence the selection of a QEMU binary (when one is not
853explicitly given).
854
855Tests are also free to use this attribute value, for their own needs.
856A test may, for instance, use the same value when selecting the
857architecture of a kernel or disk image to boot a VM with.
858
859The ``arch`` attribute will be set to the test parameter of the same
860name.  If one is not given explicitly, it will either be set to
861``None``, or, if the test is tagged with one (and only one)
862``:avocado: tags=arch:VALUE`` tag, it will be set to ``VALUE``.
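
The "one and only one tag" rule can be sketched like this (a
simplified stand-in for the framework's actual tag handling):

.. code::

  def unique_tag_val(tags, key):
      """Return the single value of tags[key], or None."""
      vals = tags.get(key, set())
      if len(vals) == 1:
          return next(iter(vals))
      return None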
863
864machine
865~~~~~~~
866
The machine type that will be set on all QEMUMachine instances created
868by the test.
869
870The ``machine`` attribute will be set to the test parameter of the same
871name.  If one is not given explicitly, it will either be set to
872``None``, or, if the test is tagged with one (and only one)
873``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.
874
875qemu_bin
876~~~~~~~~
877
878The preserved value of the ``qemu_bin`` parameter or the result of the
879dynamic probe for a QEMU binary in the current working directory or
880source tree.
881
882Parameter reference
883-------------------
884
885To understand how Avocado parameters are accessed by tests, and how
886they can be passed to tests, please refer to::
887
888  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#accessing-test-parameters
889
890Parameter values can be easily seen in the log files, and will look
891like the following:
892
893.. code::
894
  PARAMS (key=qemu_bin, path=*, default=./qemu-system-x86_64) => './qemu-system-x86_64'
896
897arch
898~~~~
899
900The architecture that will influence the selection of a QEMU binary
901(when one is not explicitly given).
902
903Tests are also free to use this parameter value, for their own needs.
904A test may, for instance, use the same value when selecting the
905architecture of a kernel or disk image to boot a VM with.
906
907This parameter has a direct relation with the ``arch`` attribute.  If
not given, it will default to ``None``.
909
910machine
911~~~~~~~
912
The machine type that will be set on all QEMUMachine instances created
914by the test.
915
916
917qemu_bin
918~~~~~~~~
919
The exact QEMU binary to be used by QEMUMachine.
921
922Skipping tests
923--------------

The Avocado framework provides Python decorators which make it easy to
skip tests under certain conditions, for example when a required binary
is missing from the test system or when the running environment is a CI
system. For further information about those decorators, please refer
to::
928
929  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#skipping-tests
930
While the conditions for skipping tests are often specific to each test,
there are recurring scenarios identified by the QEMU developers, and the
use of environment variables has become a kind of standard way to
enable/disable tests.
934
935Here is a list of the most used variables:
936
937AVOCADO_ALLOW_LARGE_STORAGE
938~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tests which are going to fetch or produce assets considered *large* are
not going to run unless ``AVOCADO_ALLOW_LARGE_STORAGE=1`` is exported in
the environment.
942
943The definition of *large* is a bit arbitrary here, but it usually means an
asset which occupies at least 1GB on disk when uncompressed.
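
A common way to implement this gating is a ``skipUnless`` decorator
checking the variable (sketched here with the stdlib ``unittest``
decorator; Avocado provides equivalents with the same behavior):

.. code::

  import os
  from unittest import TestCase, skipUnless

  class LargeAssetTest(TestCase):
      @skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'),
                  'Test requires a lot of disk space')
      def test_boot_big_image(self):
          pass  # fetching and booting the large asset would go here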
945
946AVOCADO_ALLOW_UNTRUSTED_CODE
947~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are tests which will boot a kernel image or firmware that can be
considered not safe to run on the developer's workstation, thus they are
skipped by default. The definition of *not safe* is also arbitrary, but
it usually means a blob whose source or build process is not publicly
available.
953
You should export ``AVOCADO_ALLOW_UNTRUSTED_CODE=1`` in the environment
in order to allow tests which make use of those kinds of assets.
956
957AVOCADO_TIMEOUT_EXPECTED
958~~~~~~~~~~~~~~~~~~~~~~~~

The Avocado framework has a timeout mechanism which interrupts tests to
prevent the test suite from getting stuck. The timeout value can be set
via a test parameter or a property defined in the test class; for
further details, see::
962
963  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#setting-a-test-timeout
964
Even though the timeout can be set by the test developer, some tests may
not have a well-defined time limit under certain conditions, for example
tests that take longer to execute when QEMU is compiled with debug
flags. Therefore, the ``AVOCADO_TIMEOUT_EXPECTED`` variable is used to
determine whether those tests should run or not.
970
971GITLAB_CI
972~~~~~~~~~

A number of tests are flagged to not run on the GitLab CI, usually
because they have proved to be flaky or because there are constraints on
the CI environment which would make them fail. If you encounter a
similar situation, use that variable as shown in the code snippet below
to skip the test:
977
978.. code::
979
  import os

  from avocado import skipIf

  @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
981  def test(self):
982      do_something()
983
984Uninstalling Avocado
985--------------------
986
987If you've followed the manual installation instructions above, you can
988easily uninstall Avocado.  Start by listing the packages you have
989installed::
990
991  pip list --user
992
993And remove any package you want with::
994
995  pip uninstall <package_name>
996
997If you've used ``make check-acceptance``, the Python virtual environment where
998Avocado is installed will be cleaned up as part of ``make check-clean``.
999
1000Testing with "make check-tcg"
1001=============================
1002
The check-tcg tests are intended as simple smoke tests of both
linux-user and softmmu TCG functionality. However, to build test
programs for guest targets you need to have cross compilers available.
If your distribution supports cross compilers you can do something as
simple as::
1008
1009  apt install gcc-aarch64-linux-gnu
1010
The configure script will automatically pick up their presence.
Sometimes compilers have slightly odd names, so if one isn't detected
automatically you can specify it by passing the appropriate configure
option for the architecture in question, for example::
1015
1016  $(configure) --cross-cc-aarch64=aarch64-cc
1017
1018There is also a ``--cross-cc-flags-ARCH`` flag in case additional
1019compiler flags are needed to build for a given target.
1020
If you have the ability to run containers as the user, you can also
take advantage of the build system's "Docker" support. It will then use
containers to build any test case for an enabled guest where there is
no system compiler available. See :ref:`docker-ref` for details.
1025
1026Running subset of tests
1027-----------------------
1028
1029You can build the tests for one architecture::
1030
1031  make build-tcg-tests-$TARGET
1032
1033And run with::
1034
1035  make run-tcg-tests-$TARGET
1036
Adding ``V=1`` to the invocation will show the details of how QEMU is
invoked for each test, which is useful for debugging.
1039
1040TCG test dependencies
1041---------------------
1042
1043The TCG tests are deliberately very light on dependencies and are
1044either totally bare with minimal gcc lib support (for softmmu tests)
1045or just glibc (for linux-user tests). This is because getting a cross
1046compiler to work with additional libraries can be challenging.
1047
1048Other TCG Tests
1049---------------
1050
1051There are a number of out-of-tree test suites that are used for more
1052extensive testing of processor features.
1053
1054KVM Unit Tests
1055~~~~~~~~~~~~~~
1056
The KVM unit tests are designed to run as a guest OS under KVM, but
there is no reason why they can't exercise TCG as well. The suite
provides a minimal OS kernel with hooks for enabling the MMU as well
as reporting test results via a special device::
1061
1062  https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
1063
1064Linux Test Project
1065~~~~~~~~~~~~~~~~~~
1066
1067The LTP is focused on exercising the syscall interface of a Linux
1068kernel. It checks that syscalls behave as documented and strives to
1069exercise as many corner cases as possible. It is a useful test suite
1070to run to exercise QEMU's linux-user code::
1071
1072  https://linux-test-project.github.io/
1073