===============
Testing in QEMU
===============

This document describes the testing infrastructure in QEMU.

Testing with "make check"
=========================

The "make check" testing family includes most of the C based tests in QEMU. For
quick help, run ``make check-help`` from the source tree.

The usual way to run these tests is:

.. code::

  make check

which includes QAPI schema tests, unit tests, QTests and some iotests.
Different sub-types of "make check" tests will be explained below.

Before running tests, it is best to build the QEMU programs first. Some tests
expect the executables to exist and will fail with obscure messages if they
cannot find them.

Unit tests
----------

Unit tests, which can be invoked with ``make check-unit``, are simple C tests
that typically link to individual QEMU object files and exercise them by
calling exported functions.

If you are writing new code in QEMU, consider adding a unit test, especially
for utility modules that are relatively stateless or have few dependencies. To
add a new unit test:

1. Create a new source file. For example, ``tests/foo-test.c``.

2. Write the test. Normally you would include the header file which exports
   the module API, then verify the interface behaves as expected from your
   test. The test code should be organized with the glib testing framework.
   Copying and modifying an existing test is usually a good idea.

3. Add the test to ``tests/meson.build``. The unit tests are listed in a
   dictionary called ``tests``. The values are any additional sources and
   dependencies to be linked with the test. For a simple test whose source
   is in ``tests/foo-test.c``, it is enough to add an entry like::

     {
       ...
       'foo-test': [],
       ...
     }

Since unit tests don't require environment variables, the simplest way to debug
a unit test failure is often directly invoking it or even running it under
``gdb``.
However, there can still be differences in behavior between ``make``
invocations and your manual run, due to the ``$MALLOC_PERTURB_`` environment
variable (which affects memory reclamation and catches invalid pointers better)
and gtester options. If necessary, you can run

.. code::

  make check-unit V=1

and copy the actual command line which executes the unit test, then run
it from the command line.

QTest
-----

QTest is a device emulation testing framework. It can be very useful to test
device models; it can also control certain aspects of QEMU (such as virtual
clock stepping), with a special purpose "qtest" protocol. Refer to
:doc:`qtest` for more details.

QTest cases can be executed with

.. code::

  make check-qtest

QAPI schema tests
-----------------

The QAPI schema tests validate the QAPI parser used by QMP, by feeding
predefined input to the parser and comparing the result with the reference
output.

The input/output data is managed under the ``tests/qapi-schema`` directory.
Each test case includes four files that have a common base name:

* ``${casename}.json`` - the file contains the JSON input for feeding the
  parser
* ``${casename}.out`` - the file contains the expected stdout from the parser
* ``${casename}.err`` - the file contains the expected stderr from the parser
* ``${casename}.exit`` - the expected error code

Consider adding a new QAPI schema test when you are making a change to the QAPI
parser (either fixing a bug or extending/modifying the syntax). To do this:

1. Add four files for the new case as explained above. For example:

   ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``

2. Add the new test in ``tests/Makefile.include``. For example:

   ``qapi-schema += foo.json``

check-block
-----------

``make check-block`` runs a subset of the block layer iotests (the tests that
are in the "auto" group).
See the "QEMU iotests" section below for more information.

GCC gcov support
----------------

``gcov`` is a GCC tool to analyze testing coverage by
instrumenting the tested code. To use it, configure QEMU with the
``--enable-gcov`` option and build. Then run ``make check`` as usual.

If you want to gather coverage information on a single test, the ``make
clean-gcda`` target can be used to delete any existing coverage
information before running that test.

You can generate an HTML coverage report by executing ``make
coverage-html``, which will create
``meson-logs/coveragereport/index.html``.

Further analysis can be conducted by running the ``gcov`` command
directly on the various .gcda output files. Please read the ``gcov``
documentation for more information.

QEMU iotests
============

QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
framework widely used to test block layer related features. It is higher level
than "make check" tests, and 99% of the code is written in bash or Python
scripts. The testing success criterion is golden output comparison, and the
test files are named with numbers.

To run iotests, make sure QEMU is built successfully, then switch to the
``tests/qemu-iotests`` directory under the build directory, and run ``./check``
with the desired arguments from there.

By default, the "raw" format and the "file" protocol are used; all tests will
be executed, except the unsupported ones. You can override the format and
protocol with arguments:

.. code::

  # test with qcow2 format
  ./check -qcow2
  # or test a different protocol
  ./check -nbd

It's also possible to list test numbers explicitly:

.. code::

  # run selected cases with qcow2 format
  ./check -qcow2 001 030 153

Cache mode can be selected with the "-c" option, which may help reveal bugs
that are specific to a certain cache mode.

More options are supported by the ``./check`` script; run ``./check -h`` for
help.

Writing a new test case
-----------------------

Consider writing a test case when you are making any changes to the block
layer. An iotest case is usually the right choice for that. There are already
many test cases, so it is possible that extending one of them may achieve the
goal and save the boilerplate of creating a new one. (Unfortunately, there
isn't a 100% reliable way to find a related one out of hundreds of tests. One
approach is using ``git grep``.)

Usually an iotest case consists of two files. One is an executable that
produces output to stdout and stderr, the other is the expected reference
output. They are given the same number in their file names, e.g. test script
``055`` and reference output ``055.out``.

In rare cases, when outputs differ between cache mode ``none`` and others, a
``.out.nocache`` file is added. In other cases, when outputs differ between
image formats, more than one ``.out`` file is created, ending with the
respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.

There isn't a hard rule about how to write a test script, but a new test is
usually a (copy and) modification of an existing case. There are a few
commonly used ways to create a test:

* A Bash script. It will make use of several environment variables related
  to the testing procedure, and could source a group of ``common.*`` libraries
  for some common helper routines.

* A Python unittest script. Import ``iotests`` and create a subclass of
  ``iotests.QMPTestCase``, then call the ``iotests.main`` method.
  The downside of this approach is that the output is too scarce, and the
  script is considered harder to debug.

* A simple Python script without using the unittest module. This could also
  import ``iotests`` for launching QEMU and utilities etc., but it doesn't
  inherit from ``iotests.QMPTestCase`` and therefore doesn't use the Python
  unittest execution. This is a combination of the first two approaches.

Pick the language per your preference, since both Bash and Python have
comparable library support for invoking and interacting with QEMU programs. If
you opt for Python, it is strongly recommended to write Python 3 compatible
code.

Both the Python and Bash frameworks in iotests provide helpers to manage test
images. They can be used to create and clean up images under the test
directory. If no I/O or any protocol specific feature is needed, it is often
more convenient to use the pseudo block driver, ``null-co://``, as the test
image, which doesn't require image creation or cleaning up. Avoid system-wide
devices or files whenever possible, such as ``/dev/null`` or ``/dev/zero``.
Otherwise, image locking implications have to be considered. For example,
another application on the host may have locked the file, possibly leading to a
test failure. If using such devices is explicitly desired, consider adding
the ``locking=off`` option to disable image locking.

Test case groups
----------------

Tests may belong to one or more test groups, which are defined in the form
of a comment in the test source file. By convention, test groups are listed
in the second line of the test file, after the "#!/..." line, like this:

.. code::

  #!/usr/bin/env python3
  # group: auto quick
  #
  ...

Another way of defining groups is creating the
``tests/qemu-iotests/group.local`` file. This should be used only for
downstream (this file should never appear in upstream).
This file may be used for defining some downstream test groups
or for temporarily disabling tests, like this:

.. code::

  # groups for some company downstream process
  #
  # ci - tests to run on build
  # down - our downstream tests, not for upstream
  #
  # Format of each line is:
  # TEST_NAME TEST_GROUP [TEST_GROUP ]...

  013 ci
  210 disabled
  215 disabled
  our-ugly-workaround-test down ci

Note that the following group names have a special meaning:

- quick: Tests in this group should finish within a few seconds.

- auto: Tests in this group are used during "make check" and should be
  runnable in any case. That means they should run with every QEMU binary
  (also non-x86), with every QEMU configuration (i.e. must not fail if
  an optional feature is not compiled in - but reporting a "skip" is ok),
  work at least with the qcow2 file format, work with all kinds of host
  filesystems and users (e.g. "nobody" or "root") and must not take too
  much memory and disk space (since CI pipelines tend to fail otherwise).

- disabled: Tests in this group are disabled and ignored by check.

.. _docker-ref:

Docker based tests
==================

Introduction
------------

The Docker testing framework in QEMU utilizes public Docker images to build and
test QEMU in predefined and widely accessible Linux environments. This makes
it possible to expand the test coverage across distros, toolchain flavors and
library versions.

Prerequisites
-------------

Install "docker" with the system package manager and start the Docker service
on your development machine, then make sure you have the privilege to run
Docker commands. Typically it means setting up a passwordless ``sudo docker``
command, or logging in as root. For example:

.. code::

  $ sudo yum install docker
  $ # or `apt-get install docker` for Ubuntu, etc.
  $ sudo systemctl start docker
  $ sudo docker ps

The last command should print an empty table, verifying that the system is
ready.

An alternative method to set up permissions is by adding the current user to
the "docker" group and making the docker daemon socket file (by default
``/var/run/docker.sock``) accessible to the group:

.. code::

  $ sudo groupadd docker
  $ sudo usermod $USER -a -G docker
  $ sudo chown :docker /var/run/docker.sock

Note that any one of the above configurations makes it possible for the user to
exploit the whole host with Docker bind mounting or other privileged
operations. So only do it on development machines.

Quickstart
----------

From the source tree, type ``make docker`` to see the help. Testing can be
started without configuring or building QEMU (``configure`` and ``make`` are
done in the container, with parameters defined by the make target):

.. code::

  make docker-test-build@min-glib

This will create a container instance using the ``min-glib`` image (the image
is downloaded and initialized automatically), in which the ``test-build`` job
is executed.

Images
------

Along with many other images, the ``min-glib`` image is defined in a Dockerfile
in ``tests/docker/dockerfiles/``, called ``min-glib.docker``. The ``make
docker`` command will list all the available images.

To add a new image, simply create a new ``.docker`` file under the
``tests/docker/dockerfiles/`` directory.

A ``.pre`` script can be added beside the ``.docker`` file, which will be
executed before building the image under the build context directory. This is
mainly used to do necessary host side setup. One such setup is ``binfmt_misc``,
for example, to make qemu-user powered cross build containers work.

Tests
-----

Different tests are added to cover various configurations to build and test
QEMU.
Docker tests are the executables under ``tests/docker`` named
``test-*``. They are typically shell scripts and are built on top of a shell
library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
source and build it.

The full list of tests is printed in the ``make docker`` help.

Debugging a Docker test failure
-------------------------------

When CI tasks, maintainers or you yourself report a Docker test failure, follow
the steps below to debug it:

1. Locally reproduce the failure with the reported command line. E.g. run
   ``make docker-test-mingw@fedora J=8``.
2. Add "V=1" to the command line, try again, to see the verbose output.
3. Further add "DEBUG=1" to the command line. This will pause in a shell prompt
   in the container right before testing starts. You could either manually
   build QEMU and run tests from there, or press Ctrl-D to let the Docker
   testing continue.
4. If you press Ctrl-D, the same building and testing procedure will begin, and
   will hopefully run into the error again. After that, you will be dropped to
   the prompt for debugging.

Options
-------

Various options can be used to affect how Docker tests are done. The full
list is in the ``make docker`` help text. The frequently used ones are:

* ``V=1``: the same as in top level ``make``. It will be propagated to the
  container and enable verbose output.
* ``J=$N``: the number of parallel tasks in make commands in the container,
  similar to the ``-j $N`` option in top level ``make``. (The ``-j`` option in
  top level ``make`` will not be propagated into the container.)
* ``DEBUG=1``: enables debugging. See the previous "Debugging a Docker test
  failure" section.

Thread Sanitizer
================

Thread Sanitizer (TSan) is a tool which can detect data races. QEMU supports
building and testing with this tool.
For more information on TSan:

https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual

Thread Sanitizer in Docker
--------------------------

TSan is currently supported in the ubuntu2004 docker.

The test-tsan test will build using TSan and then run make check.

.. code::

  make docker-test-tsan@ubuntu2004

TSan warnings under docker are placed in files located at build/tsan/.

We recommend using DEBUG=1 to allow launching the test from inside the docker,
and to allow review of the warnings generated by TSan.

Building and Testing with TSan
------------------------------

It is possible to build and test with TSan, with a few additional steps.
These steps are normally done automatically in the docker.

There is a one-time patch needed in clang-9 or clang-10 at this time:

.. code::

  sed -i 's/^const/static const/g' \
      /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h

To configure the build for TSan:

.. code::

  ../configure --enable-tsan --cc=clang-10 --cxx=clang++-10 \
               --disable-werror --extra-cflags="-O0"

The runtime behavior of TSan is controlled by the TSAN_OPTIONS environment
variable.

More information on TSAN_OPTIONS can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

For example:

.. code::

  export TSAN_OPTIONS=suppressions=<path to qemu>/tests/tsan/suppressions.tsan \
                      detect_deadlocks=false history_size=7 exitcode=0 \
                      log_path=<build path>/tsan/tsan_warning

The above exitcode=0 has TSan continue without error if any warnings are found.
This allows for running the test and then checking the warnings afterwards.
If you want TSan to stop and exit with an error on warnings, use exitcode=66.
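With ``log_path`` set as above, TSan writes its reports to ``tsan_warning``
files suffixed with the reporting process's pid. A quick way to triage a test
run is to count the reports per file. The helper below is an illustrative
sketch, not part of the QEMU tree; it assumes only that each report begins
with TSan's usual ``WARNING: ThreadSanitizer:`` header line:

.. code:: python

  #!/usr/bin/env python3
  # Illustrative helper: summarize TSan reports written via
  # log_path=<build path>/tsan/tsan_warning. Assumes each report starts
  # with a "WARNING: ThreadSanitizer:" header line.
  import glob
  import sys

  def summarize(pattern):
      """Return {filename: number of TSan reports} for matching log files."""
      counts = {}
      for path in glob.glob(pattern):
          with open(path) as f:
              counts[path] = sum(
                  1 for line in f
                  if line.startswith('WARNING: ThreadSanitizer:'))
      return counts

  if __name__ == '__main__':
      for pattern in sys.argv[1:]:
          for path, n in sorted(summarize(pattern).items()):
              print(f'{path}: {n} warning(s)')

It can be run as ``python3 summarize_tsan.py 'build/tsan/tsan_warning*'``
(quoting the glob so the shell does not expand it).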
TSan Suppressions
-----------------

Keep in mind that for any data race warning, although there might be a data
race detected by TSan, there might be no actual bug there. TSan provides
several different mechanisms for suppressing warnings. In general it is
recommended to fix the code, if possible, to eliminate the data race rather
than suppress the warning.

A few important files for suppressing warnings are:

tests/tsan/suppressions.tsan - Has TSan warnings we wish to suppress at
runtime. The comment on each suppression will typically indicate why we are
suppressing it. More information on the file format can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions

tests/tsan/blacklist.tsan - Has TSan warnings we wish to disable
at compile time for test or debug. Add flags to configure to enable it:

"--extra-cflags=-fsanitize-blacklist=<src path>/tests/tsan/blacklist.tsan"

More information on the file format can be found here under "Blacklist Format":

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

TSan Annotations
----------------

include/qemu/tsan.h defines annotations. See this file for more descriptions
of the annotations themselves. Annotations can be used to suppress
TSan warnings or to give TSan more information so that it can detect proper
relationships between accesses of data.

Annotation examples can be found here:

https://github.com/llvm/llvm-project/tree/master/compiler-rt/test/tsan/

Good files to start with are: annotate_happens_before.cpp and ignore_race.cpp

The full set of annotations can be found here:

https://github.com/llvm/llvm-project/blob/master/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp

VM testing
==========

This test suite contains scripts that bootstrap various guest images that have
the necessary packages to build QEMU.
The basic usage is documented in the ``Makefile``
help, which is displayed with ``make vm-help``.

Quickstart
----------

Run ``make vm-help`` to list available make targets. Invoke a specific make
command to run a build test in an image. For example, ``make vm-build-freebsd``
will build the source tree in the FreeBSD image. The command can be executed
from either the source tree or the build dir; if the former, ``./configure`` is
not needed. The command will then generate the test image in ``./tests/vm/``
under the working directory.

Note: images created by the scripts accept a well-known RSA key pair for SSH
access, so they SHOULD NOT be exposed to external interfaces if you are
concerned about attackers taking control of the guest and potentially
exploiting a QEMU security bug to compromise the host.

QEMU binaries
-------------

By default, qemu-system-x86_64 is searched for in $PATH to run the guest. If
there isn't one, or if it is older than 2.10, the test won't work. In this
case, provide the QEMU binary in an env var: ``QEMU=/path/to/qemu-2.10+``.

Likewise, the path to qemu-img can be set in the QEMU_IMG environment variable.

Make jobs
---------

The ``-j$X`` option in the make command line is not propagated into the VM;
specify ``J=$X`` to control the make jobs in the guest.

Debugging
---------

Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
debugging and verbose output. If this is not enough, see the next section.
``V=1`` will be propagated down into the make jobs in the guest.

Manual invocation
-----------------

Each guest script is an executable script with the same command line options.
For example, to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:

.. code::

  $ cd $QEMU_SRC/tests/vm

  # To bootstrap the image
  $ ./netbsd --build-image --image /var/tmp/netbsd.img
  <...>

  # To run an arbitrary command in the guest (the output will not be echoed
  # unless --debug is added)
  $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a

  # To build QEMU in the guest
  $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC

  # To get to an interactive shell
  $ ./netbsd --interactive --image /var/tmp/netbsd.img sh

Adding new guests
-----------------

Please look at existing guest scripts for how to add new guests.

Most importantly, create a subclass of BaseVM, implement the ``build_image()``
method and define ``BUILD_SCRIPT``, then finally call ``basevm.main()`` from
the script's ``main()``.

* Usually in ``build_image()``, a template image is downloaded from a
  predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache and
  the checksum, so consider using it.

* Once the image is downloaded, users, the SSH server and QEMU build deps
  should be set up:

  - Root password set to ``BaseVM.ROOT_PASS``
  - User ``BaseVM.GUEST_USER`` is created, and password set to
    ``BaseVM.GUEST_PASS``
  - SSH service is enabled and started on boot,
    ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
    file of both root and the normal user
  - DHCP client service is enabled and started on boot, so that it can
    automatically configure the virtio-net-pci NIC and communicate with QEMU
    user net (10.0.2.2)
  - Necessary packages are installed to untar the source tarball and build
    QEMU

* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script that
  untars a raw virtio-blk block device, which is the tarball data blob of the
  QEMU source tree, then configure/build it. Running "make check" is also
  recommended.
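The steps above can be sketched as a skeleton guest script. This is a
structural illustration only: the real ``BaseVM`` lives in
``tests/vm/basevm.py``, and it is replaced here by a minimal stand-in so the
sketch is self-contained; the guest name, image URL, checksum and
``BUILD_SCRIPT`` placeholders are all hypothetical:

.. code:: python

  #!/usr/bin/env python3
  # Skeleton of a hypothetical guest script. The real base class is BaseVM
  # from tests/vm/basevm.py; a minimal stand-in is defined here so that the
  # sketch is self-contained.

  class BaseVM:                      # stand-in for tests/vm/basevm.BaseVM
      name = None
      arch = None

      def _download_with_cache(self, url, sha256sum=None):
          raise NotImplementedError  # the real helper caches and checksums

  class FoobsdVM(BaseVM):
      """Hypothetical "foobsd" guest."""
      name = 'foobsd'
      arch = 'x86_64'

      # Shell template run in the guest: untar the source tarball that QEMU
      # exposes as a raw virtio-blk device, then configure, build and test.
      # The {...} placeholders are illustrative.
      BUILD_SCRIPT = """
          set -e
          cd $(mktemp -d /var/tmp/qemu-test.XXXXXX)
          tar -xf /dev/vtbd1
          ./configure {configure_opts}
          gmake --output-sync -j{jobs} {verbose}
          gmake --output-sync -j{jobs} check {verbose}
      """

      def build_image(self, img):
          # Download a template image (URL and checksum are placeholders),
          # then set up users, sshd and build dependencies in it.
          cimg = self._download_with_cache(
              'https://example.org/images/foobsd.img.xz',
              sha256sum='0' * 64)
          ...

  # The real script would end with: sys.exit(basevm.main(FoobsdVM))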
Image fuzzer testing
====================

An image fuzzer was added to exercise format drivers. Currently only qcow2 is
supported. To start the fuzzer, run

.. code::

  tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2

Alternatively, some command different from "qemu-img info" can be tested, by
changing the ``-c`` option.

Acceptance tests using the Avocado Framework
============================================

The ``tests/acceptance`` directory hosts functional tests, also known
as acceptance level tests. They're usually higher level tests, and
may interact with external resources and with various guest operating
systems.

These tests are written using the Avocado Testing Framework (which must
be installed separately) in conjunction with the ``avocado_qemu.Test``
class, implemented at ``tests/acceptance/avocado_qemu``.

Tests based on ``avocado_qemu.Test`` can easily:

 * Customize the command line arguments given to the convenience
   ``self.vm`` attribute (a QEMUMachine instance)

 * Interact with the QEMU monitor, send QMP commands and check
   their results

 * Interact with the guest OS, using the convenience console device
   (which may be useful to assert the effectiveness and correctness of
   command line arguments or QMP commands)

 * Interact with external data files that accompany the test itself
   (see ``self.get_data()``)

 * Download (and cache) remote data files, such as firmware and kernel
   images

 * Have access to a library of guest OS images (by means of the
   ``avocado.utils.vmimage`` library)

 * Make use of various other test related utilities available at the
   test class itself and at the utility library:

   - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
   - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html

Running tests
-------------

You can run the acceptance tests simply by executing:

.. code::

  make check-acceptance

This involves the automatic creation of a Python virtual environment
within the build tree (at ``tests/venv``) which will have all the
right dependencies, and will also save test results within the
build tree (at ``tests/results``).

Note: the build environment must be using a Python 3 stack, and have
the ``venv`` and ``pip`` packages installed. If necessary, make sure
``configure`` is called with ``--python=`` and that those modules are
available. On Debian and Ubuntu based systems, depending on the
specific version, they may be in packages named ``python3-venv`` and
``python3-pip``.

The scripts installed inside the virtual environment may be used
without an "activation". For instance, the Avocado test runner
may be invoked by running:

.. code::

  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/acceptance/

Manual Installation
-------------------

To manually install Avocado and its dependencies, run:

.. code::

  pip install --user avocado-framework

Alternatively, follow the instructions on this link:

https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/installing.html

Overview
--------

The ``tests/acceptance/avocado_qemu`` directory provides the
``avocado_qemu`` Python module, containing the ``avocado_qemu.Test``
class. Here's a simple usage example:

.. code::

  from avocado_qemu import Test


  class Version(Test):
      """
      :avocado: tags=quick
      """
      def test_qmp_human_info_version(self):
          self.vm.launch()
          res = self.vm.command('human-monitor-command',
                                command_line='info version')
          self.assertRegex(res, r'^(\d+\.\d+\.\d)')

To execute your test, run:

.. code::

  avocado run version.py

Tests may be classified according to a convention by using docstring
directives such as ``:avocado: tags=TAG1,TAG2``. To run all tests
in the current directory, tagged as "quick", run:

.. code::

  avocado run -t quick .

The ``avocado_qemu.Test`` base test class
-----------------------------------------

The ``avocado_qemu.Test`` class has a number of characteristics that
are worth mentioning right away.

First of all, it attempts to give each test a ready to use QEMUMachine
instance, available at ``self.vm``. Because many tests will tweak the
QEMU command line, launching the QEMUMachine (by using ``self.vm.launch()``)
is left to the test writer.

The base test class also has support for tests with more than one
QEMUMachine. The way to get machines is through the ``self.get_vm()``
method, which will return a QEMUMachine instance. The ``self.get_vm()``
method accepts arguments that will be passed to the QEMUMachine creation,
and also an optional ``name`` attribute, so you can identify a specific
machine and get it more than once through the test's methods. A simple
and hypothetical example follows:

.. code::

  from avocado_qemu import Test


  class MultipleMachines(Test):
      def test_multiple_machines(self):
          first_machine = self.get_vm()
          second_machine = self.get_vm()
          self.get_vm(name='third_machine').launch()

          first_machine.launch()
          second_machine.launch()

          first_res = first_machine.command(
              'human-monitor-command',
              command_line='info version')

          second_res = second_machine.command(
              'human-monitor-command',
              command_line='info version')

          third_res = self.get_vm(name='third_machine').command(
              'human-monitor-command',
              command_line='info version')

          self.assertEqual(first_res, second_res)
          self.assertEqual(first_res, third_res)

At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all
QEMUMachines.

QEMUMachine
~~~~~~~~~~~

The QEMUMachine API is already widely used in the Python iotests,
device-crash-test and other Python scripts. It's a wrapper around the
execution of a QEMU binary, giving its users:

 * the ability to set command line arguments to be given to the QEMU
   binary

 * a ready to use QMP connection and interface, which can be used to
   send commands and inspect its results, as well as asynchronous
   events

 * convenience methods to set commonly used command line arguments in
   a more succinct and intuitive way

QEMU binary selection
~~~~~~~~~~~~~~~~~~~~~

The QEMU binary used for the ``self.vm`` QEMUMachine instance will
primarily depend on the value of the ``qemu_bin`` parameter. If it's
not explicitly set, its default value will be the result of a dynamic
probe in the same source tree. A suitable binary will be one that
targets the architecture matching the host machine.
Based on this description, test writers will usually rely on one of
the following approaches:

1) Set ``qemu_bin``, and use the given binary

2) Do not set ``qemu_bin``, and use a QEMU binary named like
   "qemu-system-${arch}", either in the current
   working directory, or in the current source tree.

The resulting ``qemu_bin`` value will be preserved in the
``avocado_qemu.Test`` as an attribute with the same name.

Attribute reference
-------------------

Besides the attributes and methods that are part of the base
``avocado.Test`` class, the following attributes are available on any
``avocado_qemu.Test`` instance.

vm
~~

A QEMUMachine instance, initially configured according to the given
``qemu_bin`` parameter.

arch
~~~~

The architecture can be used on different levels of the stack, e.g. by
the framework or by the test itself. At the framework level, it will
currently influence the selection of a QEMU binary (when one is not
explicitly given).

Tests are also free to use this attribute value, for their own needs.
A test may, for instance, use the same value when selecting the
architecture of a kernel or disk image to boot a VM with.

The ``arch`` attribute will be set to the test parameter of the same
name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=arch:VALUE`` tag, it will be set to ``VALUE``.

machine
~~~~~~~

The machine type that will be set to all QEMUMachine instances created
by the test.

The ``machine`` attribute will be set to the test parameter of the same
name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.
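The tag fallback described for ``arch`` and ``machine`` can be sketched as a
small helper. This is an illustration of the rule only, not the framework's
actual implementation; the function name and the docstring scanning are
assumptions:

.. code:: python

  import re

  def tag_fallback(docstring, key):
      """Illustrative sketch: return VALUE if the docstring carries exactly
      one ':avocado: tags=key:VALUE' tag, otherwise None (mirroring how the
      attribute falls back to None when zero or several tags match)."""
      values = []
      for line in (docstring or '').splitlines():
          m = re.search(r':avocado:\s+tags=(\S+)', line)
          if not m:
              continue
          for tag in m.group(1).split(','):
              if tag.startswith(key + ':'):
                  values.append(tag.split(':', 1)[1])
      return values[0] if len(values) == 1 else None

For example, ``tag_fallback(':avocado: tags=arch:aarch64', 'arch')`` yields
``'aarch64'``, while a docstring with no ``arch:`` tag, or with two
conflicting ones, yields ``None``.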

qemu_bin
~~~~~~~~

The preserved value of the ``qemu_bin`` parameter or the result of the
dynamic probe for a QEMU binary in the current working directory or
source tree.

Parameter reference
-------------------

To understand how Avocado parameters are accessed by tests, and how
they can be passed to tests, please refer to::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#accessing-test-parameters

Parameter values can easily be seen in the log files, and will look
like the following:

.. code::

  PARAMS (key=qemu_bin, path=*, default=./qemu-system-x86_64) => './qemu-system-x86_64'

arch
~~~~

The architecture that will influence the selection of a QEMU binary
(when one is not explicitly given).

Tests are also free to use this parameter value for their own needs.
A test may, for instance, use the same value when selecting the
architecture of a kernel or disk image to boot a VM with.

This parameter has a direct relation with the ``arch`` attribute. If
not given, it will default to ``None``.

machine
~~~~~~~

The machine type that will be set on all QEMUMachine instances created
by the test.

qemu_bin
~~~~~~~~

The exact QEMU binary to be used by QEMUMachine.

Skipping tests
--------------

The Avocado framework provides Python decorators which allow tests to be
easily skipped when running under certain conditions, for example, when a
required binary is missing from the test system or when the running
environment is a CI system.
For further
information about those decorators, please refer to::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#skipping-tests

While the conditions for skipping tests are often specific to each test, there
are recurring scenarios identified by the QEMU developers, and the use of
environment variables has become a standard way to enable/disable tests.

Here is a list of the most used variables:

AVOCADO_ALLOW_LARGE_STORAGE
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tests which are going to fetch or produce assets considered *large* are not
going to run unless ``AVOCADO_ALLOW_LARGE_STORAGE=1`` is exported in
the environment.

The definition of *large* is a bit arbitrary here, but it usually means an
asset which occupies at least 1GB on disk when uncompressed.

AVOCADO_ALLOW_UNTRUSTED_CODE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are tests which will boot a kernel image or firmware that can be
considered not safe to run on the developer's workstation, thus they are
skipped by default. The definition of *not safe* is also arbitrary, but it
usually means a blob whose source or build process is not publicly
available.

You should export ``AVOCADO_ALLOW_UNTRUSTED_CODE=1`` in the environment in
order to allow tests which make use of those kinds of assets.

AVOCADO_TIMEOUT_EXPECTED
~~~~~~~~~~~~~~~~~~~~~~~~
The Avocado framework has a timeout mechanism which interrupts tests to keep
the test suite from getting stuck. The timeout value can be set via a test
parameter or a property defined in the test class; for further details, see::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#setting-a-test-timeout

Even though the timeout can be set by the test developer, there are some tests
that may not have a well-defined time limit to finish under certain
conditions.
For example, tests may take longer to execute when QEMU is
compiled with debug flags. Therefore, the ``AVOCADO_TIMEOUT_EXPECTED`` variable
has been used to determine whether those tests should run or not.

GITLAB_CI
~~~~~~~~~
A number of tests are flagged to not run on the GitLab CI, usually because
they proved to be flaky, or because there are constraints in the CI
environment which would make them fail. If you encounter a similar situation,
then use that variable as shown in the code snippet below to skip the test:

.. code::

  @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
  def test(self):
      do_something()

Uninstalling Avocado
--------------------

If you've followed the manual installation instructions above, you can
easily uninstall Avocado. Start by listing the packages you have
installed::

  pip list --user

And remove any package you want with::

  pip uninstall <package_name>

If you've used ``make check-acceptance``, the Python virtual environment where
Avocado is installed will be cleaned up as part of ``make check-clean``.

Testing with "make check-tcg"
=============================

The check-tcg tests are intended for simple smoke tests of both
linux-user and softmmu TCG functionality. However, to build test
programs for guest targets you need to have cross compilers available.
If your distribution supports cross compilers you can do something as
simple as::

  apt install gcc-aarch64-linux-gnu

The configure script will automatically pick up their presence.
Sometimes compilers have slightly odd names, so you may need to point
the build at them explicitly by passing the appropriate configure
option for the architecture in question, for example::

  $(configure) --cross-cc-aarch64=aarch64-cc

There is also a ``--cross-cc-flags-ARCH`` option in case additional
compiler flags are needed to build for a given target.
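A presence probe like the one configure performs for cross compilers can be approximated in a few lines. This is a hedged sketch only: the Debian-style triplet naming is an assumption for illustration, and configure's real detection logic lives in the build scripts.

```python
import shutil


def find_cross_cc(arch):
    """Sketch (assumes Debian-style triplet naming, for illustration
    only): return the path of a conventionally named cross compiler
    if one is found on $PATH, otherwise None."""
    return shutil.which(f'{arch}-linux-gnu-gcc')
```

If the compiler uses a non-standard name, such a probe would come up empty, which is exactly the case the ``--cross-cc-ARCH`` configure options address.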

If you have the ability to run containers as the user, you can also
take advantage of the build system's "Docker" support. It will then use
containers to build any test case for an enabled guest where there is
no system compiler available. See :ref:`docker-ref` for details.

Running a subset of tests
-------------------------

You can build the tests for one architecture::

  make build-tcg-tests-$TARGET

And run them with::

  make run-tcg-tests-$TARGET

Adding ``V=1`` to the invocation will show the details of how to
invoke QEMU for the test, which is useful for debugging tests.

TCG test dependencies
---------------------

The TCG tests are deliberately very light on dependencies and are
either totally bare with minimal gcc lib support (for softmmu tests)
or just glibc (for linux-user tests). This is because getting a cross
compiler to work with additional libraries can be challenging.

Other TCG Tests
---------------

There are a number of out-of-tree test suites that are used for more
extensive testing of processor features.

KVM Unit Tests
~~~~~~~~~~~~~~

The KVM unit tests are designed to run as a guest OS under KVM, but
there is no reason why they can't exercise TCG as well. The suite
provides a minimal OS kernel with hooks for enabling the MMU as well
as reporting test results via a special device::

  https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git

Linux Test Project
~~~~~~~~~~~~~~~~~~

The LTP is focused on exercising the syscall interface of a Linux
kernel. It checks that syscalls behave as documented and strives to
exercise as many corner cases as possible. It is a useful test suite
to run to exercise QEMU's linux-user code::

  https://linux-test-project.github.io/