===============
Testing in QEMU
===============

This document describes the testing infrastructure in QEMU.

Testing with "make check"
=========================

The "make check" testing family includes most of the C based tests in QEMU. For
a quick overview, run ``make check-help`` from the source tree.

The usual way to run these tests is:

.. code::

  make check

which includes QAPI schema tests, unit tests, QTests and some iotests.
The different sub-types of "make check" tests are explained below.

Before running tests, it is best to build the QEMU programs first. Some tests
expect the executables to exist and will fail with obscure messages if they
cannot find them.

Unit tests
----------

Unit tests, which can be invoked with ``make check-unit``, are simple C tests
that typically link to individual QEMU object files and exercise them by
calling exported functions.

If you are writing new code in QEMU, consider adding a unit test, especially
for utility modules that are relatively stateless or have few dependencies. To
add a new unit test:

1. Create a new source file. For example, ``tests/unit/foo-test.c``.

2. Write the test. Normally you would include the header file which exports
   the module API, then verify the interface behaves as expected from your
   test. The test code should be organized with the glib testing framework.
   Copying and modifying an existing test is usually a good idea.

3. Add the test to ``tests/unit/meson.build``. The unit tests are listed in a
   dictionary called ``tests``. The values are any additional sources and
   dependencies to be linked with the test. For a simple test whose source
   is in ``tests/unit/foo-test.c``, it is enough to add an entry like::

     {
       ...
       'foo-test': [],
       ...
     }

Since unit tests don't require environment variables, the simplest way to debug
a unit test failure is often to invoke it directly, or even to run it under
``gdb``. However, there can still be differences in behavior between ``make``
invocations and your manual run, due to the ``$MALLOC_PERTURB_`` environment
variable (which affects memory reclamation and catches invalid pointers better)
and gtester options. If necessary, you can run

.. code::

  make check-unit V=1

and copy the actual command line which executes the unit test, then run
it from the command line.

QTest
-----

QTest is a device emulation testing framework. It can be very useful to test
device models; it can also control certain aspects of QEMU (such as virtual
clock stepping) via a special purpose "qtest" protocol. Refer to
:doc:`qtest` for more details.

QTest cases can be executed with

.. code::

  make check-qtest

QAPI schema tests
-----------------

The QAPI schema tests validate the QAPI parser used by QMP, by feeding
predefined input to the parser and comparing the result with the reference
output.

The input/output data is managed under the ``tests/qapi-schema`` directory.
Each test case includes four files that have a common base name:

* ``${casename}.json`` - the JSON input fed to the parser
* ``${casename}.out`` - the expected stdout from the parser
* ``${casename}.err`` - the expected stderr from the parser
* ``${casename}.exit`` - the expected exit code
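
Conceptually, the comparison these four files drive can be sketched as follows.
This is a self-contained illustration only; ``check_case`` is a hypothetical
helper, not the harness QEMU actually uses:

```python
import subprocess
from pathlib import Path

def check_case(parser_cmd, case_dir, name):
    """Run the parser on <name>.json and diff the result against the
    <name>.out/.err/.exit reference files.  Returns a list of the
    streams that did not match (empty list means the case passes)."""
    base = Path(case_dir) / name
    result = subprocess.run(parser_cmd + [str(base) + '.json'],
                            capture_output=True, text=True)
    mismatches = []
    for stream, actual in (('out', result.stdout), ('err', result.stderr)):
        expected = base.with_suffix('.' + stream).read_text()
        if actual != expected:
            mismatches.append(stream)
    if int(base.with_suffix('.exit').read_text().strip()) != result.returncode:
        mismatches.append('exit')
    return mismatches
```

The same golden-output idea, with only stdout compared, is what the iotests
described later in this document rely on.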

Consider adding a new QAPI schema test when you are making a change to the QAPI
parser (either fixing a bug or extending/modifying the syntax). To do this:

1. Add four files for the new case as explained above. For example:

   ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``.

2. Add the new test in ``tests/Makefile.include``. For example:

   ``qapi-schema += foo.json``

check-block
-----------

``make check-block`` runs a subset of the block layer iotests (the tests that
are in the "auto" group).
See the "QEMU iotests" section below for more information.

GCC gcov support
----------------

``gcov`` is a GCC tool for analyzing test coverage by instrumenting the
tested code. To use it, configure QEMU with the ``--enable-gcov`` option
and build. Then run ``make check`` as usual.

If you want to gather coverage information for a single test, the ``make
clean-gcda`` target can be used to delete any existing coverage
information before running that test.

You can generate an HTML coverage report by executing ``make
coverage-html``, which will create
``meson-logs/coveragereport/index.html``.

Further analysis can be conducted by running the ``gcov`` command
directly on the various ``.gcda`` output files. Please read the ``gcov``
documentation for more information.

QEMU iotests
============

QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
framework widely used to test block layer related features. It is higher level
than the "make check" tests, and almost all of the code is written in Bash or
Python scripts. The success criterion is golden output comparison, and the
test files are named with numbers.

To run iotests, make sure QEMU is built successfully, then switch to the
``tests/qemu-iotests`` directory under the build directory, and run ``./check``
with the desired arguments from there.

By default, the "raw" format and the "file" protocol are used; all tests will
be executed, except the unsupported ones. You can override the format and
protocol with arguments:

.. code::

  # test with qcow2 format
  ./check -qcow2
  # or test a different protocol
  ./check -nbd

It's also possible to list test numbers explicitly:

.. code::

  # run selected cases with qcow2 format
  ./check -qcow2 001 030 153

The cache mode can be selected with the "-c" option, which may help reveal
bugs that are specific to certain cache modes.

More options are supported by the ``./check`` script; run ``./check -h`` for
help.

Writing a new test case
-----------------------

Consider writing a test case when you are making any changes to the block
layer. An iotest case is usually the right choice for that. There are already
many test cases, so it is possible that extending one of them may achieve the
goal and save the boilerplate of creating a new one. (Unfortunately, there
isn't a 100% reliable way to find a related one out of hundreds of tests. One
approach is using ``git grep``.)

Usually an iotest case consists of two files. One is an executable that
produces output to stdout and stderr, the other is the expected reference
output. They are given the same number in their file names, e.g. test script
``055`` and reference output ``055.out``.

In rare cases, when outputs differ between cache mode ``none`` and others, a
``.out.nocache`` file is added. In other cases, when outputs differ between
image formats, more than one ``.out`` file is created, ending with the
respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.
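
The format-specific fallback can be sketched as follows. This is an
illustrative helper only; the name ``reference_output`` is an assumption, and
the real lookup lives in the ``./check`` harness:

```python
import os

def reference_output(test_dir, seq, imgfmt):
    """Prefer a format-specific reference file (e.g. 178.out.qcow2) and
    fall back to the generic one (e.g. 178.out)."""
    specific = os.path.join(test_dir, f"{seq}.out.{imgfmt}")
    if os.path.exists(specific):
        return specific
    return os.path.join(test_dir, f"{seq}.out")
```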

There isn't a hard rule about how to write a test script, but a new test is
usually a (copy and) modification of an existing case. There are a few
commonly used ways to create a test:

* A Bash script. It will make use of several environment variables related
  to the testing procedure, and can source a group of ``common.*`` libraries
  for common helper routines.

* A Python unittest script. Import ``iotests`` and create a subclass of
  ``iotests.QMPTestCase``, then call the ``iotests.main`` method. The downside
  of this approach is that the output is scarce, and the script is considered
  harder to debug.

* A simple Python script without the unittest module. This can also import
  ``iotests`` for launching QEMU, utilities etc., but it doesn't inherit
  from ``iotests.QMPTestCase`` and therefore doesn't use the Python unittest
  execution. This is a combination of the first two approaches.

Pick the language per your preference, since both Bash and Python have
comparable library support for invoking and interacting with QEMU programs. If
you opt for Python, it is strongly recommended to write Python 3 compatible
code.
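
The unittest style above can be sketched like this. Since the in-tree
``iotests`` module is only available inside the source tree, the QMP
interaction is stubbed here; a real test would subclass
``iotests.QMPTestCase`` and finish with ``iotests.main()``:

```python
import unittest

class StubVM:
    """Stand-in for the VM object that iotests.QMPTestCase provides as
    self.vm; the real one sends QMP commands to a running QEMU."""
    def qmp(self, cmd, **args):
        # Pretend every command succeeds.
        return {'return': {}}

class TestBlockJob(unittest.TestCase):
    def setUp(self):
        self.vm = StubVM()          # a real test launches QEMU here

    def test_query_block(self):
        result = self.vm.qmp('query-block')
        self.assertIn('return', result)

# A real iotest would end with something like:
#   iotests.main(supported_fmts=['qcow2'])
```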

Both the Python and Bash frameworks in iotests provide helpers to manage test
images. They can be used to create and clean up images under the test
directory. If no I/O or protocol specific feature is needed, it is often
more convenient to use the pseudo block driver, ``null-co://``, as the test
image, which doesn't require image creation or cleanup. Avoid system-wide
devices or files, such as ``/dev/null`` or ``/dev/zero``, whenever possible.
Otherwise, image locking implications have to be considered. For example,
another application on the host may have locked the file, possibly leading to
a test failure. If using such devices is explicitly desired, consider adding
the ``locking=off`` option to disable image locking.

Test case groups
----------------

Tests may belong to one or more test groups, which are defined in the form
of a comment in the test source file. By convention, test groups are listed
on the second line of the test file, after the "#!/..." line, like this:

.. code::

  #!/usr/bin/env python3
  # group: auto quick
  #
  ...

Another way of defining groups is creating the ``tests/qemu-iotests/group.local``
file. This should be used only downstream (the file should never appear
upstream). It may be used for defining downstream test groups or for
temporarily disabling tests, like this:

.. code::

  # groups for some company downstream process
  #
  # ci - tests to run on build
  # down - our downstream tests, not for upstream
  #
  # Format of each line is:
  # TEST_NAME TEST_GROUP [TEST_GROUP ]...

  013 ci
  210 disabled
  215 disabled
  our-ugly-workaround-test down ci

Note that the following group names have a special meaning:

- quick: Tests in this group should finish within a few seconds.

- auto: Tests in this group are used during "make check" and should be
  runnable in any case. That means they should run with every QEMU binary
  (also non-x86), with every QEMU configuration (i.e. must not fail if
  an optional feature is not compiled in - but reporting a "skip" is ok),
  work at least with the qcow2 file format, work with all kinds of host
  filesystems and users (e.g. "nobody" or "root"), and must not take too
  much memory and disk space (since CI pipelines tend to fail otherwise).

- disabled: Tests in this group are disabled and ignored by check.
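
Extracting the group list from a test's header comment can be sketched like
this (an illustrative parser, not the actual ``./check`` implementation):

```python
def groups_of(test_source):
    """Extract the group names from a test's '# group:' comment line.
    By convention this is the second line, but scanning a few header
    lines is more forgiving."""
    for line in test_source.splitlines()[:5]:
        if line.startswith('# group:'):
            return line[len('# group:'):].split()
    return []

header = "#!/usr/bin/env python3\n# group: auto quick\n#\n"
```

For the ``header`` above, ``groups_of(header)`` yields ``['auto', 'quick']``.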

.. _container-ref:

Container based tests
=====================

Introduction
------------

The container testing framework in QEMU utilizes public images to
build and test QEMU in predefined and widely accessible Linux
environments. This makes it possible to expand the test coverage
across distros, toolchain flavors and library versions. The support
was originally written for Docker, although Podman is also supported
as an alternative container runtime. Although many of the target
names and scripts are prefixed with "docker", the system will
automatically run on whichever runtime is configured.

The container images are also used to augment the generation of tests
for testing TCG. See :ref:`checktcg-ref` for more details.

Docker Prerequisites
--------------------

Install "docker" with the system package manager and start the Docker service
on your development machine, then make sure you have the privilege to run
Docker commands. Typically this means setting up passwordless ``sudo docker``
commands, or logging in as root. For example:

.. code::

  $ sudo yum install docker
  $ # or `apt-get install docker` for Ubuntu, etc.
  $ sudo systemctl start docker
  $ sudo docker ps

The last command should print an empty table, verifying that the system is
ready.

An alternative method to set up permissions is by adding the current user to
the "docker" group and making the docker daemon socket file (by default
``/var/run/docker.sock``) accessible to the group:

.. code::

  $ sudo groupadd docker
  $ sudo usermod $USER -a -G docker
  $ sudo chown :docker /var/run/docker.sock

Note that any one of the above configurations makes it possible for the user
to exploit the whole host with Docker bind mounting or other privileged
operations. So only do it on development machines.

Podman Prerequisites
--------------------

Install "podman" with the system package manager.

.. code::

  $ sudo dnf install podman
  $ podman ps

The last command should print an empty table, verifying that the system is
ready.

Quickstart
----------

From the source tree, type ``make docker-help`` to see the help. Testing
can be started without configuring or building QEMU (``configure`` and
``make`` are done in the container, with parameters defined by the
make target):

.. code::

  make docker-test-build@centos8

This will create a container instance using the ``centos8`` image (the image
is downloaded and initialized automatically), in which the ``test-build`` job
is executed.

Registry
--------

The QEMU project has a container registry hosted by GitLab at
``registry.gitlab.com/qemu-project/qemu`` which will automatically be
used to pull in pre-built layers. This avoids unnecessary strain on
the distro archives caused by multiple developers running the same
container build steps over and over again. This can be overridden
locally by using the ``NOCACHE`` build option:

.. code::

  make docker-image-debian10 NOCACHE=1

Images
------

Along with many other images, the ``centos8`` image is defined in a Dockerfile
in ``tests/docker/dockerfiles/``, called ``centos8.docker``. The ``make
docker-help`` command lists all the available images.

To add a new image, simply create a new ``.docker`` file under the
``tests/docker/dockerfiles/`` directory.

A ``.pre`` script can be added beside the ``.docker`` file, which will be
executed before building the image under the build context directory. This is
mainly used to do necessary host side setup. One such setup is ``binfmt_misc``,
for example, to make qemu-user powered cross build containers work.

Tests
-----

Different tests are added to cover various configurations to build and test
QEMU. Docker tests are the executables under ``tests/docker`` named
``test-*``. They are typically shell scripts and are built on top of a shell
library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
source and build it.

The full list of tests is printed in the ``make docker-help`` help.

Debugging a Docker test failure
-------------------------------

When a CI task, a maintainer or your own run reports a Docker test failure,
follow these steps to debug it:

1. Locally reproduce the failure with the reported command line. E.g. run
   ``make docker-test-mingw@fedora J=8``.
2. Add "V=1" to the command line and try again, to see the verbose output.
3. Further add "DEBUG=1" to the command line. This will pause at a shell
   prompt in the container right before testing starts. You can either
   manually build QEMU and run tests from there, or press Ctrl-D to let the
   Docker testing continue.
4. If you press Ctrl-D, the same building and testing procedure will begin,
   and will hopefully run into the error again. After that, you will be
   dropped back to the prompt for debugging.

Options
-------

Various options can be used to affect how Docker tests are done. The full
list is in the ``make docker`` help text. The frequently used ones are:

* ``V=1``: the same as in the top level ``make``. It will be propagated to
  the container and enable verbose output.
* ``J=$N``: the number of parallel tasks in make commands in the container,
  similar to the ``-j $N`` option in the top level ``make``. (The ``-j``
  option in the top level ``make`` will not be propagated into the container.)
* ``DEBUG=1``: enables debug. See the previous "Debugging a Docker test
  failure" section.

Thread Sanitizer
================

Thread Sanitizer (TSan) is a tool which can detect data races. QEMU supports
building and testing with this tool.

For more information on TSan:

https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual

Thread Sanitizer in Docker
--------------------------
TSan is currently supported in the ubuntu2004 docker image.

The ``test-tsan`` test will build using TSan and then run ``make check``.

.. code::

  make docker-test-tsan@ubuntu2004

TSan warnings under docker are placed in files located at ``build/tsan/``.

We recommend using ``DEBUG=1`` to allow launching the test from inside the
docker container, and to allow review of the warnings generated by TSan.

Building and Testing with TSan
------------------------------

It is possible to build and test with TSan, with a few additional steps.
These steps are normally done automatically in the docker container.

At this time, a one-time patch is needed for clang-9 or clang-10:

.. code::

  sed -i 's/^const/static const/g' \
      /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h

To configure the build for TSan:

.. code::

  ../configure --enable-tsan --cc=clang-10 --cxx=clang++-10 \
               --disable-werror --extra-cflags="-O0"

The runtime behavior of TSan is controlled by the ``TSAN_OPTIONS`` environment
variable.

More information on ``TSAN_OPTIONS`` can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

For example:

.. code::

  export TSAN_OPTIONS=suppressions=<path to qemu>/tests/tsan/suppressions.tsan \
                      detect_deadlocks=false history_size=7 exitcode=0 \
                      log_path=<build path>/tsan/tsan_warning

The ``exitcode=0`` above has TSan continue without error if any warnings are
found. This allows for running the test and then checking the warnings
afterwards. If you want TSan to stop and exit with an error on warnings, use
``exitcode=66``.

TSan Suppressions
-----------------
Keep in mind that for any data race warning, although there might be a data
race detected by TSan, there might be no actual bug. TSan provides several
different mechanisms for suppressing warnings. In general, it is recommended
to fix the code to eliminate the data race, if possible, rather than suppress
the warning.

A few important files for suppressing warnings are:

``tests/tsan/suppressions.tsan`` - Has TSan warnings we wish to suppress at
runtime. The comment on each suppression will typically indicate why we are
suppressing it. More information on the file format can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions

``tests/tsan/blacklist.tsan`` - Has TSan warnings we wish to disable
at compile time, for test or debug. Add flags to configure to enable it:

.. code::

  --extra-cflags="-fsanitize-blacklist=<src path>/tests/tsan/blacklist.tsan"

More information on the file format can be found here under "Blacklist
Format":

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

TSan Annotations
----------------
``include/qemu/tsan.h`` defines annotations. See this file for more
descriptions of the annotations themselves. Annotations can be used to
suppress TSan warnings or to give TSan more information so that it can detect
proper relationships between accesses of data.

Annotation examples can be found here:

https://github.com/llvm/llvm-project/tree/master/compiler-rt/test/tsan/

Good files to start with are ``annotate_happens_before.cpp`` and
``ignore_race.cpp``.

The full set of annotations can be found here:

https://github.com/llvm/llvm-project/blob/master/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp

VM testing
==========

This test suite contains scripts that bootstrap various guest images that have
the necessary packages to build QEMU. The basic usage is documented in the
``Makefile`` help, which is displayed with ``make vm-help``.

Quickstart
----------

Run ``make vm-help`` to list available make targets. Invoke a specific make
command to run a build test in an image. For example, ``make vm-build-freebsd``
will build the source tree in the FreeBSD image. The command can be executed
from either the source tree or the build dir; if the former, ``./configure`` is
not needed. The command will then generate the test image in ``./tests/vm/``
under the working directory.

Note: images created by the scripts accept a well-known RSA key pair for SSH
access, so they SHOULD NOT be exposed to external interfaces if you are
concerned about attackers taking control of the guest and potentially
exploiting a QEMU security bug to compromise the host.

QEMU binaries
-------------

By default, ``qemu-system-x86_64`` is searched for in ``$PATH`` to run the
guest. If there isn't one, or if it is older than 2.10, the test won't work.
In this case, provide the QEMU binary in an environment variable:
``QEMU=/path/to/qemu-2.10+``.

Likewise, the path to ``qemu-img`` can be set in the ``QEMU_IMG`` environment
variable.
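
The binary probe can be sketched roughly as follows. This is a simplified
illustration; the helper names and the exact version parsing are assumptions,
and the real logic lives under ``tests/vm/``:

```python
import os
import re
import shutil

def find_qemu():
    """Honour the QEMU environment variable first, then search $PATH."""
    return os.environ.get('QEMU') or shutil.which('qemu-system-x86_64')

def version_ok(version_output, min_version=(2, 10)):
    """Check a 'qemu-system-x86_64 --version' banner against a minimum
    version, e.g. 'QEMU emulator version 5.2.0'."""
    m = re.search(r'version (\d+)\.(\d+)', version_output)
    return bool(m) and (int(m.group(1)), int(m.group(2))) >= min_version
```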

Make jobs
---------

The ``-j$X`` option in the make command line is not propagated into the VM;
specify ``J=$X`` to control the make jobs in the guest.

Debugging
---------

Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
debugging and verbose output. If this is not enough, see the next section.
``V=1`` will be propagated down into the make jobs in the guest.

Manual invocation
-----------------

Each guest script is an executable script with the same command line options.
For example, to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:

.. code::

  $ cd $QEMU_SRC/tests/vm

  # To bootstrap the image
  $ ./netbsd --build-image --image /var/tmp/netbsd.img
  <...>

  # To run an arbitrary command in the guest (the output will not be echoed
  # unless --debug is added)
  $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a

  # To build QEMU in the guest
  $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC

  # To get to an interactive shell
  $ ./netbsd --interactive --image /var/tmp/netbsd.img sh

Adding new guests
-----------------

Please look at the existing guest scripts for how to add new guests.

Most importantly, create a subclass of BaseVM, implement the
``build_image()`` method, define ``BUILD_SCRIPT``, and finally call
``basevm.main()`` from the script's ``main()``.

* Usually in ``build_image()``, a template image is downloaded from a
  predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache
  and the checksum, so consider using it.

* Once the image is downloaded, users, the SSH server and the QEMU build
  dependencies should be set up:

  - Root password set to ``BaseVM.ROOT_PASS``
  - User ``BaseVM.GUEST_USER`` is created, and password set to
    ``BaseVM.GUEST_PASS``
  - SSH service is enabled and started on boot,
    ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
    file of both root and the normal user
  - DHCP client service is enabled and started on boot, so that it can
    automatically configure the virtio-net-pci NIC and communicate with QEMU
    user net (10.0.2.2)
  - Necessary packages are installed to untar the source tarball and build
    QEMU

* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script
  that untars a raw virtio-blk block device, which is the tarball data blob
  of the QEMU source tree, then configures and builds it. Running
  "make check" is also recommended.
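
A skeleton of such a guest script might look like the following sketch. The
class name, device path and script body are made up for illustration, and
``BaseVM`` here is a minimal stand-in for the real class in
``tests/vm/basevm.py`` so the sketch is self-contained:

```python
class BaseVM:
    """Stand-in for tests/vm/basevm.BaseVM; a real guest script would do
    'from basevm import BaseVM' instead of defining this."""
    name = None
    BUILD_SCRIPT = ""
    def build_image(self, img):
        raise NotImplementedError

class ExampleVM(BaseVM):
    name = 'example'
    # Shell script template run inside the guest: untar the source from
    # the data disk, then configure, build and test it.
    BUILD_SCRIPT = """
        set -e
        cd $(mktemp -d)
        tar -xf /dev/vdb                  # hypothetical data-disk device
        ./configure {configure_opts}
        make --output-sync -j{jobs}
        make check
    """
    def build_image(self, img):
        # A real implementation would call
        # self._download_with_cache(<template image URL>, sha256sum=...)
        # and then set up users, SSH and build packages inside the image.
        pass
```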

Image fuzzer testing
====================

An image fuzzer was added to exercise format drivers. Currently only qcow2 is
supported. To start the fuzzer, run

.. code::

  tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2

Alternatively, a command different from "qemu-img info" can be tested, by
changing the ``-c`` option.

Acceptance tests using the Avocado Framework
============================================

The ``tests/acceptance`` directory hosts functional tests, also known
as acceptance level tests. They're usually higher level tests, and
may interact with external resources and with various guest operating
systems.

These tests are written using the Avocado Testing Framework (which must
be installed separately) in conjunction with the ``avocado_qemu.Test``
class, implemented at ``tests/acceptance/avocado_qemu``.

Tests based on ``avocado_qemu.Test`` can easily:

* Customize the command line arguments given to the convenience
  ``self.vm`` attribute (a QEMUMachine instance)

* Interact with the QEMU monitor, send QMP commands and check
  their results

* Interact with the guest OS, using the convenience console device
  (which may be useful to assert the effectiveness and correctness of
  command line arguments or QMP commands)

* Interact with external data files that accompany the test itself
  (see ``self.get_data()``)

* Download (and cache) remote data files, such as firmware and kernel
  images

* Have access to a library of guest OS images (by means of the
  ``avocado.utils.vmimage`` library)

* Make use of various other test related utilities available at the
  test class itself and at the utility library:

  - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
  - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html

Running tests
-------------

You can run the acceptance tests simply by executing:

.. code::

  make check-acceptance

This involves the automatic creation of a Python virtual environment
within the build tree (at ``tests/venv``) which will have all the
right dependencies, and will save test results also within the
build tree (at ``tests/results``).

Note: the build environment must be using a Python 3 stack, and have
the ``venv`` and ``pip`` packages installed. If necessary, make sure
``configure`` is called with ``--python=`` and that those modules are
available. On Debian and Ubuntu based systems, depending on the
specific version, they may be in packages named ``python3-venv`` and
``python3-pip``.

The scripts installed inside the virtual environment may be used
without an "activation". For instance, the Avocado test runner
may be invoked by running:

.. code::

  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/acceptance/

Manual Installation
-------------------

To manually install Avocado and its dependencies, run:

.. code::

  pip install --user avocado-framework

Alternatively, follow the instructions at this link:

https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/installing.html

Overview
--------

The ``tests/acceptance/avocado_qemu`` directory provides the
``avocado_qemu`` Python module, containing the ``avocado_qemu.Test``
class. Here's a simple usage example:

.. code::

  from avocado_qemu import Test


  class Version(Test):
      """
      :avocado: tags=quick
      """
      def test_qmp_human_info_version(self):
          self.vm.launch()
          res = self.vm.command('human-monitor-command',
                                command_line='info version')
          self.assertRegex(res, r'^(\d+\.\d+\.\d)')

To execute your test, run:

.. code::

  avocado run version.py

Tests may be classified according to a convention by using docstring
directives such as ``:avocado: tags=TAG1,TAG2``. To run all tests
in the current directory, tagged as "quick", run:

.. code::

  avocado run -t quick .

The ``avocado_qemu.Test`` base test class
-----------------------------------------

The ``avocado_qemu.Test`` class has a number of characteristics that
are worth mentioning right away.

First of all, it attempts to give each test a ready to use QEMUMachine
instance, available at ``self.vm``. Because many tests will tweak the
QEMU command line, launching the QEMUMachine (by using ``self.vm.launch()``)
is left to the test writer.

The base test class also has support for tests with more than one
QEMUMachine. The way to get machines is through the ``self.get_vm()``
method, which will return a QEMUMachine instance. The ``self.get_vm()``
method accepts arguments that will be passed to the QEMUMachine creation,
and also an optional ``name`` attribute so you can identify a specific
machine and get it more than once through the test's methods. A simple
and hypothetical example follows:

.. code::

  from avocado_qemu import Test


  class MultipleMachines(Test):
      def test_multiple_machines(self):
          first_machine = self.get_vm()
          second_machine = self.get_vm()
          self.get_vm(name='third_machine').launch()

          first_machine.launch()
          second_machine.launch()

          first_res = first_machine.command(
              'human-monitor-command',
              command_line='info version')

          second_res = second_machine.command(
              'human-monitor-command',
              command_line='info version')

          third_res = self.get_vm(name='third_machine').command(
              'human-monitor-command',
              command_line='info version')

          self.assertEqual(first_res, second_res)
          self.assertEqual(first_res, third_res)

At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all
QEMUMachines.
812
813 The ``avocado_qemu.LinuxTest`` base test class
814 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
815
816 The ``avocado_qemu.LinuxTest`` is further specialization of the
817 ``avocado_qemu.Test`` class, so it contains all the characteristics of
818 the later plus some extra features.
819
820 First of all, this base class is intended for tests that need to
821 interact with a fully booted and operational Linux guest. At this
822 time, it uses a Fedora 31 guest image. The most basic example looks
823 like this:
824
825 .. code::
826
827 from avocado_qemu import LinuxTest
828
829
830 class SomeTest(LinuxTest):
831
832 def test(self):
833 self.launch_and_wait()
834 self.ssh_command('some_command_to_be_run_in_the_guest')
835
836 Please refer to tests that use ``avocado_qemu.LinuxTest`` under
837 ``tests/acceptance`` for more examples.
838
839 QEMUMachine
840 ~~~~~~~~~~~
841
842 The QEMUMachine API is already widely used in the Python iotests,
843 device-crash-test and other Python scripts. It's a wrapper around the
844 execution of a QEMU binary, giving its users:
845
846 * the ability to set command line arguments to be given to the QEMU
847 binary
848
849 * a ready to use QMP connection and interface, which can be used to
850 send commands and inspect its results, as well as asynchronous
851 events
852
853 * convenience methods to set commonly used command line arguments in
854 a more succinct and intuitive way
855
856 QEMU binary selection
857 ~~~~~~~~~~~~~~~~~~~~~
858
859 The QEMU binary used for the ``self.vm`` QEMUMachine instance will
860 primarily depend on the value of the ``qemu_bin`` parameter. If it's
861 not explicitly set, its default value will be the result of a dynamic
862 probe in the same source tree. A suitable binary will be one that
targets the architecture matching the host machine.
864
865 Based on this description, test writers will usually rely on one of
866 the following approaches:
867
868 1) Set ``qemu_bin``, and use the given binary
869
870 2) Do not set ``qemu_bin``, and use a QEMU binary named like
871 "qemu-system-${arch}", either in the current
872 working directory, or in the current source tree.
873
874 The resulting ``qemu_bin`` value will be preserved in the
875 ``avocado_qemu.Test`` as an attribute with the same name.
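
The selection logic above can be sketched in plain Python. This is an
illustrative approximation only, not the framework's actual implementation;
the function name and arguments are hypothetical:

```python
import os
import tempfile

def pick_qemu_bin(arch, qemu_bin=None, search_dirs=None):
    # Approach 1: an explicitly set qemu_bin parameter wins.
    if qemu_bin:
        return qemu_bin
    # Approach 2: probe for "qemu-system-<arch>" in the given
    # directories (e.g. the current working directory and source tree).
    candidate = 'qemu-system-%s' % arch
    for directory in (search_dirs or [os.getcwd()]):
        path = os.path.join(directory, candidate)
        if os.path.isfile(path):
            return path
    return None

# A fake binary placed in a temporary directory is found by the probe:
with tempfile.TemporaryDirectory() as tmp:
    fake = os.path.join(tmp, 'qemu-system-aarch64')
    open(fake, 'w').close()
    assert pick_qemu_bin('aarch64', search_dirs=[tmp]) == fake
```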
876
877 Attribute reference
878 -------------------
879
880 Besides the attributes and methods that are part of the base
881 ``avocado.Test`` class, the following attributes are available on any
882 ``avocado_qemu.Test`` instance.
883
884 vm
885 ~~
886
887 A QEMUMachine instance, initially configured according to the given
888 ``qemu_bin`` parameter.
889
890 arch
891 ~~~~
892
893 The architecture can be used on different levels of the stack, e.g. by
894 the framework or by the test itself. At the framework level, it will
895 currently influence the selection of a QEMU binary (when one is not
896 explicitly given).
897
898 Tests are also free to use this attribute value, for their own needs.
899 A test may, for instance, use the same value when selecting the
900 architecture of a kernel or disk image to boot a VM with.
901
902 The ``arch`` attribute will be set to the test parameter of the same
903 name. If one is not given explicitly, it will either be set to
904 ``None``, or, if the test is tagged with one (and only one)
905 ``:avocado: tags=arch:VALUE`` tag, it will be set to ``VALUE``.
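
The precedence described above (explicit parameter, then a single tag, then
``None``) can be illustrated with a small sketch. The helper below is
hypothetical, written only to show the fallback order; the same pattern
applies to the ``cpu`` and ``machine`` attributes:

```python
def attribute_from_params_or_tags(params, tags, key):
    # An explicitly given test parameter takes precedence.
    if key in params:
        return params[key]
    # Otherwise use the value of a single "key:VALUE" tag, if present.
    values = [tag.split(':', 1)[1] for tag in tags
              if tag.startswith(key + ':')]
    if len(values) == 1:
        return values[0]
    # No parameter and zero (or ambiguous) tags: fall back to None.
    return None

assert attribute_from_params_or_tags({}, ['arch:aarch64'], 'arch') == 'aarch64'
assert attribute_from_params_or_tags({'arch': 'x86_64'},
                                     ['arch:aarch64'], 'arch') == 'x86_64'
assert attribute_from_params_or_tags({}, [], 'arch') is None
```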
906
907 cpu
908 ~~~
909
910 The cpu model that will be set to all QEMUMachine instances created
911 by the test.
912
913 The ``cpu`` attribute will be set to the test parameter of the same
914 name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
916 ``:avocado: tags=cpu:VALUE`` tag, it will be set to ``VALUE``.
917
918 machine
919 ~~~~~~~
920
921 The machine type that will be set to all QEMUMachine instances created
922 by the test.
923
924 The ``machine`` attribute will be set to the test parameter of the same
925 name. If one is not given explicitly, it will either be set to
926 ``None``, or, if the test is tagged with one (and only one)
927 ``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.
928
929 qemu_bin
930 ~~~~~~~~
931
932 The preserved value of the ``qemu_bin`` parameter or the result of the
933 dynamic probe for a QEMU binary in the current working directory or
934 source tree.
935
936 LinuxTest
937 ~~~~~~~~~
938
939 Besides the attributes present on the ``avocado_qemu.Test`` base
940 class, the ``avocado_qemu.LinuxTest`` adds the following attributes:
941
942 distro
943 ......
944
945 The name of the Linux distribution used as the guest image for the
946 test. The name should match the **Provider** column on the list
947 of images supported by the avocado.utils.vmimage library:
948
949 https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
950
951 distro_version
952 ..............
953
The version of the Linux distribution used as the guest image for the
955 test. The name should match the **Version** column on the list
956 of images supported by the avocado.utils.vmimage library:
957
958 https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
959
960 distro_checksum
961 ...............
962
963 The sha256 hash of the guest image file used for the test.
964
965 If this value is not set in the code or by a test parameter (with the
966 same name), no validation on the integrity of the image will be
967 performed.
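
The validation step can be sketched with the standard library's ``hashlib``.
This is an illustrative helper, not the framework's code:

```python
import hashlib

def image_checksum_ok(path, distro_checksum=None, chunk_size=1 << 20):
    # When no checksum is set, no integrity validation is performed.
    if distro_checksum is None:
        return True
    digest = hashlib.sha256()
    with open(path, 'rb') as image:
        # Hash the image in chunks so large files do not need to fit
        # in memory at once.
        for chunk in iter(lambda: image.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest() == distro_checksum
```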
968
969 Parameter reference
970 -------------------
971
972 To understand how Avocado parameters are accessed by tests, and how
973 they can be passed to tests, please refer to::
974
975 https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#accessing-test-parameters
976
977 Parameter values can be easily seen in the log files, and will look
978 like the following:
979
980 .. code::
981
PARAMS (key=qemu_bin, path=*, default=./qemu-system-x86_64) => './qemu-system-x86_64'
983
984 arch
985 ~~~~
986
987 The architecture that will influence the selection of a QEMU binary
988 (when one is not explicitly given).
989
990 Tests are also free to use this parameter value, for their own needs.
991 A test may, for instance, use the same value when selecting the
992 architecture of a kernel or disk image to boot a VM with.
993
994 This parameter has a direct relation with the ``arch`` attribute. If
not given, it will default to ``None``.
996
997 cpu
998 ~~~
999
1000 The cpu model that will be set to all QEMUMachine instances created
1001 by the test.
1002
1003 machine
1004 ~~~~~~~
1005
1006 The machine type that will be set to all QEMUMachine instances created
1007 by the test.
1008
1009
1010 qemu_bin
1011 ~~~~~~~~
1012
The exact QEMU binary to be used by QEMUMachine.
1014
1015 LinuxTest
1016 ~~~~~~~~~
1017
1018 Besides the parameters present on the ``avocado_qemu.Test`` base
1019 class, the ``avocado_qemu.LinuxTest`` adds the following parameters:
1020
1021 distro
1022 ......
1023
1024 The name of the Linux distribution used as the guest image for the
1025 test. The name should match the **Provider** column on the list
1026 of images supported by the avocado.utils.vmimage library:
1027
1028 https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
1029
1030 distro_version
1031 ..............
1032
The version of the Linux distribution used as the guest image for the
1034 test. The name should match the **Version** column on the list
1035 of images supported by the avocado.utils.vmimage library:
1036
1037 https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images
1038
1039 distro_checksum
1040 ...............
1041
1042 The sha256 hash of the guest image file used for the test.
1043
If this value is not set in the code or by this parameter, no
validation of the integrity of the image will be performed.
1046
1047 Skipping tests
1048 --------------
The Avocado framework provides Python decorators which make it easy to skip
tests running under certain conditions, for example, when a binary is missing
on the test system or when the running environment is a CI system. For further
information about those decorators, please refer to::
1052 information about those decorators, please refer to::
1053
1054 https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#skipping-tests
1055
While the conditions for skipping tests are often specific to each test, there
are recurring scenarios identified by the QEMU developers, and the use of
environment variables has become a standard way to enable/disable tests.
1059
1060 Here is a list of the most used variables:
1061
1062 AVOCADO_ALLOW_LARGE_STORAGE
1063 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tests which are going to fetch or produce assets considered *large* are not
going to run unless ``AVOCADO_ALLOW_LARGE_STORAGE=1`` is exported in
the environment.
1067
1068 The definition of *large* is a bit arbitrary here, but it usually means an
asset which occupies at least 1GB on disk when uncompressed.
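
Avocado's skip decorators mirror the ones in the standard library's
``unittest`` module, so the gating can be sketched with the stdlib
equivalent (the test class and method names below are made up):

```python
import os
import unittest

class FetchLargeAsset(unittest.TestCase):
    # Skipped unless AVOCADO_ALLOW_LARGE_STORAGE is exported.
    @unittest.skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'),
                         'storage limited')
    def test_fetch(self):
        pass  # a real test would fetch the >1GB asset here

# Run the test case and inspect whether it was skipped.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(FetchLargeAsset).run(result)
```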
1070
1071 AVOCADO_ALLOW_UNTRUSTED_CODE
1072 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are tests which will boot a kernel image or firmware that can be
considered not safe to run on the developer's workstation, thus they are
skipped by default. The definition of *not safe* is also arbitrary, but
it usually means a blob whose source or build process is not publicly
available.
1078
You should export ``AVOCADO_ALLOW_UNTRUSTED_CODE=1`` in the environment in
order to allow tests which make use of those kinds of assets.
1081
1082 AVOCADO_TIMEOUT_EXPECTED
1083 ~~~~~~~~~~~~~~~~~~~~~~~~
The Avocado framework has a timeout mechanism which interrupts tests to avoid
the test suite getting stuck. The timeout value can be set via a test
parameter or a property defined in the test class; for further details see::
1087
1088 https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#setting-a-test-timeout
1089
1090 Even though the timeout can be set by the test developer, there are some tests
1091 that may not have a well-defined limit of time to finish under certain
1092 conditions. For example, tests that take longer to execute when QEMU is
compiled with debug flags. Therefore, the ``AVOCADO_TIMEOUT_EXPECTED`` variable
is used to determine whether those tests should run or not.
1095
1096 GITLAB_CI
1097 ~~~~~~~~~
A number of tests are flagged to not run on the GitLab CI, usually because
they proved to be flaky or there are constraints on the CI environment which
would make them fail. If you encounter a similar situation, use that
variable as shown in the code snippet below to skip the test:
1102
1103 .. code::
1104
1105 @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
1106 def test(self):
1107 do_something()
1108
1109 Uninstalling Avocado
1110 --------------------
1111
1112 If you've followed the manual installation instructions above, you can
1113 easily uninstall Avocado. Start by listing the packages you have
1114 installed::
1115
1116 pip list --user
1117
1118 And remove any package you want with::
1119
1120 pip uninstall <package_name>
1121
1122 If you've used ``make check-acceptance``, the Python virtual environment where
1123 Avocado is installed will be cleaned up as part of ``make check-clean``.
1124
1125 .. _checktcg-ref:
1126
1127 Testing with "make check-tcg"
1128 =============================
1129
1130 The check-tcg tests are intended for simple smoke tests of both
linux-user and softmmu TCG functionality. However, to build test
1132 programs for guest targets you need to have cross compilers available.
1133 If your distribution supports cross compilers you can do something as
1134 simple as::
1135
1136 apt install gcc-aarch64-linux-gnu
1137
1138 The configure script will automatically pick up their presence.
Sometimes compilers have slightly odd names, so their availability can
be hinted at by passing the appropriate configure option for the
architecture in question, for example::
1142
1143 $(configure) --cross-cc-aarch64=aarch64-cc
1144
1145 There is also a ``--cross-cc-flags-ARCH`` flag in case additional
1146 compiler flags are needed to build for a given target.
1147
If you have the ability to run containers as the user, the build system
1149 will automatically use them where no system compiler is available. For
1150 architectures where we also support building QEMU we will generally
1151 use the same container to build tests. However there are a number of
1152 additional containers defined that have a minimal cross-build
1153 environment that is only suitable for building test cases. Sometimes
1154 we may use a bleeding edge distribution for compiler features needed
1155 for test cases that aren't yet in the LTS distros we support for QEMU
1156 itself.
1157
1158 See :ref:`container-ref` for more details.
1159
Running a subset of tests
-------------------------
1162
1163 You can build the tests for one architecture::
1164
1165 make build-tcg-tests-$TARGET
1166
1167 And run with::
1168
1169 make run-tcg-tests-$TARGET
1170
1171 Adding ``V=1`` to the invocation will show the details of how to
1172 invoke QEMU for the test which is useful for debugging tests.
1173
1174 TCG test dependencies
1175 ---------------------
1176
1177 The TCG tests are deliberately very light on dependencies and are
1178 either totally bare with minimal gcc lib support (for softmmu tests)
1179 or just glibc (for linux-user tests). This is because getting a cross
1180 compiler to work with additional libraries can be challenging.
1181
1182 Other TCG Tests
1183 ---------------
1184
1185 There are a number of out-of-tree test suites that are used for more
1186 extensive testing of processor features.
1187
1188 KVM Unit Tests
1189 ~~~~~~~~~~~~~~
1190
The KVM unit tests are designed to run as a Guest OS under KVM, but
there is no reason why they can't exercise the TCG as well. The suite
provides a minimal OS kernel with hooks for enabling the MMU as well
as reporting test results via a special device::
1195
1196 https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
1197
1198 Linux Test Project
1199 ~~~~~~~~~~~~~~~~~~
1200
1201 The LTP is focused on exercising the syscall interface of a Linux
1202 kernel. It checks that syscalls behave as documented and strives to
1203 exercise as many corner cases as possible. It is a useful test suite
1204 to run to exercise QEMU's linux-user code::
1205
1206 https://linux-test-project.github.io/