Testing in QEMU
===============

This document describes the testing infrastructure in QEMU.

Testing with "make check"
-------------------------

The "make check" testing family includes most of the C based tests in QEMU. For
quick help, run ``make check-help`` from the source tree.

The usual way to run these tests is:

.. code::

  make check

which includes QAPI schema tests, unit tests, QTests and some iotests.
Different sub-types of "make check" tests will be explained below.

Before running tests, it is best to build QEMU programs first. Some tests
expect the executables to exist and will fail with obscure messages if they
cannot find them.

Unit tests
~~~~~~~~~~

Unit tests, which can be invoked with ``make check-unit``, are simple C tests
that typically link to individual QEMU object files and exercise them by
calling exported functions.

If you are writing new code in QEMU, consider adding a unit test, especially
for utility modules that are relatively stateless or have few dependencies. To
add a new unit test:

1. Create a new source file. For example, ``tests/unit/foo-test.c``.

2. Write the test. Normally you would include the header file which exports
   the module API, then verify the interface behaves as expected from your
   test. The test code should be organized using the glib testing framework.
   Copying and modifying an existing test is usually a good idea.

3. Add the test to ``tests/unit/meson.build``. The unit tests are listed in a
   dictionary called ``tests``. The values are any additional sources and
   dependencies to be linked with the test. For a simple test whose source
   is in ``tests/unit/foo-test.c``, it is enough to add an entry like::

     {
       ...
       'foo-test': [],
       ...
     }

Since unit tests don't require environment variables, the simplest way to debug
a unit test failure is often directly invoking it or even running it under
``gdb``. However there can still be differences in behavior between ``make``
invocations and your manual run, due to the ``$MALLOC_PERTURB_`` environment
variable (which affects memory reclamation and catches invalid pointers better)
and gtester options. If necessary, you can run

.. code::

  make check-unit V=1

and copy the actual command line which executes the unit test, then run
it from the command line.

QTest
~~~~~

QTest is a device emulation testing framework. It can be very useful to test
device models; it can also control certain aspects of QEMU (such as virtual
clock stepping) through a special purpose "qtest" protocol. Refer to
:doc:`qtest` for more details.

QTest cases can be executed with

.. code::

  make check-qtest

QAPI schema tests
~~~~~~~~~~~~~~~~~

The QAPI schema tests validate the QAPI parser used by QMP, by feeding
predefined input to the parser and comparing the result with the reference
output.

The input/output data is managed under the ``tests/qapi-schema`` directory.
Each test case includes four files that have a common base name:

* ``${casename}.json`` - the file contains the JSON input for feeding the
  parser
* ``${casename}.out`` - the file contains the expected stdout from the parser
* ``${casename}.err`` - the file contains the expected stderr from the parser
* ``${casename}.exit`` - the expected error code

Consider adding a new QAPI schema test when you are making a change to the QAPI
parser (either fixing a bug or extending/modifying the syntax). To do this:

1. Add four files for the new case as explained above. For example:

   ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``.

2. Add the new test in ``tests/Makefile.include``. For example:

   ``qapi-schema += foo.json``

check-block
~~~~~~~~~~~

``make check-block`` runs a subset of the block layer iotests (the tests that
are in the "auto" group).
See the "QEMU iotests" section below for more information.

QEMU iotests
------------

QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
framework widely used to test block layer related features. It is higher level
than "make check" tests and 99% of the code is written in bash or Python
scripts. The success criterion is comparison against golden output, and the
test files are named with numbers.

To run iotests, make sure QEMU is built successfully, then switch to the
``tests/qemu-iotests`` directory under the build directory, and run ``./check``
with desired arguments from there.

By default, the "raw" format and the "file" protocol are used; all tests will
be executed, except the unsupported ones. You can override the format and
protocol with arguments:

.. code::

  # test with qcow2 format
  ./check -qcow2
  # or test a different protocol
  ./check -nbd

It's also possible to list test numbers explicitly:

.. code::

  # run selected cases with qcow2 format
  ./check -qcow2 001 030 153

Cache mode can be selected with the "-c" option, which may help reveal bugs
that are specific to certain cache modes.

More options are supported by the ``./check`` script; run ``./check -h`` for
help.

Writing a new test case
~~~~~~~~~~~~~~~~~~~~~~~

Consider writing a test case when you are making any changes to the block
layer. An iotest case is usually the right choice for that. There are already
many test cases, so it is possible that extending one of them may achieve the
goal and save the boilerplate of creating a new one. (Unfortunately, there
isn't a 100% reliable way to find a related one out of hundreds of tests. One
approach is using ``git grep``.)

Usually an iotest case consists of two files. One is an executable that
produces output to stdout and stderr, the other is the expected reference
output. They are given the same number in their file names, e.g. test script
``055`` and reference output ``055.out``.

In rare cases, when outputs differ between cache mode ``none`` and others, a
``.out.nocache`` file is added. In other cases, when outputs differ between
image formats, more than one ``.out`` file is created, ending with the
respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.

There isn't a hard rule about how to write a test script, but a new test is
usually a (copy and) modification of an existing case. There are a few
commonly used ways to create a test:

* A Bash script. It will make use of several environment variables related
  to the testing procedure, and could source a group of ``common.*`` libraries
  for some common helper routines.

* A Python unittest script. Import ``iotests`` and create a subclass of
  ``iotests.QMPTestCase``, then call the ``iotests.main`` method (a minimal
  sketch appears at the end of this section). The downside of this approach is
  that the output is sparse, and the script is considered harder to debug.

* A simple Python script without using the unittest module. This could also
  import ``iotests`` for launching QEMU and utilities etc., but it doesn't
  inherit from ``iotests.QMPTestCase`` and therefore doesn't use the Python
  unittest execution. This is a combination of the first two approaches.

Pick the language per your preference since both Bash and Python have
comparable library support for invoking and interacting with QEMU programs. If
you opt for Python, it is strongly recommended to write Python 3 compatible
code.

Both Python and Bash frameworks in iotests provide helpers to manage test
images. They can be used to create and clean up images under the test
directory. If no I/O or any protocol specific feature is needed, it is often
more convenient to use the pseudo block driver, ``null-co://``, as the test
image, which doesn't require image creation or cleaning up. Avoid system-wide
devices or files whenever possible, such as ``/dev/null`` or ``/dev/zero``.
Otherwise, image locking implications have to be considered. For example,
another application on the host may have locked the file, possibly leading to a
test failure. If using such devices is explicitly desired, consider adding the
``locking=off`` option to disable image locking.

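Putting these pieces together, here is a minimal sketch of a unittest-style
Python case in the spirit described above. It is a hypothetical example (not a
test that exists in the tree) and only shows the overall shape; a real test
also needs a license header, an ``NNN.out`` reference file containing the usual
Python unittest output, and appropriate group tags:

.. code::

  #!/usr/bin/env python3
  # group: quick
  #
  # Hypothetical example; see existing tests for the full boilerplate.

  import iotests


  class TestNullCo(iotests.QMPTestCase):
      def setUp(self):
          self.vm = iotests.VM()
          self.vm.launch()

      def tearDown(self):
          self.vm.shutdown()

      def test_blockdev_add_null_co(self):
          # Use the pseudo block driver so no image file is needed
          result = self.vm.qmp('blockdev-add', driver='null-co',
                               node_name='null0', size=64 * 1024 * 1024)
          self.assert_qmp(result, 'return', {})


  if __name__ == '__main__':
      iotests.main(supported_fmts=['raw', 'qcow2'],
                   supported_protocols=['file'])

Like any other iotest, such a script would be run through ``./check``, which
sets up the environment variables and performs the reference output comparison.
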
Debugging a test case
~~~~~~~~~~~~~~~~~~~~~

The following options to the ``check`` script can be useful when debugging
a failing test:

* ``-gdb`` wraps every QEMU invocation in a ``gdbserver``, which waits for a
  connection from a gdb client. The options given to ``gdbserver`` (e.g. the
  address on which to listen for connections) are taken from the ``$GDB_OPTIONS``
  environment variable. By default (if ``$GDB_OPTIONS`` is empty), it listens on
  ``localhost:12345``.
  It is possible to connect to it for example with
  ``gdb -iex "target remote $addr"``, where ``$addr`` is the address
  ``gdbserver`` listens on.
  If the ``-gdb`` option is not used, ``$GDB_OPTIONS`` is ignored,
  regardless of whether it is set or not.

* ``-valgrind`` attaches a valgrind instance to QEMU. If it detects
  warnings, it will print and save the log in
  ``$TEST_DIR/<valgrind_pid>.valgrind``.
  The final command line will be
  ``valgrind --log-file=$TEST_DIR/<valgrind_pid>.valgrind --error-exitcode=99 $QEMU ...``

* ``-d`` (debug) just increases the logging verbosity, showing
  for example the QMP commands and answers.

* ``-p`` (print) redirects QEMU's stdout and stderr to the test output,
  instead of saving it into a log file in
  ``$TEST_DIR/qemu-machine-<random_string>``.

Test case groups
~~~~~~~~~~~~~~~~

Tests may belong to one or more test groups, which are defined in the form
of a comment in the test source file. By convention, test groups are listed
in the second line of the test file, after the "#!/..." line, like this:

.. code::

  #!/usr/bin/env python3
  # group: auto quick
  #
  ...

Another way of defining groups is creating the ``tests/qemu-iotests/group.local``
file. This should be used only for downstream purposes (the file should never
appear upstream). It may be used for defining some downstream test groups
or for temporarily disabling tests, like this:

.. code::

  # groups for some company downstream process
  #
  # ci - tests to run on build
  # down - our downstream tests, not for upstream
  #
  # Format of each line is:
  # TEST_NAME TEST_GROUP [TEST_GROUP ]...

  013 ci
  210 disabled
  215 disabled
  our-ugly-workaround-test down ci

Note that the following group names have a special meaning:

- quick: Tests in this group should finish within a few seconds.

- auto: Tests in this group are used during "make check" and should be
  runnable in any case. That means they should run with every QEMU binary
  (also non-x86), with every QEMU configuration (i.e. must not fail if
  an optional feature is not compiled in - but reporting a "skip" is ok),
  work at least with the qcow2 file format, work with all kinds of host
  filesystems and users (e.g. "nobody" or "root") and must not take too
  much memory and disk space (since CI pipelines tend to fail otherwise).

- disabled: Tests in this group are disabled and ignored by check.

.. _container-ref:

Container based tests
---------------------

Introduction
~~~~~~~~~~~~

The container testing framework in QEMU utilizes public images to
build and test QEMU in predefined and widely accessible Linux
environments. This makes it possible to expand the test coverage
across distros, toolchain flavors and library versions. The support
was originally written for Docker, although we also support Podman as
an alternative container runtime. Although many of the target
names and scripts are prefixed with "docker", the system will
automatically use whichever runtime is configured.

The container images are also used to augment the generation of tests
for testing TCG. See :ref:`checktcg-ref` for more details.

Docker Prerequisites
~~~~~~~~~~~~~~~~~~~~

Install "docker" with the system package manager and start the Docker service
on your development machine, then make sure you have the privilege to run
Docker commands. Typically it means setting up a passwordless ``sudo docker``
command or logging in as root. For example:

.. code::

  $ sudo yum install docker
  $ # or `apt-get install docker` for Ubuntu, etc.
  $ sudo systemctl start docker
  $ sudo docker ps

The last command should print an empty table, verifying the system is ready.

An alternative method to set up permissions is by adding the current user to
the "docker" group and making the docker daemon socket file (by default
``/var/run/docker.sock``) accessible to the group:

.. code::

  $ sudo groupadd docker
  $ sudo usermod $USER -a -G docker
  $ sudo chown :docker /var/run/docker.sock

Note that any one of the above configurations makes it possible for the user to
exploit the whole host with Docker bind mounting or other privileged
operations, so only do it on development machines.

Podman Prerequisites
~~~~~~~~~~~~~~~~~~~~

Install "podman" with the system package manager.

.. code::

  $ sudo dnf install podman
  $ podman ps

The last command should print an empty table, verifying the system is ready.

Quickstart
~~~~~~~~~~

From the source tree, type ``make docker-help`` to see the help. Testing
can be started without configuring or building QEMU (``configure`` and
``make`` are done in the container, with parameters defined by the
make target):

.. code::

  make docker-test-build@centos8

This will create a container instance using the ``centos8`` image (the image
is downloaded and initialized automatically), in which the ``test-build`` job
is executed.

Registry
~~~~~~~~

The QEMU project has a container registry hosted by GitLab at
``registry.gitlab.com/qemu-project/qemu`` which will automatically be
used to pull in pre-built layers. This avoids unnecessary strain on
the distro archives created by multiple developers running the same
container build steps over and over again. This can be overridden
locally by using the ``NOCACHE`` build option:

.. code::

  make docker-image-debian10 NOCACHE=1

Images
~~~~~~

Along with many other images, the ``centos8`` image is defined in a Dockerfile
in ``tests/docker/dockerfiles/``, called ``centos8.docker``. The
``make docker-help`` command will list all the available images.

To add a new image, simply create a new ``.docker`` file under the
``tests/docker/dockerfiles/`` directory.

A ``.pre`` script can be added beside the ``.docker`` file, which will be
executed before building the image under the build context directory. This is
mainly used to do necessary host side setup. One such setup is ``binfmt_misc``,
for example, to make qemu-user powered cross build containers work.

Tests
~~~~~

Different tests are added to cover various configurations to build and test
QEMU. Docker tests are the executables under ``tests/docker`` named
``test-*``. They are typically shell scripts and are built on top of a shell
library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
source and build it.

The full list of tests is printed in the ``make docker-help`` help.

Debugging a Docker test failure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When CI tasks, maintainers or you yourself report a Docker test failure, follow
the steps below to debug it:

1. Locally reproduce the failure with the reported command line. E.g. run
   ``make docker-test-mingw@fedora J=8``.
2. Add "V=1" to the command line, try again, to see the verbose output.
3. Further add "DEBUG=1" to the command line. This will pause at a shell prompt
   in the container right before testing starts. You could either manually
   build QEMU and run tests from there, or press Ctrl-D to let the Docker
   testing continue.
4. If you press Ctrl-D, the same building and testing procedure will begin, and
   will hopefully run into the error again. After that, you will be dropped
   back to the prompt for debugging.

Options
~~~~~~~

Various options can be used to affect how Docker tests are done. The full
list is in the ``make docker`` help text. The frequently used ones are:

* ``V=1``: the same as in top level ``make``. It will be propagated to the
  container and enable verbose output.
* ``J=$N``: the number of parallel tasks in make commands in the container,
  similar to the ``-j $N`` option in top level ``make``. (The ``-j`` option in
  top level ``make`` will not be propagated into the container.)
* ``DEBUG=1``: enables debug. See the previous "Debugging a Docker test
  failure" section.

Thread Sanitizer
----------------

Thread Sanitizer (TSan) is a tool which can detect data races. QEMU supports
building and testing with this tool.

For more information on TSan:

https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual

Thread Sanitizer in Docker
~~~~~~~~~~~~~~~~~~~~~~~~~~

TSan is currently supported in the ubuntu2004 docker image.

The ``test-tsan`` test will build using TSan and then run ``make check``.

.. code::

  make docker-test-tsan@ubuntu2004

TSan warnings under docker are placed in files located at ``build/tsan/``.

We recommend using ``DEBUG=1`` to allow launching the test from inside the
docker container, and to allow review of the warnings generated by TSan.

Building and Testing with TSan
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to build and test with TSan, with a few additional steps.
These steps are normally done automatically in the docker container.

At this time, a one-time patch is needed in clang-9 or clang-10:

.. code::

  sed -i 's/^const/static const/g' \
      /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h

To configure the build for TSan:

.. code::

  ../configure --enable-tsan --cc=clang-10 --cxx=clang++-10 \
               --disable-werror --extra-cflags="-O0"

The runtime behavior of TSan is controlled by the ``TSAN_OPTIONS`` environment
variable.

More information on ``TSAN_OPTIONS`` can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

For example:

.. code::

  export TSAN_OPTIONS=suppressions=<path to qemu>/tests/tsan/suppressions.tsan \
                      detect_deadlocks=false history_size=7 exitcode=0 \
                      log_path=<build path>/tsan/tsan_warning

The above ``exitcode=0`` has TSan continue without error if any warnings are
found. This allows for running the test and then checking the warnings
afterwards. If you want TSan to stop and exit with an error on warnings, use
``exitcode=66``.

TSan Suppressions
~~~~~~~~~~~~~~~~~

Keep in mind that for any data race warning, although TSan has detected a data
race, there might be no actual bug. TSan provides several different mechanisms
for suppressing warnings. In general it is recommended to fix the code if
possible to eliminate the data race rather than suppress the warning.

A few important files for suppressing warnings are:

``tests/tsan/suppressions.tsan`` - Has TSan warnings we wish to suppress at
runtime. The comment on each suppression will typically indicate why we are
suppressing it. More information on the file format can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions

``tests/tsan/blacklist.tsan`` - Has TSan warnings we wish to disable at compile
time for test or debug. Add flags to configure to enable:

"--extra-cflags=-fsanitize-blacklist=<src path>/tests/tsan/blacklist.tsan"

More information on the file format can be found here under "Blacklist Format":

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

TSan Annotations
~~~~~~~~~~~~~~~~

``include/qemu/tsan.h`` defines annotations. See this file for more descriptions
of the annotations themselves. Annotations can be used to suppress
TSan warnings or give TSan more information so that it can detect proper
relationships between accesses of data.

Annotation examples can be found here:

https://github.com/llvm/llvm-project/tree/master/compiler-rt/test/tsan/

Good files to start with are: ``annotate_happens_before.cpp`` and ``ignore_race.cpp``

The full set of annotations can be found here:

https://github.com/llvm/llvm-project/blob/master/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp

VM testing
----------

This test suite contains scripts that bootstrap various guest images that have
the necessary packages to build QEMU. The basic usage is documented in the
``Makefile`` help, which is displayed with ``make vm-help``.

Quickstart
~~~~~~~~~~

Run ``make vm-help`` to list available make targets. Invoke a specific make
command to run a build test in an image. For example, ``make vm-build-freebsd``
will build the source tree in the FreeBSD image. The command can be executed
from either the source tree or the build dir; if the former, ``./configure`` is
not needed. The command will then generate the test image in ``./tests/vm/``
under the working directory.

Note: images created by the scripts accept a well-known RSA key pair for SSH
access, so they SHOULD NOT be exposed to external interfaces if you are
concerned about attackers taking control of the guest and potentially
exploiting a QEMU security bug to compromise the host.

QEMU binaries
~~~~~~~~~~~~~

By default, ``qemu-system-x86_64`` is searched for in ``$PATH`` to run the
guest. If there isn't one, or if it is older than 2.10, the test won't work. In
this case, provide the QEMU binary in an environment variable:
``QEMU=/path/to/qemu-2.10+``.

Likewise the path to ``qemu-img`` can be set in the ``QEMU_IMG`` environment
variable.

Make jobs
~~~~~~~~~

The ``-j$X`` option in the make command line is not propagated into the VM;
specify ``J=$X`` to control the make jobs in the guest.

Debugging
~~~~~~~~~

Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
debugging and verbose output. If this is not enough, see the next section.
``V=1`` will be propagated down into the make jobs in the guest.

Manual invocation
~~~~~~~~~~~~~~~~~

Each guest script is an executable script with the same command line options.
For example to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:

.. code::

  $ cd $QEMU_SRC/tests/vm

  # To bootstrap the image
  $ ./netbsd --build-image --image /var/tmp/netbsd.img
  <...>

  # To run an arbitrary command in guest (the output will not be echoed unless
  # --debug is added)
  $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a

  # To build QEMU in guest
  $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC

  # To get to an interactive shell
  $ ./netbsd --interactive --image /var/tmp/netbsd.img sh

Adding new guests
~~~~~~~~~~~~~~~~~

Please look at existing guest scripts for how to add new guests.

Most importantly, create a subclass of BaseVM, implement the ``build_image()``
method and define ``BUILD_SCRIPT``, then finally call ``basevm.main()`` from
the script's ``main()`` (a rough sketch follows the list below).

* Usually in ``build_image()``, a template image is downloaded from a
  predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache and
  the checksum, so consider using it.

* Once the image is downloaded, users, SSH server and QEMU build deps should
  be set up:

  - Root password set to ``BaseVM.ROOT_PASS``
  - User ``BaseVM.GUEST_USER`` is created, and password set to
    ``BaseVM.GUEST_PASS``
  - SSH service is enabled and started on boot,
    ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
    file of both root and the normal user
  - DHCP client service is enabled and started on boot, so that it can
    automatically configure the virtio-net-pci NIC and communicate with QEMU
    user net (10.0.2.2)
  - Necessary packages are installed to untar the source tarball and build
    QEMU

* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script that
  untars the tarball of the QEMU source tree from a raw virtio-blk block
  device, then configures and builds it. Running "make check" is also
  recommended.

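The following is a rough, hypothetical sketch of the overall shape of such a
script. The guest name, download URL and image customization are placeholders;
the existing scripts under ``tests/vm/`` are the authoritative reference for
the exact helpers and ``BUILD_SCRIPT`` placeholders available:

.. code::

  #!/usr/bin/env python3
  #
  # Hypothetical example guest script: tests/vm/examplebsd

  import sys
  import basevm

  class ExampleBsdVM(basevm.BaseVM):
      name = "examplebsd"
      arch = "x86_64"
      BUILD_SCRIPT = """
          set -e;
          cd $(mktemp -d /home/qemu/qemu-test.XXXXXX);
          tar -xf /dev/vdb;
          ./configure {configure_opts};
          make --output-sync -j{jobs} {target} {verbose};
      """

      def build_image(self, img):
          # Download a template image (cached and checksummed by the helper),
          # then customize it: create the user, enable sshd, install the QEMU
          # build dependencies, and finally move it into place as `img`.
          cimg = self._download_with_cache(
              "https://example.org/images/examplebsd.img.xz")
          # ... decompress, resize and configure the image here ...
          raise NotImplementedError("placeholder only")

  if __name__ == "__main__":
      sys.exit(basevm.main(ExampleBsdVM))
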
Image fuzzer testing
--------------------

An image fuzzer was added to exercise format drivers. Currently only qcow2 is
supported. To start the fuzzer, run

.. code::

  tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2

Alternatively, a command other than ``qemu-img info`` can be tested by
changing the ``-c`` option.

Integration tests using the Avocado Framework
---------------------------------------------

The ``tests/avocado`` directory hosts integration tests. They're usually
higher level tests, and may interact with external resources and with
various guest operating systems.

These tests are written using the Avocado Testing Framework (which must
be installed separately) in conjunction with the ``avocado_qemu.Test``
class, implemented at ``tests/avocado/avocado_qemu``.

Tests based on ``avocado_qemu.Test`` can easily:

* Customize the command line arguments given to the convenience
  ``self.vm`` attribute (a QEMUMachine instance)

* Interact with the QEMU monitor, send QMP commands and check
  their results

* Interact with the guest OS, using the convenience console device
  (which may be useful to assert the effectiveness and correctness of
  command line arguments or QMP commands)

* Interact with external data files that accompany the test itself
  (see ``self.get_data()``)

* Download (and cache) remote data files, such as firmware and kernel
  images

* Have access to a library of guest OS images (by means of the
  ``avocado.utils.vmimage`` library)

* Make use of various other test related utilities available at the
  test class itself and at the utility library:

  - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
  - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html

Running tests
~~~~~~~~~~~~~

You can run the avocado tests simply by executing:

.. code::

  make check-avocado

This involves the automatic creation of a Python virtual environment
within the build tree (at ``tests/venv``) which will have all the
right dependencies, and will also save test results within the
build tree (at ``tests/results``).

Note: the build environment must be using a Python 3 stack, and have
the ``venv`` and ``pip`` packages installed. If necessary, make sure
``configure`` is called with ``--python=`` and that those modules are
available. On Debian and Ubuntu based systems, depending on the
specific version, they may be in packages named ``python3-venv`` and
``python3-pip``.

It is also possible to run tests based on tags using the
``make check-avocado`` command and the ``AVOCADO_TAGS`` environment
variable:

.. code::

  make check-avocado AVOCADO_TAGS=quick

Note that tags separated with commas have an AND behavior, while tags
separated by spaces have an OR behavior. For more information on Avocado
tags, see:

https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/tags.html

To run a single test file, a couple of them, or a test within a file
using the ``make check-avocado`` command, set the ``AVOCADO_TESTS``
environment variable with the test files or test names. To run all
tests from a single file, use:

.. code::

  make check-avocado AVOCADO_TESTS=$FILEPATH

The same is valid to run tests from multiple test files:

.. code::

  make check-avocado AVOCADO_TESTS='$FILEPATH1 $FILEPATH2'

To run a single test within a file, use:

.. code::

  make check-avocado AVOCADO_TESTS=$FILEPATH:$TESTCLASS.$TESTNAME

The same is valid to run single tests from multiple test files:

.. code::

  make check-avocado AVOCADO_TESTS='$FILEPATH1:$TESTCLASS1.$TESTNAME1 $FILEPATH2:$TESTCLASS2.$TESTNAME2'

The scripts installed inside the virtual environment may be used
without an "activation". For instance, the Avocado test runner
may be invoked by running:

.. code::

  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/avocado/

Note that if ``make check-avocado`` was not executed before, it is
possible to create the Python virtual environment with the dependencies
needed by running:

.. code::

  make check-venv

It is also possible to run tests from a single file or a single test within
a test file. To run tests from a single file within the build tree, use:

.. code::

  tests/venv/bin/avocado run tests/avocado/$TESTFILE

To run a single test within a test file, use:

.. code::

  tests/venv/bin/avocado run tests/avocado/$TESTFILE:$TESTCLASS.$TESTNAME

Valid test names are visible in the output from any previous execution
of Avocado or ``make check-avocado``, and can also be queried using:

.. code::

  tests/venv/bin/avocado list tests/avocado

Manual Installation
~~~~~~~~~~~~~~~~~~~

To manually install Avocado and its dependencies, run:

.. code::

  pip install --user avocado-framework

Alternatively, follow the instructions on this link:

https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/installing.html

Overview
~~~~~~~~

The ``tests/avocado/avocado_qemu`` directory provides the
``avocado_qemu`` Python module, containing the ``avocado_qemu.Test``
class. Here's a simple usage example:

.. code::

  from avocado_qemu import QemuSystemTest


  class Version(QemuSystemTest):
      """
      :avocado: tags=quick
      """
      def test_qmp_human_info_version(self):
          self.vm.launch()
          res = self.vm.command('human-monitor-command',
                                command_line='info version')
          self.assertRegexpMatches(res, r'^(\d+\.\d+\.\d)')

To execute your test, run:

.. code::

  avocado run version.py

Tests may be classified according to a convention by using docstring
directives such as ``:avocado: tags=TAG1,TAG2``. To run all tests
in the current directory, tagged as "quick", run:

.. code::

  avocado run -t quick .

The ``avocado_qemu.Test`` base test class
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``avocado_qemu.Test`` class has a number of characteristics that
are worth mentioning right away.

First of all, it attempts to give each test a ready to use QEMUMachine
instance, available at ``self.vm``. Because many tests will tweak the
QEMU command line, launching the QEMUMachine (by using ``self.vm.launch()``)
is left to the test writer.

The base test class also has support for tests with more than one
QEMUMachine. The way to get machines is through the ``self.get_vm()``
method, which will return a QEMUMachine instance. The ``self.get_vm()``
method accepts arguments that will be passed to the QEMUMachine creation,
and also an optional ``name`` attribute so you can identify a specific
machine and get it more than once through the test's methods. A simple
and hypothetical example follows:

.. code::

  from avocado_qemu import QemuSystemTest


  class MultipleMachines(QemuSystemTest):
      def test_multiple_machines(self):
          first_machine = self.get_vm()
          second_machine = self.get_vm()
          self.get_vm(name='third_machine').launch()

          first_machine.launch()
          second_machine.launch()

          first_res = first_machine.command(
              'human-monitor-command',
              command_line='info version')

          second_res = second_machine.command(
              'human-monitor-command',
              command_line='info version')

          third_res = self.get_vm(name='third_machine').command(
              'human-monitor-command',
              command_line='info version')

          self.assertEquals(first_res, second_res, third_res)

At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all
QEMUMachines.

The ``avocado_qemu.LinuxTest`` base test class
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``avocado_qemu.LinuxTest`` is a further specialization of the
``avocado_qemu.Test`` class, so it contains all the characteristics of
the latter plus some extra features.

First of all, this base class is intended for tests that need to
interact with a fully booted and operational Linux guest. At this
time, it uses a Fedora 31 guest image. The most basic example looks
like this:

.. code::

  from avocado_qemu import LinuxTest


  class SomeTest(LinuxTest):

      def test(self):
          self.launch_and_wait()
          self.ssh_command('some_command_to_be_run_in_the_guest')

Please refer to tests that use ``avocado_qemu.LinuxTest`` under
``tests/avocado`` for more examples.

QEMUMachine
~~~~~~~~~~~

The QEMUMachine API is already widely used in the Python iotests,
device-crash-test and other Python scripts. It's a wrapper around the
execution of a QEMU binary, giving its users the following (a short usage
sketch follows the list):

* the ability to set command line arguments to be given to the QEMU
  binary

* a ready to use QMP connection and interface, which can be used to
  send commands and inspect its results, as well as asynchronous
  events

* convenience methods to set commonly used command line arguments in
  a more succinct and intuitive way

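As an illustration, here is a minimal sketch of direct QEMUMachine usage. It
assumes the in-tree ``python/qemu`` package is importable (for example by
adjusting ``PYTHONPATH``) and that the caller supplies the path to a QEMU
binary; it is not a complete script from the tree:

.. code::

  from qemu.machine import QEMUMachine

  def query_qemu_version(qemu_bin):
      vm = QEMUMachine(qemu_bin)
      # Extra command line arguments for the QEMU binary
      vm.add_args('-display', 'none')
      vm.launch()
      try:
          # Send a QMP command and return its result
          return vm.command('query-version')
      finally:
          vm.shutdown()
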
QEMU binary selection
^^^^^^^^^^^^^^^^^^^^^

The QEMU binary used for the ``self.vm`` QEMUMachine instance will
primarily depend on the value of the ``qemu_bin`` parameter. If it's
not explicitly set, its default value will be the result of a dynamic
probe in the same source tree. A suitable binary will be one that
targets the architecture matching the host machine.

Based on this description, test writers will usually rely on one of
the following approaches:

1) Set ``qemu_bin``, and use the given binary

2) Do not set ``qemu_bin``, and use a QEMU binary named like
   "qemu-system-${arch}", either in the current
   working directory, or in the current source tree.

The resulting ``qemu_bin`` value will be preserved in the
``avocado_qemu.Test`` as an attribute with the same name.

Attribute reference
~~~~~~~~~~~~~~~~~~~

Test
^^^^

Besides the attributes and methods that are part of the base
``avocado.Test`` class, the following attributes are available on any
``avocado_qemu.Test`` instance.

vm
''

A QEMUMachine instance, initially configured according to the given
``qemu_bin`` parameter.

arch
''''

The architecture can be used on different levels of the stack, e.g. by
the framework or by the test itself. At the framework level, it will
currently influence the selection of a QEMU binary (when one is not
explicitly given).

Tests are also free to use this attribute value, for their own needs.
A test may, for instance, use the same value when selecting the
architecture of a kernel or disk image to boot a VM with.

The ``arch`` attribute will be set to the test parameter of the same
name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=arch:VALUE`` tag, it will be set to ``VALUE``.

cpu
'''

The cpu model that will be set to all QEMUMachine instances created
by the test.

The ``cpu`` attribute will be set to the test parameter of the same
name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=cpu:VALUE`` tag, it will be set to ``VALUE``.

machine
'''''''

The machine type that will be set to all QEMUMachine instances created
by the test.

The ``machine`` attribute will be set to the test parameter of the same
name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.

qemu_bin
''''''''

The preserved value of the ``qemu_bin`` parameter or the result of the
dynamic probe for a QEMU binary in the current working directory or
source tree.

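To illustrate how the ``arch``, ``cpu`` and ``machine`` attributes are
typically set through tags, here is a hedged, hypothetical sketch (the actual
tests under ``tests/avocado`` are the definitive reference for the tag values
in use):

.. code::

  from avocado_qemu import QemuSystemTest


  class Aarch64VirtExample(QemuSystemTest):
      """
      :avocado: tags=arch:aarch64
      :avocado: tags=machine:virt
      :avocado: tags=cpu:cortex-a57
      """
      def test_boot(self):
          # arch selects the qemu-system-aarch64 binary; machine and cpu
          # are applied to self.vm by the base class, as described above
          self.vm.add_args('-nographic')
          self.vm.launch()
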
LinuxTest
^^^^^^^^^

Besides the attributes present on the ``avocado_qemu.Test`` base
class, the ``avocado_qemu.LinuxTest`` adds the following attributes:

distro
''''''

The name of the Linux distribution used as the guest image for the
test. The name should match the **Provider** column on the list
of images supported by the avocado.utils.vmimage library:

https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images

distro_version
''''''''''''''

The version of the Linux distribution used as the guest image for the
test. The name should match the **Version** column on the list
of images supported by the avocado.utils.vmimage library:

https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images

distro_checksum
'''''''''''''''

The sha256 hash of the guest image file used for the test.

If this value is not set in the code or by a test parameter (with the
same name), no validation on the integrity of the image will be
performed.

Parameter reference
~~~~~~~~~~~~~~~~~~~

To understand how Avocado parameters are accessed by tests, and how
they can be passed to tests, please refer to::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#accessing-test-parameters

Parameter values can be easily seen in the log files, and will look
like the following:

.. code::

  PARAMS (key=qemu_bin, path=*, default=./qemu-system-x86_64) => './qemu-system-x86_64'

Test
^^^^

arch
''''

The architecture that will influence the selection of a QEMU binary
(when one is not explicitly given).

Tests are also free to use this parameter value, for their own needs.
A test may, for instance, use the same value when selecting the
architecture of a kernel or disk image to boot a VM with.

This parameter has a direct relation with the ``arch`` attribute. If
not given, it will default to None.

cpu
'''

The cpu model that will be set to all QEMUMachine instances created
by the test.

machine
'''''''

The machine type that will be set to all QEMUMachine instances created
by the test.

qemu_bin
''''''''

The exact QEMU binary to be used on QEMUMachine.

LinuxTest
^^^^^^^^^

Besides the parameters present on the ``avocado_qemu.Test`` base
class, the ``avocado_qemu.LinuxTest`` adds the following parameters:

distro
''''''

The name of the Linux distribution used as the guest image for the
test. The name should match the **Provider** column on the list
of images supported by the avocado.utils.vmimage library:

https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images

distro_version
''''''''''''''

The version of the Linux distribution used as the guest image for the
test. The name should match the **Version** column on the list
of images supported by the avocado.utils.vmimage library:

https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images

distro_checksum
'''''''''''''''

The sha256 hash of the guest image file used for the test.

If this value is not set in the code or by this parameter, no
validation on the integrity of the image will be performed.

Skipping tests
~~~~~~~~~~~~~~

The Avocado framework provides Python decorators which allow for easily
skipping tests under certain conditions, for example, when a required binary
is missing on the test system or when the running environment is a CI system.
For further information about those decorators, please refer to::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#skipping-tests

While the conditions for skipping tests are often specific to each one, there
are recurring scenarios identified by the QEMU developers, and the use of
environment variables has become a kind of standard way to enable/disable
tests.

Here is a list of the most used variables:

AVOCADO_ALLOW_LARGE_STORAGE
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Tests which are going to fetch or produce assets considered *large* are not
going to run unless ``AVOCADO_ALLOW_LARGE_STORAGE=1`` is exported in the
environment.

The definition of *large* is a bit arbitrary here, but it usually means an
asset which occupies at least 1GB on disk when uncompressed.

AVOCADO_ALLOW_UNTRUSTED_CODE
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are tests which will boot a kernel image or firmware that can be
considered not safe to run on the developer's workstation, thus they are
skipped by default. The definition of *not safe* is also arbitrary, but
usually it means a blob whose source or build process is not publicly
available.

You should export ``AVOCADO_ALLOW_UNTRUSTED_CODE=1`` in the environment in
order to allow tests which make use of such assets.

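As an illustration, a test that consumes such a blob would typically be guarded
with a decorator along these lines (a sketch, assuming ``os`` is imported and
``skipUnless`` is imported from ``avocado``; ``do_something()`` is a
placeholder):

.. code::

  @skipUnless(os.getenv('AVOCADO_ALLOW_UNTRUSTED_CODE'), 'untrusted code')
  def test(self):
      do_something()
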
AVOCADO_TIMEOUT_EXPECTED
^^^^^^^^^^^^^^^^^^^^^^^^

The Avocado framework has a timeout mechanism which interrupts tests to avoid
the test suite getting stuck. The timeout value can be set via a test parameter
or a property defined in the test class; for further details, see::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#setting-a-test-timeout

Even though the timeout can be set by the test developer, there are some tests
that may not have a well-defined limit of time to finish under certain
conditions. For example, tests that take longer to execute when QEMU is
compiled with debug flags. Therefore, the ``AVOCADO_TIMEOUT_EXPECTED`` variable
has been used to determine whether those tests should run or not.

GITLAB_CI
^^^^^^^^^

A number of tests are flagged to not run on the GitLab CI. Usually because
they proved to be flaky or there are constraints on the CI environment which
would make them fail. If you encounter a similar situation then use that
variable as shown in the code snippet below to skip the test:

.. code::

  @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
  def test(self):
      do_something()

Uninstalling Avocado
~~~~~~~~~~~~~~~~~~~~

If you've followed the manual installation instructions above, you can
easily uninstall Avocado. Start by listing the packages you have
installed::

  pip list --user

And remove any package you want with::

  pip uninstall <package_name>

If you've used ``make check-avocado``, the Python virtual environment where
Avocado is installed will be cleaned up as part of ``make check-clean``.

.. _checktcg-ref:

Testing with "make check-tcg"
-----------------------------

The check-tcg tests are intended for simple smoke tests of both
linux-user and softmmu TCG functionality. However to build test
programs for guest targets you need to have cross compilers available.
If your distribution supports cross compilers you can do something as
simple as::

  apt install gcc-aarch64-linux-gnu

The configure script will automatically pick up their presence.
Sometimes compilers have slightly odd names, so their availability can
be indicated by passing in the appropriate configure option for the
architecture in question, for example::

  $(configure) --cross-cc-aarch64=aarch64-cc

There is also a ``--cross-cc-flags-ARCH`` flag in case additional
compiler flags are needed to build for a given target.

If you have the ability to run containers as the current user, the build
system will automatically use them where no system compiler is available. For
architectures where we also support building QEMU we will generally
use the same container to build tests. However there are a number of
additional containers defined that have a minimal cross-build
environment that is only suitable for building test cases. Sometimes
we may use a bleeding edge distribution for compiler features needed
for test cases that aren't yet in the LTS distros we support for QEMU
itself.

See :ref:`container-ref` for more details.

Running a subset of tests
~~~~~~~~~~~~~~~~~~~~~~~~~

You can build the tests for one architecture::

  make build-tcg-tests-$TARGET

And run with::

  make run-tcg-tests-$TARGET

Adding ``V=1`` to the invocation will show the details of how to
invoke QEMU for the test, which is useful for debugging tests.

TCG test dependencies
~~~~~~~~~~~~~~~~~~~~~

The TCG tests are deliberately very light on dependencies and are
either totally bare with minimal gcc lib support (for softmmu tests)
or just glibc (for linux-user tests). This is because getting a cross
compiler to work with additional libraries can be challenging.

Other TCG Tests
---------------

There are a number of out-of-tree test suites that are used for more
extensive testing of processor features.

KVM Unit Tests
~~~~~~~~~~~~~~

The KVM unit tests are designed to run as a Guest OS under KVM but
there is no reason why they can't exercise the TCG as well. It
provides a minimal OS kernel with hooks for enabling the MMU as well
as reporting test results via a special device::

  https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git

Linux Test Project
~~~~~~~~~~~~~~~~~~

The LTP is focused on exercising the syscall interface of a Linux
kernel. It checks that syscalls behave as documented and strives to
exercise as many corner cases as possible. It is a useful test suite
to run to exercise QEMU's linux-user code::

  https://linux-test-project.github.io/

GCC gcov support
----------------

``gcov`` is a GCC tool to analyze the testing coverage by
instrumenting the tested code. To use it, configure QEMU with the
``--enable-gcov`` option and build. Then run the tests as usual.

If you want to gather coverage information on a single test, the ``make
clean-gcda`` target can be used to delete any existing coverage
information before running that test.

You can generate an HTML coverage report by executing ``make
coverage-html``, which will create
``meson-logs/coveragereport/index.html``.

Further analysis can be conducted by running the ``gcov`` command
directly on the various .gcda output files. Please read the ``gcov``
documentation for more information.