# ZFS Test Suite README

1) Building and installing the ZFS Test Suite

The ZFS Test Suite runs under the test-runner framework. This framework
is built alongside the standard ZFS utilities and is included as part of
the zfs-test package. The zfs-test package can be built from source as
follows:

    $ ./configure
    $ make pkg-utils

The resulting packages can be installed using the rpm or dpkg command as
appropriate for your distribution. Alternatively, if you have installed
ZFS from a distribution's repository (not from source), the zfs-test
package may be provided for your distribution.

  - Installed from source
    $ rpm -ivh ./zfs-test*.rpm, or
    $ dpkg -i ./zfs-test*.deb,

  - Installed from package repository
    $ yum install zfs-test
    $ apt-get install zfs-test

2) Running the ZFS Test Suite

The prerequisites for running the ZFS Test Suite are:

  * Three scratch disks
    * Specify the disks you wish to use in the $DISKS variable, as a
      space delimited list like this: DISKS='vdb vdc vdd'. By default
      the zfs-tests.sh script will construct three loopback devices to
      be used for testing: DISKS='loop0 loop1 loop2'.
  * A non-root user with a full set of basic privileges and the ability
    to sudo(8) to root without a password to run the test.
  * Specify any pools you wish to preserve as a space delimited list in
    the $KEEP variable (see the example below). All pools detected at
    the start of testing are added automatically.
  * The ZFS Test Suite will add users and groups to the test machine to
    verify functionality. Therefore it is strongly advised that a
    dedicated test machine, which can be a VM, be used for testing.

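As a minimal sketch, the $DISKS and $KEEP variables described above can be
exported in the shell before invoking the wrapper script. The device names
and pool name below are placeholders; substitute your own:

    $ export DISKS='vdb vdc vdd'   # scratch disks dedicated to testing
    $ export KEEP='tank'           # existing pool(s) that must survive the run
    $ /usr/share/zfs/zfs-tests.sh
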
Once the prerequisites are satisfied, simply run the zfs-tests.sh script:

    $ /usr/share/zfs/zfs-tests.sh

Alternatively, the zfs-tests.sh script can be run from the source tree to
allow developers to rapidly validate their work. In this mode the ZFS
utilities and modules from the source tree will be used (rather than those
installed on the system). In order to avoid certain types of failures you
will need to ensure the ZFS udev rules are installed. This can be done
manually or by ensuring some version of ZFS is installed on the system.

    $ ./scripts/zfs-tests.sh

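If no ZFS package is installed, one possible way to install the udev rules
manually from the source tree is sketched below. This assumes ./configure has
already generated the rules files; the rules directory (/etc/udev/rules.d
here) may differ between distributions, and the rules reference helper
programs such as zvol_id which are normally provided by an installed ZFS
package.

    $ sudo cp udev/rules.d/*.rules /etc/udev/rules.d/
    $ sudo udevadm control --reload-rules
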
The following zfs-tests.sh options are supported:

  -v          Verbose zfs-tests.sh output. When specified, additional
              information describing the test environment will be logged
              prior to invoking test-runner. This includes the runfile
              being used, the DISKS targeted, pools to keep, etc.

  -q          Quiet test-runner output. When specified, it is passed to
              test-runner(1), which causes output to be written to the
              console only for tests that do not pass, plus the results
              summary.

  -x          Remove all testpools, dm, lo, and files (unsafe). When
              specified the script will attempt to remove any leftover
              configuration from a previous test run. This includes
              destroying any pools named testpool, unused DM devices,
              and loopback devices backed by file-vdevs. This operation
              can be DANGEROUS because it is possible that the script
              will mistakenly remove a resource not related to the testing.

  -k          Disable cleanup after test failure. When specified the
              zfs-tests.sh script will not perform any additional cleanup
              when test-runner exits. This is useful when the results of
              a specific test need to be preserved for further analysis.

  -f          Use sparse files directly instead of loopback devices for
              the testing. When running in this mode certain tests which
              depend on real block devices will be skipped.

  -d DIR      Create sparse files for vdevs in the DIR directory. By
              default these files are created under /var/tmp/.

  -s SIZE     Use vdevs of SIZE (default: 2G)

  -r RUNFILE  Run tests in RUNFILE (default: linux.run)

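These options can be combined. For example, the following invocation logs the
test environment, removes any leftover state from a previous run (remember
that -x can be destructive), and backs the test pools with 4G sparse files
created under /mnt:

    $ /usr/share/zfs/zfs-tests.sh -v -x -d /mnt -s 4G
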
The ZFS Test Suite allows the user to specify a subset of the tests via a
runfile. The format of the runfile is explained in test-runner(1), and
the files that zfs-tests.sh uses are available for reference under
/usr/share/zfs/runfiles. To specify a custom runfile, use the -r option:

    $ /usr/share/zfs/zfs-tests.sh -r my_tests.run

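One convenient way to build such a runfile, assuming the installed runfiles
are a reasonable starting point, is to copy one and trim it down to the test
groups of interest. Depending on the zfs-tests.sh version, the runfile may
need to be passed by path or placed in /usr/share/zfs/runfiles:

    $ cp /usr/share/zfs/runfiles/linux.run my_tests.run
    $ vi my_tests.run    # keep only the desired [tests/...] sections
    $ /usr/share/zfs/zfs-tests.sh -r my_tests.run
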
3) Test results

While the ZFS Test Suite is running, one informational line is printed at the
end of each test, and a results summary is printed at the end of the run. The
results summary includes the location of the complete logs, which is logged in
the form /var/tmp/test_results/[ISO 8601 date]. A normal test run launched
with the `zfs-tests.sh` wrapper script will look something like this:

$ /usr/share/zfs/zfs-tests.sh -v -d /mnt

--- Configuration ---
Runfile: /usr/share/zfs/runfiles/linux.run
STF_TOOLS: /usr/share/zfs/test-runner
STF_SUITE: /usr/share/zfs/zfs-tests
FILEDIR: /mnt
FILES: /mnt/file-vdev0 /mnt/file-vdev1 /mnt/file-vdev2
LOOPBACKS: /dev/loop0 /dev/loop1 /dev/loop2
DISKS: loop0 loop1 loop2
NUM_DISKS: 3
FILESIZE: 2G
Keep pool(s): rpool

/usr/share/zfs/test-runner/bin/test-runner.py -c \
    /usr/share/zfs/runfiles/linux.run -i /usr/share/zfs/zfs-tests
Test: .../tests/functional/acl/posix/setup (run as root) [00:00] [PASS]
...470 additional tests...
Test: .../tests/functional/zvol/zvol_cli/cleanup (run as root) [00:00] [PASS]

Results Summary
PASS     472

Running Time:   00:45:09
Percent passed: 100.0%
Log directory:  /var/tmp/test_results/20160316T181651
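
After the run completes, the per-test logs can be reviewed under the log
directory reported in the results summary, for example:

    $ ls /var/tmp/test_results/20160316T181651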