## Controlling How Tests Are Run

Just as `cargo run` compiles your code and then runs the resulting binary,
`cargo test` compiles your code in test mode and runs the resulting test
binary. You can specify command line options to change the default behavior of
`cargo test`. For example, the default behavior of the binary produced by
`cargo test` is to run all the tests in parallel and capture output generated
during test runs, preventing the output from being displayed and making it
easier to read the output related to the test results.

Some command line options go to `cargo test`, and some go to the resulting test
binary. To separate these two types of arguments, you list the arguments that
go to `cargo test` followed by the separator `--` and then the ones that go to
the test binary. Running `cargo test --help` displays the options you can use
with `cargo test`, and running `cargo test -- --help` displays the options you
can use after the separator `--`.
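
For quick reference, those two help invocations are:

```text
$ cargo test --help
$ cargo test -- --help
```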

### Running Tests in Parallel or Consecutively

When you run multiple tests, by default they run in parallel using threads.
This means the tests finish running faster, so you get feedback more quickly on
whether your code is working. Because the tests are running at the same time,
make sure your tests don’t depend on each other or on any shared state,
including a shared environment, such as the current working directory or
environment variables.

For example, say each of your tests runs some code that creates a file on disk
named *test-output.txt* and writes some data to that file. Then each test reads
the data in that file and asserts that the file contains a particular value,
which is different in each test. Because the tests run at the same time, one
test might overwrite the file between when another test writes and reads the
file. The second test will then fail, not because the code is incorrect but
because the tests have interfered with each other while running in parallel.
One solution is to make sure each test writes to a different file; another
solution is to run the tests one at a time.
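
As a sketch of the first solution, each test can write to a file with a unique
name, so parallel runs can’t clobber one another. The helper and file names
here are illustrative, not from the book:

```rust
use std::fs;

// Hypothetical helper: write `data` to `path`, then read it back.
fn write_and_read(path: &str, data: &str) -> String {
    fs::write(path, data).expect("write failed");
    fs::read_to_string(path).expect("read failed")
}

#[cfg(test)]
mod tests {
    use super::*;

    // Each test uses a distinct file name, so even when the tests run
    // in parallel they never touch the same file.
    #[test]
    fn writes_alpha() {
        assert_eq!(write_and_read("test-output-alpha.txt", "alpha"), "alpha");
    }

    #[test]
    fn writes_beta() {
        assert_eq!(write_and_read("test-output-beta.txt", "beta"), "beta");
    }
}
```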

If you don’t want to run the tests in parallel or if you want more fine-grained
control over the number of threads used, you can send the `--test-threads` flag
and the number of threads you want to use to the test binary. Take a look at
the following example:

```text
$ cargo test -- --test-threads=1
```

We set the number of test threads to `1`, telling the program not to use any
parallelism. Running the tests using one thread will take longer than running
them in parallel, but the tests won’t interfere with each other if they share
state.

### Showing Function Output

By default, if a test passes, Rust’s test library captures anything printed to
standard output. For example, if we call `println!` in a test and the test
passes, we won’t see the `println!` output in the terminal; we’ll see only the
line that indicates the test passed. If a test fails, we’ll see whatever was
printed to standard output with the rest of the failure message.

As an example, Listing 11-10 has a silly function that prints the value of its
parameter and returns 10, as well as a test that passes and a test that fails.

<span class="filename">Filename: src/lib.rs</span>

```rust,panics
fn prints_and_returns_10(a: i32) -> i32 {
    println!("I got the value {}", a);
    10
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn this_test_will_pass() {
        let value = prints_and_returns_10(4);
        assert_eq!(10, value);
    }

    #[test]
    fn this_test_will_fail() {
        let value = prints_and_returns_10(8);
        assert_eq!(5, value);
    }
}
```

<span class="caption">Listing 11-10: Tests for a function that calls
`println!`</span>

When we run these tests with `cargo test`, we’ll see the following output:

```text
running 2 tests
test tests::this_test_will_pass ... ok
test tests::this_test_will_fail ... FAILED

failures:

---- tests::this_test_will_fail stdout ----
I got the value 8
thread 'tests::this_test_will_fail' panicked at 'assertion failed: `(left == right)`
  left: `5`,
 right: `10`', src/lib.rs:19:8
note: Run with `RUST_BACKTRACE=1` for a backtrace.

failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
```

Note that nowhere in this output do we see `I got the value 4`, which is what
is printed when the test that passes runs. That output has been captured. The
output from the test that failed, `I got the value 8`, appears in the section
of the test summary output, which also shows the cause of the test failure.

If we want to see printed values for passing tests as well, we can disable the
output capture behavior by using the `--nocapture` flag:

```text
$ cargo test -- --nocapture
```

When we run the tests in Listing 11-10 again with the `--nocapture` flag, we
see the following output:

```text
running 2 tests
I got the value 4
I got the value 8
test tests::this_test_will_pass ... ok
thread 'tests::this_test_will_fail' panicked at 'assertion failed: `(left == right)`
  left: `5`,
 right: `10`', src/lib.rs:19:8
note: Run with `RUST_BACKTRACE=1` for a backtrace.
test tests::this_test_will_fail ... FAILED

failures:

failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
```

Note that the output for the tests and the test results are interleaved; the
reason is that the tests are running in parallel, as we talked about in the
previous section. Try using the `--test-threads=1` option and the `--nocapture`
flag, and see what the output looks like then!
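
Combining the two flags gives a single-threaded run with all output shown:

```text
$ cargo test -- --test-threads=1 --nocapture
```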

### Running a Subset of Tests by Name

Sometimes, running a full test suite can take a long time. If you’re working on
code in a particular area, you might want to run only the tests pertaining to
that code. You can choose which tests to run by passing `cargo test` the name
of the test you want to run as an argument.

To demonstrate how to run a subset of tests, we’ll create three tests for our
`add_two` function, as shown in Listing 11-11, and choose which ones to run:

<span class="filename">Filename: src/lib.rs</span>

```rust
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn add_two_and_two() {
        assert_eq!(4, add_two(2));
    }

    #[test]
    fn add_three_and_two() {
        assert_eq!(5, add_two(3));
    }

    #[test]
    fn one_hundred() {
        assert_eq!(102, add_two(100));
    }
}
```

<span class="caption">Listing 11-11: Three tests with three different
names</span>

If we run the tests without passing any arguments, as we saw earlier, all the
tests will run in parallel:

```text
running 3 tests
test tests::add_two_and_two ... ok
test tests::add_three_and_two ... ok
test tests::one_hundred ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```

#### Running Single Tests

We can pass the name of any test function to `cargo test` to run only that test:

```text
$ cargo test one_hundred
    Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
     Running target/debug/deps/adder-06a75b4a1f2515e9

running 1 test
test tests::one_hundred ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out
```

Only the test with the name `one_hundred` ran; the other two tests didn’t match
that name. The test output lets us know we had more tests than what this
command ran by displaying `2 filtered out` at the end of the summary line.

We can’t specify the names of multiple tests in this way; only the first value
given to `cargo test` will be used. But there is a way to run multiple tests.

#### Filtering to Run Multiple Tests

We can specify part of a test name, and any test whose name matches that value
will be run. For example, because two of our tests’ names contain `add`, we can
run those two by running `cargo test add`:

```text
$ cargo test add
    Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
     Running target/debug/deps/adder-06a75b4a1f2515e9

running 2 tests
test tests::add_two_and_two ... ok
test tests::add_three_and_two ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out
```

This command ran all tests with `add` in the name and filtered out the test
named `one_hundred`. Also note that the module in which tests appear becomes
part of the test’s name, so we can run all the tests in a module by filtering
on the module’s name.
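
For example, because all three tests in Listing 11-11 live in the `tests`
module, filtering on `tests::` runs all of them:

```text
$ cargo test tests::
```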

### Ignoring Some Tests Unless Specifically Requested

Sometimes a few specific tests can be very time-consuming to execute, so you
might want to exclude them during most runs of `cargo test`. Rather than
listing as arguments all tests you do want to run, you can instead annotate the
time-consuming tests using the `ignore` attribute to exclude them, as shown
here:

<span class="filename">Filename: src/lib.rs</span>

```rust
#[test]
fn it_works() {
    assert_eq!(2 + 2, 4);
}

#[test]
#[ignore]
fn expensive_test() {
    // code that takes an hour to run
}
```

After `#[test]` we add the `#[ignore]` line to the test we want to exclude. Now
when we run our tests, `it_works` runs, but `expensive_test` doesn’t:

```text
$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished dev [unoptimized + debuginfo] target(s) in 0.24 secs
     Running target/debug/deps/adder-ce99bcc2479f4607

running 2 tests
test expensive_test ... ignored
test it_works ... ok

test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out
```

The `expensive_test` function is listed as `ignored`. If we want to run only
the ignored tests, we can use `cargo test -- --ignored`:

```text
$ cargo test -- --ignored
    Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
     Running target/debug/deps/adder-ce99bcc2479f4607

running 1 test
test expensive_test ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out
```

By controlling which tests run, you can make sure your `cargo test` results
will be fast. When you’re at a point where it makes sense to check the results
of the `ignored` tests and you have time to wait for the results, you can run
`cargo test -- --ignored` instead.