## Controlling How Tests Are Run

Just as `cargo run` compiles your code and then runs the resulting binary,
`cargo test` compiles your code in test mode and runs the resulting test
binary. There are options you can use to change the default behavior of `cargo
test`. For example, by default the binary produced by `cargo test` runs all the
tests in parallel and captures the output generated during test runs. Capturing
the output prevents it from being displayed, which makes the output related to
the test results easier to read. You can change this default behavior by
specifying command line options.

Some command line options can be passed to `cargo test`, and some need to be
passed instead to the resulting test binary. To separate these two types of
arguments, you list the arguments that go to `cargo test`, then the separator
`--`, and then the arguments that go to the test binary. Running `cargo test
--help` will tell you about the options that go with `cargo test`, and running
`cargo test -- --help` will tell you about the options that go after the
separator `--`.
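
The two kinds of arguments can be combined in a single invocation. For
example, a command along these lines passes `--release` to `cargo test` itself
and `--test-threads=1` to the test binary:

```text
$ cargo test --release -- --test-threads=1
```

Everything before the `--` is interpreted by Cargo; everything after it is
forwarded untouched to the compiled test binary.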

### Running Tests in Parallel or Consecutively

When multiple tests are run, by default they run in parallel using threads.
This means the tests will finish running faster, so you can get faster
feedback on whether or not your code is working. Because the tests are running
at the same time, you should take care that your tests do not depend on each
other or on any shared state, including a shared environment such as the
current working directory or environment variables.

For example, say each of your tests runs some code that creates a file on disk
named `test-output.txt` and writes some data to that file. Then each test reads
the data in that file and asserts that the file contains a particular value,
which is different in each test. Because the tests all run at the same time,
one test might overwrite the file between when another test writes and reads
it. The second test will then fail, not because the code is incorrect, but
because the tests have interfered with each other while running in parallel.
One solution is to make sure each test writes to a different file; another is
to run the tests one at a time.
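
One way to keep such tests independent is to give each test its own file, for
example by deriving the filename from the test's name. Here is a minimal
sketch of that idea; the `unique_output_path` helper and the use of the system
temporary directory are our own illustrative choices, not something defined in
this chapter:

```rust
use std::env;
use std::fs;
use std::path::PathBuf;

// Build a per-test output path so parallel tests never share a file.
// `test_name` is assumed to be unique per test (e.g. the function's name).
fn unique_output_path(test_name: &str) -> PathBuf {
    env::temp_dir().join(format!("test-output-{}.txt", test_name))
}

fn main() {
    // Each "test" writes to and reads from only its own file.
    let path_a = unique_output_path("this_test_will_pass");
    let path_b = unique_output_path("this_test_will_fail");
    assert_ne!(path_a, path_b); // no shared state between the two tests

    fs::write(&path_a, "data for the first test").unwrap();
    fs::write(&path_b, "data for the second test").unwrap();
    assert_eq!(fs::read_to_string(&path_a).unwrap(), "data for the first test");
    assert_eq!(fs::read_to_string(&path_b).unwrap(), "data for the second test");

    // Clean up the temporary files.
    fs::remove_file(&path_a).unwrap();
    fs::remove_file(&path_b).unwrap();
}
```

In a real test suite, each `#[test]` function would call the helper with its
own name, so the tests can safely run on separate threads.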

If you don't want to run the tests in parallel, or if you want more
fine-grained control over the number of threads used, you can send the
`--test-threads` flag and the number of threads you want to use to the test
binary. For example:

```text
$ cargo test -- --test-threads=1
```

We set the number of test threads to 1, telling the program not to use any
parallelism. This will take longer than running the tests in parallel, but
the tests won't interfere with each other if they share state.

### Showing Function Output

By default, if a test passes, Rust's test library captures anything printed to
standard output. For example, if we call `println!` in a test and the test
passes, we won't see the `println!` output in the terminal: we'll only see the
line that says the test passed. If a test fails, we'll see whatever was printed
to standard output with the rest of the failure message.

For example, Listing 11-10 has a silly function that prints out the value of
its parameter and then returns 10. We then have a test that passes and a test
that fails:

<span class="filename">Filename: src/lib.rs</span>

```rust
fn prints_and_returns_10(a: i32) -> i32 {
    println!("I got the value {}", a);
    10
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn this_test_will_pass() {
        let value = prints_and_returns_10(4);
        assert_eq!(10, value);
    }

    #[test]
    fn this_test_will_fail() {
        let value = prints_and_returns_10(8);
        assert_eq!(5, value);
    }
}
```

<span class="caption">Listing 11-10: Tests for a function that calls `println!`
</span>

The output we'll see when we run these tests with `cargo test` is:

```text
running 2 tests
test tests::this_test_will_pass ... ok
test tests::this_test_will_fail ... FAILED

failures:

---- tests::this_test_will_fail stdout ----
    I got the value 8
thread 'tests::this_test_will_fail' panicked at 'assertion failed: `(left ==
right)` (left: `5`, right: `10`)', src/lib.rs:19
note: Run with `RUST_BACKTRACE=1` for a backtrace.

failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured
```

Note that nowhere in this output do we see `I got the value 4`, which is what
gets printed when the test that passes runs. That output has been captured.
The output from the test that failed, `I got the value 8`, appears in the
section of the test summary output that also shows the cause of the test
failure.

If we want to see printed values for passing tests as well, we can disable the
output capture behavior by using the `--nocapture` flag:

```text
$ cargo test -- --nocapture
```

Running the tests from Listing 11-10 again with the `--nocapture` flag now
shows:

```text
running 2 tests
I got the value 4
I got the value 8
test tests::this_test_will_pass ... ok
thread 'tests::this_test_will_fail' panicked at 'assertion failed: `(left ==
right)` (left: `5`, right: `10`)', src/lib.rs:19
note: Run with `RUST_BACKTRACE=1` for a backtrace.
test tests::this_test_will_fail ... FAILED

failures:

failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured
```

Note that the output for the tests and the test results is interleaved; this
is because the tests are running in parallel, as we talked about in the
previous section. Try using both the `--test-threads=1` option and the
`--nocapture` flag and see what the output looks like then!

### Running a Subset of Tests by Name

Sometimes, running a full test suite can take a long time. If you're working on
code in a particular area, you might want to run only the tests pertaining to
that code. You can choose which tests to run by passing `cargo test` the name
or names of the tests you want to run as an argument.

To demonstrate how to run a subset of tests, we'll create three tests for our
`add_two` function as shown in Listing 11-11 and choose which ones to run:

<span class="filename">Filename: src/lib.rs</span>

```rust
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn add_two_and_two() {
        assert_eq!(4, add_two(2));
    }

    #[test]
    fn add_three_and_two() {
        assert_eq!(5, add_two(3));
    }

    #[test]
    fn one_hundred() {
        assert_eq!(102, add_two(100));
    }
}
```

<span class="caption">Listing 11-11: Three tests with a variety of names</span>

If we run the tests without passing any arguments, as we've already seen, all
the tests will run in parallel:

```text
running 3 tests
test tests::add_two_and_two ... ok
test tests::add_three_and_two ... ok
test tests::one_hundred ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured
```

#### Running Single Tests

We can pass the name of any test function to `cargo test` to run only that test:

```text
$ cargo test one_hundred
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running target/debug/deps/adder-06a75b4a1f2515e9

running 1 test
test tests::one_hundred ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
```

We can't specify the names of multiple tests in this way; only the first value
given to `cargo test` will be used.

#### Filtering to Run Multiple Tests

However, we can specify part of a test name, and any test whose name matches
that value will be run. For example, because two of our tests' names contain
`add`, we can run those two by running `cargo test add`:

```text
$ cargo test add
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running target/debug/deps/adder-06a75b4a1f2515e9

running 2 tests
test tests::add_two_and_two ... ok
test tests::add_three_and_two ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured
```

This ran all tests with `add` in the name. Also note that the module in which
tests appear becomes part of the test's name, so we can run all the tests in a
module by filtering on the module's name.
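
For instance, all three tests in Listing 11-11 live in a module named `tests`,
so each test's full name starts with `tests::` and filtering on the module
name should run all three (the exact output may differ in ordering and in the
binary's hash):

```text
$ cargo test tests
```

Any substring of a test's full name works as a filter, which is why `tests`
matches `tests::one_hundred` just as `add` matched the two `add_*` tests.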
### Ignoring Some Tests Unless Specifically Requested

Sometimes a few specific tests can be very time-consuming to execute, so you
might want to exclude them during most runs of `cargo test`. Rather than
listing as arguments all the tests you do want to run, you can instead annotate
the time-consuming tests with the `ignore` attribute to exclude them:

<span class="filename">Filename: src/lib.rs</span>

```rust
#[test]
fn it_works() {
    assert!(true);
}

#[test]
#[ignore]
fn expensive_test() {
    // code that takes an hour to run
}
```

We add the `#[ignore]` line to the test we want to exclude, after `#[test]`.
Now if we run our tests, we'll see `it_works` runs, but `expensive_test` does
not:

```text
$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished debug [unoptimized + debuginfo] target(s) in 0.24 secs
     Running target/debug/deps/adder-ce99bcc2479f4607

running 2 tests
test expensive_test ... ignored
test it_works ... ok

test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
```

`expensive_test` is listed as `ignored`. If we want to run only the ignored
tests, we can ask for them to be run with `cargo test -- --ignored`:
```text
$ cargo test -- --ignored
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running target/debug/deps/adder-ce99bcc2479f4607

running 1 test
test expensive_test ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
```

By controlling which tests run, you can make sure your `cargo test` results
will be fast. When you're at a point that it makes sense to check the results
of the `ignored` tests and you have time to wait for the results, you can
choose to run `cargo test -- --ignored` instead.