<!-- DO NOT EDIT THIS FILE.

This file is periodically generated from the content in the `/src/`
directory, so all fixes need to be made in `/src/`.
-->

[TOC]

# Writing Automated Tests

In his 1972 essay “The Humble Programmer,” Edsger W. Dijkstra said that
“Program testing can be a very effective way to show the presence of bugs, but
it is hopelessly inadequate for showing their absence.” That doesn’t mean we
shouldn’t try to test as much as we can!

Correctness in our programs is the extent to which our code does what we intend
it to do. Rust is designed with a high degree of concern about the correctness
of programs, but correctness is complex and not easy to prove. Rust’s type
system shoulders a huge part of this burden, but the type system cannot catch
every kind of incorrectness. As such, Rust includes support for writing
automated software tests within the language.

As an example, say we write a function called `add_two` that adds 2 to whatever
number is passed to it. This function’s signature accepts an integer as a
parameter and returns an integer as a result. When we implement and compile
that function, Rust does all the type checking and borrow checking that you’ve
learned so far to ensure that, for instance, we aren’t passing a `String` value
or an invalid reference to this function. But Rust *can’t* check that this
function will do precisely what we intend, which is return the parameter plus 2
rather than, say, the parameter plus 10 or the parameter minus 50! That’s where
tests come in.

We can write tests that assert, for example, that when we pass `3` to the
`add_two` function, the returned value is `5`. We can run these tests whenever
we make changes to our code to make sure any existing correct behavior has not
changed.

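To preview what’s ahead, here’s a minimal sketch of what such a test could
look like; every piece of it, from the `#[cfg(test)]` module to the
`assert_eq!` macro, is explained over the course of this chapter:

```
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn three_plus_two_is_five() {
        // Passing `3` should return `5`; if it doesn't, the test fails.
        assert_eq!(5, add_two(3));
    }
}
```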

Testing is a complex skill: although we can’t cover every detail about how to
write good tests in one chapter, we’ll discuss the mechanics of Rust’s testing
facilities. We’ll talk about the annotations and macros available to you when
writing your tests, the default behavior and options provided for running your
tests, and how to organize tests into unit tests and integration tests.

## How to Write Tests

Tests are Rust functions that verify that the non-test code is functioning in
the expected manner. The bodies of test functions typically perform these three
actions:

1. Set up any needed data or state.
2. Run the code you want to test.
3. Assert the results are what you expect.

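Here’s a minimal sketch of those three actions in a single test function (the
`#[test]` attribute that marks a function as a test is covered next):

```
#[test]
fn exercises_the_three_actions() {
    // 1. Set up any needed data or state.
    let mut numbers = vec![1, 2, 3];

    // 2. Run the code you want to test.
    numbers.push(4);

    // 3. Assert the results are what you expect.
    assert_eq!(numbers.len(), 4);
}
```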

Let’s look at the features Rust provides specifically for writing tests that
take these actions, which include the `test` attribute, a few macros, and the
`should_panic` attribute.

### The Anatomy of a Test Function

At its simplest, a test in Rust is a function that’s annotated with the `test`
attribute. Attributes are metadata about pieces of Rust code; one example is
the `derive` attribute we used with structs in Chapter 5. To change a function
into a test function, add `#[test]` on the line before `fn`. When you run your
tests with the `cargo test` command, Rust builds a test runner binary that runs
the functions annotated with the `test` attribute and reports on whether each
test function passes or fails.

When we make a new library project with Cargo, a test module with a test
function in it is automatically generated for us. This module helps you start
writing your tests so you don’t have to look up the exact structure and syntax
of test functions every time you start a new project. You can add as many
additional test functions and as many test modules as you want!

We’ll explore some aspects of how tests work by experimenting with the template
test generated for us without actually testing any code. Then we’ll write some
real-world tests that call some code that we’ve written and assert that its
behavior is correct.

Let’s create a new library project called `adder`:

```
$ cargo new adder --lib
     Created library `adder` project
$ cd adder
```

The contents of the *src/lib.rs* file in your `adder` library should look like
Listing 11-1.

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    [1] #[test]
    fn it_works() {
        [2] assert_eq!(2 + 2, 4);
    }
}
```

Listing 11-1: The test module and function generated automatically by `cargo
new`

For now, let’s ignore the top two lines and focus on the function to see how it
works. Note the `#[test]` annotation [1]: this attribute indicates this is a
test function, so the test runner knows to treat this function as a test. We
could also have non-test functions in the `tests` module to help set up common
scenarios or perform common operations, so we need to indicate which functions
are tests by using the `#[test]` attribute.

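For instance, a plain helper function, one without the `#[test]` attribute,
can live alongside the tests; a small sketch:

```
#[cfg(test)]
mod tests {
    // No `#[test]` attribute here, so the runner never treats this
    // helper as a test; it's only called by the tests below.
    fn setup_values() -> (i32, i32) {
        (2, 2)
    }

    #[test]
    fn it_works() {
        let (a, b) = setup_values();
        assert_eq!(a + b, 4);
    }
}
```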

The function body uses the `assert_eq!` macro [2] to assert that 2 + 2 equals
4. This assertion serves as an example of the format for a typical test. Let’s
run it to see that this test passes.

The `cargo test` command runs all tests in our project, as shown in Listing
11-2.

```
$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 0.57s
     Running unittests (target/debug/deps/adder-92948b65e88960b4)

[1] running 1 test
[2] test tests::it_works ... ok

[3] test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

[4]    Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

Listing 11-2: The output from running the automatically generated test

Cargo compiled and ran the test. After the `Compiling`, `Finished`, and
`Running` lines is the line `running 1 test` [1]. The next line shows the name
of the generated test function, called `it_works`, and the result of running
that test, `ok` [2]. The overall summary of running the tests appears next. The
text `test result: ok.` [3] means that all the tests passed, and the portion
that reads `1 passed; 0 failed` totals the number of tests that passed or
failed.

Because we don’t have any tests we’ve marked as ignored, the summary shows `0
ignored`. We also haven’t filtered the tests being run, so the end of the
summary shows `0 filtered out`. We’ll talk about ignoring and filtering out
tests in the next section, “Controlling How Tests Are Run.”

The `0 measured` statistic is for benchmark tests that measure performance.
Benchmark tests are, as of this writing, only available in nightly Rust. See
the documentation about benchmark tests at
*https://doc.rust-lang.org/unstable-book/library-features/test.html* to learn
more.

The next part of the test output, which starts with `Doc-tests adder` [4], is
for the results of any documentation tests. We don’t have any documentation
tests yet, but Rust can compile any code examples that appear in our API
documentation. This feature helps us keep our docs and our code in sync! We’ll
discuss how to write documentation tests in the “Documentation Comments as
Tests” section of Chapter 14. For now, we’ll ignore the `Doc-tests` output.

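As a peek ahead, a documentation test is just a code example inside a doc
comment; a sketch, assuming the `add_two` function described earlier lives in
our `adder` library:

````
/// Adds two to the given number.
///
/// # Examples
///
/// ```
/// assert_eq!(5, adder::add_two(3));
/// ```
pub fn add_two(a: i32) -> i32 {
    a + 2
}
````

Running `cargo test` would compile and run the assertion inside the doc
comment, and the result would show up under `Doc-tests adder`.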

Let’s change the name of our test to see how that changes the test output.
Change the `it_works` function to a different name, such as `exploration`, like
so:

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    #[test]
    fn exploration() {
        assert_eq!(2 + 2, 4);
    }
}
```

Then run `cargo test` again. The output now shows `exploration` instead of
`it_works`:

```
running 1 test
test tests::exploration ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out;
```

Let’s add another test, but this time we’ll make a test that fails! Tests fail
when something in the test function panics. Each test is run in a new thread,
and when the main thread sees that a test thread has died, the test is marked
as failed. We talked about the simplest way to cause a panic in Chapter 9,
which is to call the `panic!` macro. Enter the new test, `another`, so your
*src/lib.rs* file looks like Listing 11-3.

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    #[test]
    fn exploration() {
        assert_eq!(2 + 2, 4);
    }

    #[test]
    fn another() {
        panic!("Make this test fail");
    }
}
```

Listing 11-3: Adding a second test that will fail because we call the `panic!`
macro

Run the tests again using `cargo test`. The output should look like Listing
11-4, which shows that our `exploration` test passed and `another` failed.

```
running 2 tests
test tests::exploration ... ok
[1] test tests::another ... FAILED

[2] failures:

---- tests::another stdout ----
thread 'main' panicked at 'Make this test fail', src/lib.rs:10:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

[3] failures:
    tests::another

[4] test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass '--lib'
```

Listing 11-4: Test results when one test passes and one test fails

Instead of `ok`, the line `test tests::another` shows `FAILED` [1]. Two new
sections appear between the individual results and the summary: the first
section [2] displays the detailed reason for each test failure. In this case,
`another` failed because it `panicked at 'Make this test fail'`, which happened
on line 10 in the *src/lib.rs* file. The next section [3] lists just the names
of all the failing tests, which is useful when there are lots of tests and lots
of detailed failing test output. We can use the name of a failing test to run
just that test to more easily debug it; we’ll talk more about ways to run tests
in the “Controlling How Tests Are Run” section.

The summary line displays at the end [4]: overall, our test result is `FAILED`.
We had one test pass and one test fail.

Now that you’ve seen what the test results look like in different scenarios,
let’s look at some macros other than `panic!` that are useful in tests.

### Checking Results with the `assert!` Macro

The `assert!` macro, provided by the standard library, is useful when you want
to ensure that some condition in a test evaluates to `true`. We give the
`assert!` macro an argument that evaluates to a Boolean. If the value is
`true`, `assert!` does nothing and the test passes. If the value is `false`,
the `assert!` macro calls the `panic!` macro, which causes the test to fail.
Using the `assert!` macro helps us check that our code is functioning in the
way we intend.

In Chapter 5, Listing 5-15, we used a `Rectangle` struct and a `can_hold`
method, which are repeated here in Listing 11-5. Let’s put this code in the
*src/lib.rs* file and write some tests for it using the `assert!` macro.

Filename: src/lib.rs

```
#[derive(Debug)]
struct Rectangle {
    width: u32,
    height: u32,
}

impl Rectangle {
    fn can_hold(&self, other: &Rectangle) -> bool {
        self.width > other.width && self.height > other.height
    }
}
```

Listing 11-5: Using the `Rectangle` struct and its `can_hold` method from
Chapter 5

The `can_hold` method returns a Boolean, which means it’s a perfect use case
for the `assert!` macro. In Listing 11-6, we write a test that exercises the
`can_hold` method by creating a `Rectangle` instance that has a width of 8 and
a height of 7 and asserting that it can hold another `Rectangle` instance that
has a width of 5 and a height of 1.

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    [1] use super::*;

    #[test]
    [2] fn larger_can_hold_smaller() {
        [3] let larger = Rectangle {
            width: 8,
            height: 7,
        };
        let smaller = Rectangle {
            width: 5,
            height: 1,
        };

        [4] assert!(larger.can_hold(&smaller));
    }
}
```

Listing 11-6: A test for `can_hold` that checks whether a larger rectangle can
indeed hold a smaller rectangle

Note that we’ve added a new line inside the `tests` module: `use super::*;`
[1]. The `tests` module is a regular module that follows the usual visibility
rules we covered in Chapter 7 in the “Paths for Referring to an Item in the
Module Tree” section. Because the `tests` module is an inner module, we need to
bring the code under test in the outer module into the scope of the inner
module. We use a glob here so anything we define in the outer module is
available to this `tests` module.

We’ve named our test `larger_can_hold_smaller` [2], and we’ve created the two
`Rectangle` instances that we need [3]. Then we called the `assert!` macro and
passed it the result of calling `larger.can_hold(&smaller)` [4]. This
expression is supposed to return `true`, so our test should pass. Let’s find
out!

```
running 1 test
test tests::larger_can_hold_smaller ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

It does pass! Let’s add another test, this time asserting that a smaller
rectangle cannot hold a larger rectangle:

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn larger_can_hold_smaller() {
        // --snip--
    }

    #[test]
    fn smaller_cannot_hold_larger() {
        let larger = Rectangle {
            width: 8,
            height: 7,
        };
        let smaller = Rectangle {
            width: 5,
            height: 1,
        };

        assert!(!smaller.can_hold(&larger));
    }
}
```

Because the correct result of the `can_hold` function in this case is `false`,
we need to negate that result before we pass it to the `assert!` macro. As a
result, our test will pass if `can_hold` returns `false`:

```
running 2 tests
test tests::larger_can_hold_smaller ... ok
test tests::smaller_cannot_hold_larger ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

Two tests that pass! Now let’s see what happens to our test results when we
introduce a bug in our code. Let’s change the implementation of the `can_hold`
method by replacing the greater than sign with a less than sign when it
compares the widths:

```
// --snip--
impl Rectangle {
    fn can_hold(&self, other: &Rectangle) -> bool {
        self.width < other.width && self.height > other.height
    }
}
```

Running the tests now produces the following:

```
running 2 tests
test tests::smaller_cannot_hold_larger ... ok
test tests::larger_can_hold_smaller ... FAILED

failures:

---- tests::larger_can_hold_smaller stdout ----
thread 'main' panicked at 'assertion failed: larger.can_hold(&smaller)', src/lib.rs:28:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::larger_can_hold_smaller

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

Our tests caught the bug! Because `larger.width` is 8 and `smaller.width` is
5, the comparison of the widths in `can_hold` now returns `false`: 8 is not
less than 5.

### Testing Equality with the `assert_eq!` and `assert_ne!` Macros

A common way to test functionality is to compare the result of the code under
test to the value you expect the code to return to make sure they’re equal. You
could do this using the `assert!` macro and passing it an expression using the
`==` operator. However, this is such a common test that the standard library
provides a pair of macros—`assert_eq!` and `assert_ne!`—to perform this test
more conveniently. These macros compare two arguments for equality or
inequality, respectively. They’ll also print the two values if the assertion
fails, which makes it easier to see *why* the test failed; conversely, the
`assert!` macro only indicates that it got a `false` value for the `==`
expression, not the values that led to the `false` value.

In Listing 11-7, we write a function named `add_two` that adds `2` to its
parameter and returns the result. Then we test this function using the
`assert_eq!` macro.

Filename: src/lib.rs

```
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_adds_two() {
        assert_eq!(4, add_two(2));
    }
}
```

Listing 11-7: Testing the function `add_two` using the `assert_eq!` macro

Let’s check that it passes!

```
running 1 test
test tests::it_adds_two ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

The first argument we gave to the `assert_eq!` macro, `4`, is equal to the
result of calling `add_two(2)`. The line for this test is `test
tests::it_adds_two ... ok`, and the `ok` text indicates that our test passed!

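For comparison, here’s a sketch of the same check written with plain
`assert!`: it passes just the same, but a failure would only report that the
condition was `false`, not the two values:

```
#[test]
fn it_adds_two_with_assert() {
    // Same check as above; on failure, the output wouldn't show
    // what add_two(2) actually returned.
    assert!(add_two(2) == 4);
}
```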

Let’s introduce a bug into our code to see what it looks like when a test that
uses `assert_eq!` fails. Change the implementation of the `add_two` function to
instead add `3`:

```
pub fn add_two(a: i32) -> i32 {
    a + 3
}
```

Run the tests again:

```
running 1 test
test tests::it_adds_two ... FAILED

failures:

---- tests::it_adds_two stdout ----
[1] thread 'main' panicked at 'assertion failed: `(left == right)`
[2]   left: `4`,
[3]  right: `5`', src/lib.rs:11:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

failures:
    tests::it_adds_two

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

Our test caught the bug! The `it_adds_two` test failed, displaying the message
`` assertion failed: `(left == right)` `` [1] and showing that `left` was `4`
[2] and `right` was `5` [3]. This message is useful and helps us start
debugging: it means the `left` argument to `assert_eq!` was `4` but the `right`
argument, where we had `add_two(2)`, was `5`.

Note that in some languages and test frameworks, the parameters to the
functions that assert two values are equal are called `expected` and `actual`,
and the order in which we specify the arguments matters. However, in Rust,
they’re called `left` and `right`, and the order in which we specify the value
we expect and the value that the code under test produces doesn’t matter. We
could write the assertion in this test as `assert_eq!(add_two(2), 4)`, which
would result in a failure message that displays `` assertion failed: `(left ==
right)` `` and that `left` was `5` and `right` was `4`.

The `assert_ne!` macro will pass if the two values we give it are not equal and
fail if they’re equal. This macro is most useful for cases when we’re not sure
what a value *will* be, but we know what the value definitely *won’t* be if our
code is functioning as we intend. For example, if we’re testing a function that
is guaranteed to change its input in some way, but the way in which the input
is changed depends on the day of the week that we run our tests, the best thing
to assert might be that the output of the function is not equal to the input.

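A sketch of that idea, with a hypothetical `transform` function standing in
for whatever varying change the real code would make:

```
// Hypothetical: the exact output varies, but the function must never
// return its input unchanged.
pub fn transform(value: i32) -> i32 {
    value + 1 // stand-in for the real, varying transformation
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn transform_changes_its_input() {
        // We don't know what transform(3) will be, only that it isn't 3.
        assert_ne!(3, transform(3));
    }
}
```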

Under the surface, the `assert_eq!` and `assert_ne!` macros use the operators
`==` and `!=`, respectively. When the assertions fail, these macros print their
arguments using debug formatting, which means the values being compared must
implement the `PartialEq` and `Debug` traits. All the primitive types and most
of the standard library types implement these traits. For structs and enums
that you define, you’ll need to implement `PartialEq` to assert that values of
those types are equal or not equal. You’ll need to implement `Debug` to print
the values when the assertion fails. Because both traits are derivable traits,
as mentioned in Listing 5-12 in Chapter 5, this is usually as straightforward
as adding the `#[derive(PartialEq, Debug)]` annotation to your struct or enum
definition. See Appendix C, “Derivable Traits,” for more details about these
and other derivable traits.

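For instance, a minimal sketch with a custom `Point` struct; deriving both
traits is all it takes to use the struct with `assert_eq!`:

```
#[derive(PartialEq, Debug)]
struct Point {
    x: i32,
    y: i32,
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn points_compare_equal() {
        // Without PartialEq this wouldn't compile; without Debug a
        // failure couldn't print the two Point values.
        assert_eq!(Point { x: 1, y: 2 }, Point { x: 1, y: 2 });
    }
}
```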

### Adding Custom Failure Messages

You can also add a custom message to be printed with the failure message as
optional arguments to the `assert!`, `assert_eq!`, and `assert_ne!` macros. Any
arguments specified after the one required argument to `assert!` or the two
required arguments to `assert_eq!` and `assert_ne!` are passed along to the
`format!` macro (discussed in Chapter 8 in the “Concatenation with the `+`
Operator or the `format!` Macro” section), so you can pass a format string that
contains `{}` placeholders and values to go in those placeholders. Custom
messages are useful to document what an assertion means; when a test fails,
you’ll have a better idea of what the problem is with the code.

For example, let’s say we have a function that greets people by name and we
want to test that the name we pass into the function appears in the output:

Filename: src/lib.rs

```
pub fn greeting(name: &str) -> String {
    format!("Hello {}!", name)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn greeting_contains_name() {
        let result = greeting("Carol");
        assert!(result.contains("Carol"));
    }
}
```

The requirements for this program haven’t been agreed upon yet, and we’re
pretty sure the `Hello` text at the beginning of the greeting will change. We
decided we don’t want to have to update the test when the requirements change,
so instead of checking for exact equality to the value returned from the
`greeting` function, we’ll just assert that the output contains the text of the
input parameter.

Let’s introduce a bug into this code by changing `greeting` to not include
`name` to see what this test failure looks like:

```
pub fn greeting(name: &str) -> String {
    String::from("Hello!")
}
```

Running this test produces the following:

```
running 1 test
test tests::greeting_contains_name ... FAILED

failures:

---- tests::greeting_contains_name stdout ----
thread 'main' panicked at 'assertion failed: result.contains(\"Carol\")', src/lib.rs:12:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::greeting_contains_name
```

This result just indicates that the assertion failed and which line the
assertion is on. A more useful failure message in this case would print the
value we got from the `greeting` function. Let’s change the test function,
giving it a custom failure message made from a format string with a placeholder
filled in with the actual value we got from the `greeting` function:

```
#[test]
fn greeting_contains_name() {
    let result = greeting("Carol");
    assert!(
        result.contains("Carol"),
        "Greeting did not contain name, value was `{}`",
        result
    );
}
```

Now when we run the test, we’ll get a more informative error message:

```
---- tests::greeting_contains_name stdout ----
thread 'main' panicked at 'Greeting did not contain name, value was `Hello!`', src/lib.rs:12:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

We can see the value we actually got in the test output, which would help us
debug what happened instead of what we were expecting to happen.

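The same trailing format arguments work with `assert_eq!` and `assert_ne!`; a
sketch using the `add_two` function from Listing 11-7:

```
#[test]
fn it_adds_two() {
    let result = add_two(2);
    // Everything after the two required arguments is passed to format!.
    assert_eq!(4, result, "add_two(2) should be 4, value was `{}`", result);
}
```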

### Checking for Panics with `should_panic`

In addition to checking that our code returns the correct values we expect,
it’s also important to check that our code handles error conditions as we
expect. For example, consider the `Guess` type that we created in Chapter 9,
Listing 9-13. Other code that uses `Guess` depends on the guarantee that `Guess`
instances will contain only values between 1 and 100. We can write a test that
ensures that attempting to create a `Guess` instance with a value outside that
range panics.

We do this by adding another attribute, `should_panic`, to our test function.
This attribute makes a test pass if the code inside the function panics; the
test will fail if the code inside the function doesn’t panic.

Listing 11-8 shows a test that checks that the error conditions of `Guess::new`
happen when we expect them to.

Filename: src/lib.rs

```
pub struct Guess {
    value: i32,
}

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 || value > 100 {
            panic!("Guess value must be between 1 and 100, got {}.", value);
        }

        Guess { value }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    #[should_panic]
    fn greater_than_100() {
        Guess::new(200);
    }
}
```

Listing 11-8: Testing that a condition will cause a `panic!`

We place the `#[should_panic]` attribute after the `#[test]` attribute and
before the test function it applies to. Let’s look at the result when this test
passes:

```
running 1 test
test tests::greater_than_100 - should panic ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

Looks good! Now let’s introduce a bug in our code by removing the condition
that the `new` function will panic if the value is greater than 100:

```
// --snip--
impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 {
            panic!("Guess value must be between 1 and 100, got {}.", value);
        }

        Guess { value }
    }
}
```

When we run the test in Listing 11-8, it will fail:

```
running 1 test
test tests::greater_than_100 - should panic ... FAILED

failures:

---- tests::greater_than_100 stdout ----
note: test did not panic as expected

failures:
    tests::greater_than_100

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

We don’t get a very helpful message in this case, but when we look at the test
function, we see that it’s annotated with `#[should_panic]`. The failure we got
means that the code in the test function did not cause a panic.

Tests that use `should_panic` can be imprecise because they only indicate that
the code has caused some panic. A `should_panic` test would pass even if the
test panics for a different reason from the one we were expecting to happen. To
make `should_panic` tests more precise, we can add an optional `expected`
parameter to the `should_panic` attribute. The test harness will make sure that
the failure message contains the provided text. For example, consider the
modified code for `Guess` in Listing 11-9 where the `new` function panics with
different messages depending on whether the value is too small or too large.

Filename: src/lib.rs

```
// --snip--

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 {
            panic!(
                "Guess value must be greater than or equal to 1, got {}.",
                value
            );
        } else if value > 100 {
            panic!(
                "Guess value must be less than or equal to 100, got {}.",
                value
            );
        }

        Guess { value }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    #[should_panic(expected = "Guess value must be less than or equal to 100")]
    fn greater_than_100() {
        Guess::new(200);
    }
}
```

Listing 11-9: Testing that a condition will cause a `panic!` with a particular
panic message

This test will pass because the value we put in the `should_panic` attribute’s
`expected` parameter is a substring of the message that the `Guess::new`
function panics with. We could have specified the entire panic message that we
expect, which in this case would be `Guess value must be less than or equal to
100, got 200.` What you choose to specify in the expected parameter for
`should_panic` depends on how much of the panic message is unique or dynamic
and how precise you want your test to be. In this case, a substring of the
panic message is enough to ensure that the code in the test function executes
the `else if value > 100` case.

To see what happens when a `should_panic` test with an `expected` message
fails, let’s again introduce a bug into our code by swapping the bodies of the
`if value < 1` and the `else if value > 100` blocks:

```
if value < 1 {
    panic!("Guess value must be less than or equal to 100, got {}.", value);
} else if value > 100 {
    panic!("Guess value must be greater than or equal to 1, got {}.", value);
}
```

This time when we run the `should_panic` test, it will fail:

```
running 1 test
test tests::greater_than_100 - should panic ... FAILED

failures:

---- tests::greater_than_100 stdout ----
thread 'main' panicked at 'Guess value must be greater than or equal to 1, got 200.', src/lib.rs:13:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
note: panic did not contain expected string
      panic message: `"Guess value must be greater than or equal to 1, got 200."`,
 expected substring: `"Guess value must be less than or equal to 100"`

failures:
    tests::greater_than_100

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

The failure message indicates that this test did indeed panic as we expected,
but the panic message did not include the expected string `'Guess value must be
less than or equal to 100'`. The panic message that we did get in this case was
`Guess value must be greater than or equal to 1, got 200.` Now we can start
figuring out where our bug is!

### Using `Result<T, E>` in Tests

So far, we’ve written tests that panic when they fail. We can also write tests
that use `Result<T, E>`! Here’s the test from Listing 11-1, rewritten to use
`Result<T, E>` and return an `Err` instead of panicking:

```
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() -> Result<(), String> {
        if 2 + 2 == 4 {
            Ok(())
        } else {
            Err(String::from("two plus two does not equal four"))
        }
    }
}
```

The `it_works` function now has a return type, `Result<(), String>`. In the
body of the function, rather than calling the `assert_eq!` macro, we return
`Ok(())` when the test passes and an `Err` with a `String` inside when the test
fails.

Writing tests so they return a `Result<T, E>` enables you to use the question
mark operator in the body of tests, which can be a convenient way to write
tests that should fail if any operation within them returns an `Err` variant.

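For example, here’s a sketch of a test that uses the question mark operator on
a fallible `parse` call; if parsing returned an `Err`, the `?` operator would
propagate it and the test would fail:

```
use std::num::ParseIntError;

#[test]
fn parses_a_number() -> Result<(), ParseIntError> {
    // `?` returns the Err early, which the test harness reports as a failure.
    let n: i32 = "42".parse()?;
    assert_eq!(42, n);
    Ok(())
}
```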

You can’t use the `#[should_panic]` annotation on tests that use `Result<T,
E>`. To assert that an operation returns an `Err` variant, *don’t* use the
question mark operator on the `Result<T, E>` value. Instead, use
`assert!(value.is_err())`.

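A sketch of that pattern, reusing the `parse` example:

```
#[test]
fn parsing_garbage_is_an_error() {
    // No `?` here: we *want* the Err, and assert on it directly.
    let result = "not a number".parse::<i32>();
    assert!(result.is_err());
}
```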

Now that you know several ways to write tests, let’s look at what is happening
when we run our tests and explore the different options we can use with `cargo
test`.

## Controlling How Tests Are Run

Just as `cargo run` compiles your code and then runs the resulting binary,
`cargo test` compiles your code in test mode and runs the resulting test
binary. You can specify command line options to change the default behavior of
`cargo test`. For example, the default behavior of the binary produced by
`cargo test` is to run all the tests in parallel and capture output generated
during test runs, preventing the output from being displayed and making it
easier to read the output related to the test results.

Some command line options go to `cargo test`, and some go to the resulting test
binary. To separate these two types of arguments, you list the arguments that
go to `cargo test` followed by the separator `--` and then the ones that go to
the test binary. Running `cargo test --help` displays the options you can use
with `cargo test`, and running `cargo test -- --help` displays the options you
can use after the separator `--`.

### Running Tests in Parallel or Consecutively

When you run multiple tests, by default they run in parallel using threads.
This means the tests will finish running faster so you can get feedback quicker
on whether or not your code is working. Because the tests are running at the
same time, make sure your tests don’t depend on each other or on any shared
state, including a shared environment, such as the current working directory or
environment variables.

For example, say each of your tests runs some code that creates a file on disk
named *test-output.txt* and writes some data to that file. Then each test reads
the data in that file and asserts that the file contains a particular value,
which is different in each test. Because the tests run at the same time, one
test might overwrite the file between when another test writes and reads the
file. The second test will then fail, not because the code is incorrect but
because the tests have interfered with each other while running in parallel.
One solution is to make sure each test writes to a different file; another
solution is to run the tests one at a time.

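Here’s a sketch of the first solution, naming the file after the test so
parallel tests can’t clobber one another’s files:

```
use std::fs;

#[test]
fn writes_and_reads_its_own_file() {
    // Unique per test, so no other test touches this file.
    let path = "test-output-writes_and_reads_its_own_file.txt";
    fs::write(path, "expected value").unwrap();

    let contents = fs::read_to_string(path).unwrap();
    assert_eq!("expected value", contents);

    fs::remove_file(path).unwrap();
}
```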

If you don’t want to run the tests in parallel or if you want more fine-grained
control over the number of threads used, you can send the `--test-threads` flag
and the number of threads you want to use to the test binary. Take a look at
the following example:

```
$ cargo test -- --test-threads=1
```

We set the number of test threads to `1`, telling the program not to use any
parallelism. Running the tests using one thread will take longer than running
them in parallel, but the tests won’t interfere with each other if they share
state.

### Showing Function Output

By default, if a test passes, Rust’s test library captures anything printed to
standard output. For example, if we call `println!` in a test and the test
passes, we won’t see the `println!` output in the terminal; we’ll see only the
line that indicates the test passed. If a test fails, we’ll see whatever was
printed to standard output with the rest of the failure message.

As an example, Listing 11-10 has a silly function that prints the value of its
parameter and returns 10, as well as a test that passes and a test that fails.

Filename: src/lib.rs

```
fn prints_and_returns_10(a: i32) -> i32 {
    println!("I got the value {}", a);
    10
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn this_test_will_pass() {
        let value = prints_and_returns_10(4);
        assert_eq!(10, value);
    }

    #[test]
    fn this_test_will_fail() {
        let value = prints_and_returns_10(8);
        assert_eq!(5, value);
    }
}
```

Listing 11-10: Tests for a function that calls `println!`

When we run these tests with `cargo test`, we’ll see the following output:

```
running 2 tests
test tests::this_test_will_pass ... ok
test tests::this_test_will_fail ... FAILED

failures:

---- tests::this_test_will_fail stdout ----
[1] I got the value 8
thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `5`,
 right: `10`', src/lib.rs:19:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

Note that nowhere in this output do we see `I got the value 4`, which is what
is printed when the test that passes runs. That output has been captured. The
output from the test that failed, `I got the value 8` [1], appears in the
section of the test summary output, which also shows the cause of the test
failure.

If we want to see printed values for passing tests as well, we can tell Rust
to also show the output of successful tests at the end with `--show-output`.

```
$ cargo test -- --show-output
```

When we run the tests in Listing 11-10 again with the `--show-output` flag, we
see the following output:

```
running 2 tests
test tests::this_test_will_pass ... ok
test tests::this_test_will_fail ... FAILED

successes:

---- tests::this_test_will_pass stdout ----
I got the value 4


successes:
    tests::this_test_will_pass

failures:

---- tests::this_test_will_fail stdout ----
I got the value 8
thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `5`,
 right: `10`', src/lib.rs:19:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

### Running a Subset of Tests by Name

Sometimes, running a full test suite can take a long time. If you’re working on
code in a particular area, you might want to run only the tests pertaining to
that code. You can choose which tests to run by passing `cargo test` the name
or names of the test(s) you want to run as an argument.

To demonstrate how to run a subset of tests, we’ll create three tests for our
`add_two` function, as shown in Listing 11-11, and choose which ones to run.

Filename: src/lib.rs

```
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn add_two_and_two() {
        assert_eq!(4, add_two(2));
    }

    #[test]
    fn add_three_and_two() {
        assert_eq!(5, add_two(3));
    }

    #[test]
    fn one_hundred() {
        assert_eq!(102, add_two(100));
    }
}
```

Listing 11-11: Three tests with three different names

If we run the tests without passing any arguments, as we saw earlier, all the
tests will run in parallel:

```
running 3 tests
test tests::add_three_and_two ... ok
test tests::add_two_and_two ... ok
test tests::one_hundred ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

#### Running Single Tests

We can pass the name of any test function to `cargo test` to run only that test:

```
$ cargo test one_hundred
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 0.69s
     Running unittests (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test tests::one_hundred ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.00s
```

Only the test with the name `one_hundred` ran; the other two tests didn’t match
that name. The test output lets us know we had more tests than what this
command ran by displaying `2 filtered out` at the end of the summary line.

We can’t specify the names of multiple tests in this way; only the first value
given to `cargo test` will be used. But there is a way to run multiple tests.

#### Filtering to Run Multiple Tests

We can specify part of a test name, and any test whose name matches that value
will be run. For example, because two of our tests’ names contain `add`, we can
run those two by running `cargo test add`:

```
$ cargo test add
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 0.61s
     Running unittests (target/debug/deps/adder-92948b65e88960b4)

running 2 tests
test tests::add_three_and_two ... ok
test tests::add_two_and_two ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s
```

This command ran all tests with `add` in the name and filtered out the test
named `one_hundred`. Also note that the module in which a test appears becomes
part of the test’s name, so we can run all the tests in a module by filtering
on the module’s name.

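For example, because all three tests in Listing 11-11 live in the `tests`
module, their full names start with `tests::`, so a command like the following
would run all of them; the filter is a plain substring match on the full test
name:

```
$ cargo test tests::
```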

### Ignoring Some Tests Unless Specifically Requested

Sometimes a few specific tests can be very time-consuming to execute, so you
might want to exclude them during most runs of `cargo test`. Rather than
listing as arguments all tests you do want to run, you can instead annotate the
time-consuming tests using the `ignore` attribute to exclude them, as shown
here:

Filename: src/lib.rs

```
#[test]
fn it_works() {
    assert_eq!(2 + 2, 4);
}

#[test]
#[ignore]
fn expensive_test() {
    // code that takes an hour to run
}
```

After `#[test]` we add the `#[ignore]` line to the test we want to exclude. Now
when we run our tests, `it_works` runs, but `expensive_test` doesn’t:

```
$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 0.60s
     Running unittests (target/debug/deps/adder-92948b65e88960b4)

running 2 tests
test expensive_test ... ignored
test it_works ... ok

test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

The `expensive_test` function is listed as `ignored`. If we want to run only
the ignored tests, we can use `cargo test -- --ignored`:

```
$ cargo test -- --ignored
    Finished test [unoptimized + debuginfo] target(s) in 0.61s
     Running unittests (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test expensive_test ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s
```

By controlling which tests run, you can make sure your `cargo test` results
will be fast. When you’re at a point where it makes sense to check the results
of the `ignored` tests and you have time to wait for the results, you can run
`cargo test -- --ignored` instead. If you want to run all tests whether they’re
ignored or not, you can run `cargo test -- --include-ignored`.

## Test Organization

As mentioned at the start of the chapter, testing is a complex discipline, and
different people use different terminology and organization. The Rust community
thinks about tests in terms of two main categories: *unit tests* and
*integration tests*. Unit tests are small and more focused, testing one module
in isolation at a time, and can test private interfaces. Integration tests are
entirely external to your library and use your code in the same way any other
external code would, using only the public interface and potentially exercising
multiple modules per test.

Writing both kinds of tests is important to ensure that the pieces of your
library are doing what you expect them to, separately and together.

### Unit Tests

The purpose of unit tests is to test each unit of code in isolation from the
rest of the code to quickly pinpoint where code is and isn’t working as
expected. You’ll put unit tests in the *src* directory in each file with the
code that they’re testing. The convention is to create a module named `tests`
in each file to contain the test functions and to annotate the module with
`cfg(test)`.

#### The Tests Module and `#[cfg(test)]`

The `#[cfg(test)]` annotation on the tests module tells Rust to compile and run
the test code only when you run `cargo test`, not when you run `cargo build`.
This saves compile time when you only want to build the library and saves space
in the resulting compiled artifact because the tests are not included. You’ll
see that because integration tests go in a different directory, they don’t need
the `#[cfg(test)]` annotation. However, because unit tests go in the same files
as the code, you’ll use `#[cfg(test)]` to specify that they shouldn’t be
included in the compiled result.

Recall that when we generated the new `adder` project in the first section of
this chapter, Cargo generated this code for us:

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        assert_eq!(2 + 2, 4);
    }
}
```

This code is the automatically generated test module. The attribute `cfg`
stands for *configuration* and tells Rust that the following item should only
be included given a certain configuration option. In this case, the
configuration option is `test`, which is provided by Rust for compiling and
running tests. By using the `cfg` attribute, Cargo compiles our test code only
if we actively run the tests with `cargo test`. This includes any helper
functions that might be within this module, in addition to the functions
annotated with `#[test]`.

#### Testing Private Functions

There’s debate within the testing community about whether or not private
functions should be tested directly, and other languages make it difficult or
impossible to test private functions. Regardless of which testing ideology you
adhere to, Rust’s privacy rules do allow you to test private functions.
Consider the code in Listing 11-12 with the private function `internal_adder`.

Filename: src/lib.rs

```
pub fn add_two(a: i32) -> i32 {
    internal_adder(a, 2)
}

fn internal_adder(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn internal() {
        assert_eq!(4, internal_adder(2, 2));
    }
}
```

Listing 11-12: Testing a private function

Note that the `internal_adder` function is not marked as `pub`. Tests are just
Rust code, and the `tests` module is just another module. As we discussed in
the “Paths for Referring to an Item in the Module Tree” section, items in child
modules can use the items in their ancestor modules. In this test, we bring all
of the `tests` module’s parent’s items into scope with `use super::*`, and then
the test can call `internal_adder`. If you don’t think private functions should
be tested, there’s nothing in Rust that will compel you to do so.

### Integration Tests

In Rust, integration tests are entirely external to your library. They use your
library in the same way any other code would, which means they can only call
functions that are part of your library’s public API. Their purpose is to test
whether many parts of your library work together correctly. Units of code that
work correctly on their own could have problems when integrated, so test
coverage of the integrated code is important as well. To create integration
tests, you first need a *tests* directory.

#### The *tests* Directory

We create a *tests* directory at the top level of our project directory, next
to *src*. Cargo knows to look for integration test files in this directory. We
can then make as many test files as we want to in this directory, and Cargo
will compile each of the files as an individual crate.

Let’s create an integration test. With the code in Listing 11-12 still in the
*src/lib.rs* file, make a *tests* directory, create a new file named
*tests/integration_test.rs*, and enter the code in Listing 11-13.

Filename: tests/integration_test.rs

```
use adder;

#[test]
fn it_adds_two() {
    assert_eq!(4, adder::add_two(2));
}
```

Listing 11-13: An integration test of a function in the `adder` crate

We’ve added `use adder` at the top of the code, which we didn’t need in the
unit tests. The reason is that each file in the *tests* directory is a separate
crate, so we need to bring our library into each test crate’s scope.

We don’t need to annotate any code in *tests/integration_test.rs* with
`#[cfg(test)]`. Cargo treats the *tests* directory specially and compiles files
in this directory only when we run `cargo test`. Run `cargo test` now:

```
$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 1.31s
     Running unittests (target/debug/deps/adder-1082c4b063a8fbe6)

[1] running 1 test
test tests::internal ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

[2]     Running tests/integration_test.rs (target/debug/deps/integration_test-1082c4b063a8fbe6)

running 1 test
[3] test it_adds_two ... ok

[4] test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

The three sections of output include the unit tests, the integration test, and
the doc tests. The first section for the unit tests [1] is the same as we’ve
been seeing: one line for each unit test (one named `internal` that we added in
Listing 11-12) and then a summary line for the unit tests.

The integration tests section starts with the line `Running
tests/integration_test.rs` [2]. Next, there is a line for each test function in
that integration test [3] and a summary line for the results of the integration
test [4] just before the `Doc-tests adder` section starts.

Similarly to how adding more unit test functions adds more result lines to the
unit tests section, adding more test functions to the integration test file
adds more result lines to this integration test file’s section. Each
integration test file has its own section, so if we add more files in the
*tests* directory, there will be more integration test sections.

We can still run a particular integration test function by specifying the test
function’s name as an argument to `cargo test`. To run all the tests in a
particular integration test file, use the `--test` argument of `cargo test`
followed by the name of the file:

```
$ cargo test --test integration_test
    Finished test [unoptimized + debuginfo] target(s) in 0.64s
     Running tests/integration_test.rs (target/debug/deps/integration_test-82e7799c1bc62298)

running 1 test
test it_adds_two ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

This command runs only the tests in the *tests/integration_test.rs* file.

#### Submodules in Integration Tests

As you add more integration tests, you might want to make more than one file in
the *tests* directory to help organize them; for example, you can group the
test functions by the functionality they’re testing. As mentioned earlier, each
file in the *tests* directory is compiled as its own separate crate.

Treating each integration test file as its own crate is useful to create
separate scopes that are more like the way end users will be using your crate.
However, this means files in the *tests* directory don’t share the same
behavior as files in *src* do, as you learned in Chapter 7 regarding how to
separate code into modules and files.

The different behavior of files in the *tests* directory is most noticeable
when you have a set of helper functions that would be useful in multiple
integration test files and you try to follow the steps in the “Separating
Modules into Different Files” section of Chapter 7 to extract them into a
common module. For example, if we create *tests/common.rs* and place a function
named `setup` in it, we can add some code to `setup` that we want to call from
multiple test functions in multiple test files:

Filename: tests/common.rs

```
pub fn setup() {
    // setup code specific to your library's tests would go here
}
```

When we run the tests again, we’ll see a new section in the test output for the
*common.rs* file, even though this file doesn’t contain any test functions, nor
did we call the `setup` function from anywhere:

```
running 1 test
test tests::internal ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running tests/common.rs (target/debug/deps/common-92948b65e88960b4)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running tests/integration_test.rs (target/debug/deps/integration_test-92948b65e88960b4)

running 1 test
test it_adds_two ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

Having `common` appear in the test results with `running 0 tests` displayed for
it is not what we wanted. We just wanted to share some code with the other
integration test files.

To avoid having `common` appear in the test output, instead of creating
*tests/common.rs*, we’ll create *tests/common/mod.rs*. This is an alternate
naming convention that Rust also understands. Naming the file this way tells
Rust not to treat the `common` module as an integration test file. When we move
the `setup` function code into *tests/common/mod.rs* and delete the
*tests/common.rs* file, the section in the test output will no longer appear.
Files in subdirectories of the *tests* directory don’t get compiled as separate
crates or have sections in the test output.

After we’ve created *tests/common/mod.rs*, we can use it from any of the
integration test files as a module. Here’s an example of calling the `setup`
function from the `it_adds_two` test in *tests/integration_test.rs*:

Filename: tests/integration_test.rs

```
use adder;

mod common;

#[test]
fn it_adds_two() {
    common::setup();
    assert_eq!(4, adder::add_two(2));
}
```

Note that the `mod common;` declaration is the same as the module declaration
we demonstrated in Listing 7-21. Then in the test function, we can call the
`common::setup()` function.

#### Integration Tests for Binary Crates

If our project is a binary crate that only contains a *src/main.rs* file and
doesn’t have a *src/lib.rs* file, we can’t create integration tests in the
*tests* directory and bring functions defined in the *src/main.rs* file into
scope with a `use` statement. Only library crates expose functions that other
crates can use; binary crates are meant to be run on their own.

This is one of the reasons Rust projects that provide a binary have a
straightforward *src/main.rs* file that calls logic that lives in the
*src/lib.rs* file. Using that structure, integration tests *can* test the
library crate with `use` to make the important functionality available.
If the important functionality works, the small amount of code in the
*src/main.rs* file will work as well, and that small amount of code doesn’t
need to be tested.

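Here’s a minimal sketch of that structure for our `adder` crate; the `run`
function is a hypothetical stand-in for whatever logic the binary needs:

```
// src/lib.rs: the logic lives in the library crate, where
// integration tests can reach it with `use adder`.
pub fn run() {
    println!("2 + 2 = {}", add_two(2));
}

pub fn add_two(a: i32) -> i32 {
    a + 2
}
```

and a thin *src/main.rs* that only calls into the library:

```
// src/main.rs: nothing here needs its own tests.
fn main() {
    adder::run();
}
```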

## Summary

Rust’s testing features provide a way to specify how code should function to
ensure it continues to work as you expect, even as you make changes. Unit tests
exercise different parts of a library separately and can test private
implementation details. Integration tests check that many parts of the library
work together correctly, and they use the library’s public API to test the code
in the same way external code will use it. Even though Rust’s type system and
ownership rules help prevent some kinds of bugs, tests are still important to
reduce logic bugs having to do with how your code is expected to behave.

Let’s combine the knowledge you learned in this chapter and in previous
chapters to work on a project!