<!-- DO NOT EDIT THIS FILE.

This file is periodically generated from the content in the `/src/`
directory, so all fixes need to be made in `/src/`.
-->

[TOC]

# Writing Automated Tests

In his 1972 essay “The Humble Programmer,” Edsger W. Dijkstra said that
“Program testing can be a very effective way to show the presence of bugs, but
it is hopelessly inadequate for showing their absence.” That doesn’t mean we
shouldn’t try to test as much as we can!

Correctness in our programs is the extent to which our code does what we intend
it to do. Rust is designed with a high degree of concern about the correctness
of programs, but correctness is complex and not easy to prove. Rust’s type
system shoulders a huge part of this burden, but the type system cannot catch
everything. As such, Rust includes support for writing automated software tests.

Say we write a function `add_two` that adds 2 to whatever number is passed to
it. This function’s signature accepts an integer as a parameter and returns an
integer as a result. When we implement and compile that function, Rust does all
the type checking and borrow checking that you’ve learned so far to ensure
that, for instance, we aren’t passing a `String` value or an invalid reference
to this function. But Rust *can’t* check that this function will do precisely
what we intend, which is return the parameter plus 2 rather than, say, the
parameter plus 10 or the parameter minus 50! That’s where tests come in.

We can write tests that assert, for example, that when we pass `3` to the
`add_two` function, the returned value is `5`. We can run these tests whenever
we make changes to our code to make sure any existing correct behavior has not
changed.

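For instance, a minimal sketch of such a test, assuming an `add_two` function
like the one we’ll write later in this chapter, could look like this:

```
#[test]
fn three_plus_two_is_five() {
    // If add_two(3) returns anything other than 5, this assertion
    // panics and the test fails.
    assert_eq!(5, add_two(3));
}
```
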
Testing is a complex skill: although we can’t cover in one chapter every detail
about how to write good tests, in this chapter we will discuss the mechanics of
Rust’s testing facilities. We’ll talk about the annotations and macros
available to you when writing your tests, the default behavior and options
provided for running your tests, and how to organize tests into unit tests and
integration tests.

## How to Write Tests

Tests are Rust functions that verify that the non-test code is functioning in
the expected manner. The bodies of test functions typically perform these three
actions:

* Set up any needed data or state.
* Run the code you want to test.
* Assert that the results are what you expect.

Let’s look at the features Rust provides specifically for writing tests that
take these actions, which include the `test` attribute, a few macros, and the
`should_panic` attribute.

### The Anatomy of a Test Function

At its simplest, a test in Rust is a function that’s annotated with the `test`
attribute. Attributes are metadata about pieces of Rust code; one example is
the `derive` attribute we used with structs in Chapter 5. To change a function
into a test function, add `#[test]` on the line before `fn`. When you run your
tests with the `cargo test` command, Rust builds a test runner binary that runs
the annotated functions and reports on whether each test function passes or
fails.

Whenever we make a new library project with Cargo, a test module with a test
function in it is automatically generated for us. This module gives you a
template for writing your tests so you don’t have to look up the exact
structure and syntax every time you start a new project. You can add as many
additional test functions and as many test modules as you want!

We’ll explore some aspects of how tests work by experimenting with the template
test before we actually test any code. Then we’ll write some real-world tests
that call some code that we’ve written and assert that its behavior is correct.

Let’s create a new library project called `adder` that will add two numbers:

```
$ cargo new adder --lib
     Created library `adder` project
$ cd adder
```

The contents of the *src/lib.rs* file in your `adder` library should look like
Listing 11-1.

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    [1] #[test]
    fn it_works() {
        let result = 2 + 2;
        [2] assert_eq!(result, 4);
    }
}
```

Listing 11-1: The test module and function generated automatically by `cargo
new`

For now, let’s ignore the top two lines and focus on the function. Note the
`#[test]` annotation [1]: this attribute indicates this is a test function, so
the test runner knows to treat this function as a test. We might also have
non-test functions in the `tests` module to help set up common scenarios or
perform common operations, so we always need to indicate which functions are
tests.

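For example, a sketch of what such a helper might look like (the
`setup_numbers` name is made up for illustration): because it lacks the
`#[test]` attribute, the test runner never executes it on its own.

```
#[cfg(test)]
mod tests {
    // A plain helper function: no #[test] attribute, so it only runs
    // when a test calls it.
    fn setup_numbers() -> (i32, i32) {
        (2, 2)
    }

    #[test]
    fn it_works() {
        let (a, b) = setup_numbers();
        assert_eq!(a + b, 4);
    }
}
```
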
The example function body uses the `assert_eq!` macro [2] to assert that
`result`, which contains the result of adding 2 and 2, equals 4. This assertion
serves as an example of the format for a typical test. Let’s run it to see that
this test passes.

The `cargo test` command runs all tests in our project, as shown in Listing
11-2.

```
$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 0.57s
     Running unittests src/lib.rs (target/debug/deps/adder-
92948b65e88960b4)

[1] running 1 test
[2] test tests::it_works ... ok

[3] test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s

[4]    Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

Listing 11-2: The output from running the automatically generated test

Cargo compiled and ran the test. We see the line `running 1 test` [1]. The next
line shows the name of the generated test function, called `it_works`, and that
the result of running that test is `ok` [2]. The overall summary `test result:
ok.` [3] means that all the tests passed, and the portion that reads `1 passed;
0 failed` totals the number of tests that passed or failed.

It’s possible to mark a test as ignored so it doesn’t run in a particular
instance; we’ll cover that in “Ignoring Some Tests Unless Specifically
Requested” on page XX. Because we haven’t done that here, the summary shows `0
ignored`. We can also pass an argument to the `cargo test` command to run only
tests whose name matches a string; this is called *filtering* and we’ll cover
it in “Running a Subset of Tests by Name” on page XX. Here we haven’t filtered
the tests being run, so the end of the summary shows `0 filtered out`.

The `0 measured` statistic is for benchmark tests that measure performance.
Benchmark tests are, as of this writing, only available in nightly Rust. See
the documentation about benchmark tests at
*https://doc.rust-lang.org/unstable-book/library-features/test.html* to learn
more.

The next part of the test output starting at `Doc-tests adder` [4] is for the
results of any documentation tests. We don’t have any documentation tests yet,
but Rust can compile any code examples that appear in our API documentation.
This feature helps keep your docs and your code in sync! We’ll discuss how to
write documentation tests in “Documentation Comments as Tests” on page XX. For
now, we’ll ignore the `Doc-tests` output.

Let’s start to customize the test to our own needs. First, change the name of
the `it_works` function to a different name, such as `exploration`, like so:

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    #[test]
    fn exploration() {
        let result = 2 + 2;
        assert_eq!(result, 4);
    }
}
```

Then run `cargo test` again. The output now shows `exploration` instead of
`it_works`:

```
running 1 test
test tests::exploration ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

Now we’ll add another test, but this time we’ll make a test that fails! Tests
fail when something in the test function panics. Each test is run in a new
thread, and when the main thread sees that a test thread has died, the test is
marked as failed. In Chapter 9, we talked about how the simplest way to panic
is to call the `panic!` macro. Enter the new test as a function named
`another`, so your *src/lib.rs* file looks like Listing 11-3.

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    #[test]
    fn exploration() {
        assert_eq!(2 + 2, 4);
    }

    #[test]
    fn another() {
        panic!("Make this test fail");
    }
}
```

Listing 11-3: Adding a second test that will fail because we call the `panic!`
macro

Run the tests again using `cargo test`. The output should look like Listing
11-4, which shows that our `exploration` test passed and `another` failed.

```
running 2 tests
test tests::exploration ... ok
[1] test tests::another ... FAILED

[2] failures:

---- tests::another stdout ----
thread 'main' panicked at 'Make this test fail', src/lib.rs:10:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace

[3] failures:
    tests::another

[4] test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s

error: test failed, to rerun pass '--lib'
```

Listing 11-4: Test results when one test passes and one test fails

Instead of `ok`, the line `test tests::another` shows `FAILED` [1]. Two new
sections appear between the individual results and the summary: the first [2]
displays the detailed reason for each test failure. In this case, we get the
details that `another` failed because it `panicked at 'Make this test fail'` on
line 10 in the *src/lib.rs* file. The next section [3] lists just the names of
all the failing tests, which is useful when there are lots of tests and lots of
detailed failing test output. We can use the name of a failing test to run just
that test to more easily debug it; we’ll talk more about ways to run tests in
“Controlling How Tests Are Run” on page XX.

The summary line displays at the end [4]: overall, our test result is `FAILED`.
We had one test pass and one test fail.

Now that you’ve seen what the test results look like in different scenarios,
let’s look at some macros other than `panic!` that are useful in tests.

### Checking Results with the assert! Macro

The `assert!` macro, provided by the standard library, is useful when you want
to ensure that some condition in a test evaluates to `true`. We give the
`assert!` macro an argument that evaluates to a Boolean. If the value is
`true`, nothing happens and the test passes. If the value is `false`, the
`assert!` macro calls `panic!` to cause the test to fail. Using the `assert!`
macro helps us check that our code is functioning in the way we intend.

In Listing 5-15, we used a `Rectangle` struct and a `can_hold` method, which
are repeated here in Listing 11-5. Let’s put this code in the *src/lib.rs*
file, then write some tests for it using the `assert!` macro.

Filename: src/lib.rs

```
#[derive(Debug)]
struct Rectangle {
    width: u32,
    height: u32,
}

impl Rectangle {
    fn can_hold(&self, other: &Rectangle) -> bool {
        self.width > other.width && self.height > other.height
    }
}
```

Listing 11-5: Using the `Rectangle` struct and its `can_hold` method from
Chapter 5

The `can_hold` method returns a Boolean, which means it’s a perfect use case
for the `assert!` macro. In Listing 11-6, we write a test that exercises the
`can_hold` method by creating a `Rectangle` instance that has a width of 8 and
a height of 7 and asserting that it can hold another `Rectangle` instance that
has a width of 5 and a height of 1.

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    [1] use super::*;

    #[test]
    [2] fn larger_can_hold_smaller() {
        [3] let larger = Rectangle {
            width: 8,
            height: 7,
        };
        let smaller = Rectangle {
            width: 5,
            height: 1,
        };

        [4] assert!(larger.can_hold(&smaller));
    }
}
```

Listing 11-6: A test for `can_hold` that checks whether a larger rectangle can
indeed hold a smaller rectangle

Note that we’ve added a new line inside the `tests` module: `use super::*;`
[1]. The `tests` module is a regular module that follows the usual visibility
rules we covered in “Paths for Referring to an Item in the Module Tree” on page
XX. Because the `tests` module is an inner module, we need to bring the code
under test in the outer module into the scope of the inner module. We use a
glob here, so anything we define in the outer module is available to this
`tests` module.

We’ve named our test `larger_can_hold_smaller` [2], and we’ve created the two
`Rectangle` instances that we need [3]. Then we called the `assert!` macro and
passed it the result of calling `larger.can_hold(&smaller)` [4]. This
expression is supposed to return `true`, so our test should pass. Let’s find
out!

```
running 1 test
test tests::larger_can_hold_smaller ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

It does pass! Let’s add another test, this time asserting that a smaller
rectangle cannot hold a larger rectangle:

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn larger_can_hold_smaller() {
        --snip--
    }

    #[test]
    fn smaller_cannot_hold_larger() {
        let larger = Rectangle {
            width: 8,
            height: 7,
        };
        let smaller = Rectangle {
            width: 5,
            height: 1,
        };

        assert!(!smaller.can_hold(&larger));
    }
}
```

Because the correct result of the `can_hold` function in this case is `false`,
we need to negate that result before we pass it to the `assert!` macro. As a
result, our test will pass if `can_hold` returns `false`:

```
running 2 tests
test tests::larger_can_hold_smaller ... ok
test tests::smaller_cannot_hold_larger ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

Two tests that pass! Now let’s see what happens to our test results when we
introduce a bug in our code. We’ll change the implementation of the `can_hold`
method by replacing the greater-than sign with a less-than sign when it
compares the widths:

```
--snip--

impl Rectangle {
    fn can_hold(&self, other: &Rectangle) -> bool {
        self.width < other.width && self.height > other.height
    }
}
```

Running the tests now produces the following:

```
running 2 tests
test tests::smaller_cannot_hold_larger ... ok
test tests::larger_can_hold_smaller ... FAILED

failures:

---- tests::larger_can_hold_smaller stdout ----
thread 'main' panicked at 'assertion failed:
larger.can_hold(&smaller)', src/lib.rs:28:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace


failures:
    tests::larger_can_hold_smaller

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

Our tests caught the bug! Because `larger.width` is `8` and `smaller.width` is
`5`, the comparison of the widths in `can_hold` now returns `false`: 8 is not
less than 5.

### Testing Equality with the assert_eq! and assert_ne! Macros

A common way to verify functionality is to test for equality between the result
of the code under test and the value you expect the code to return. You could
do this by using the `assert!` macro and passing it an expression using the
`==` operator. However, this is such a common test that the standard library
provides a pair of macros—`assert_eq!` and `assert_ne!`—to perform this test
more conveniently. These macros compare two arguments for equality or
inequality, respectively. They’ll also print the two values if the assertion
fails, which makes it easier to see *why* the test failed; conversely, the
`assert!` macro only indicates that it got a `false` value for the `==`
expression, without printing the values that led to the `false` value.

In Listing 11-7, we write a function named `add_two` that adds `2` to its
parameter, then we test this function using the `assert_eq!` macro.

Filename: src/lib.rs

```
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_adds_two() {
        assert_eq!(4, add_two(2));
    }
}
```

Listing 11-7: Testing the function `add_two` using the `assert_eq!` macro

Let’s check that it passes!

```
running 1 test
test tests::it_adds_two ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

We pass `4` as the argument to `assert_eq!`, which is equal to the result of
calling `add_two(2)`. The line for this test is `test tests::it_adds_two ...
ok`, and the `ok` text indicates that our test passed!

Let’s introduce a bug into our code to see what `assert_eq!` looks like when it
fails. Change the implementation of the `add_two` function to instead add `3`:

```
pub fn add_two(a: i32) -> i32 {
    a + 3
}
```

Run the tests again:

```
running 1 test
test tests::it_adds_two ... FAILED

failures:

---- tests::it_adds_two stdout ----
[1] thread 'main' panicked at 'assertion failed: `(left == right)`
[2]   left: `4`,
[3]  right: `5`', src/lib.rs:11:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace

failures:
    tests::it_adds_two

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

Our test caught the bug! The `it_adds_two` test failed, and the message tells
us that the assertion that failed was `assertion failed: `(left == right)`` [1]
and what the `left` [2] and `right` [3] values are. This message helps us start
debugging: the `left` argument was `4` but the `right` argument, where we had
`add_two(2)`, was `5`. You can imagine that this would be especially helpful
when we have a lot of tests going on.

Note that in some languages and test frameworks, the parameters to equality
assertion functions are called `expected` and `actual`, and the order in which
we specify the arguments matters. However, in Rust, they’re called `left` and
`right`, and the order in which we specify the value we expect and the value
the code produces doesn’t matter. We could write the assertion in this test as
`assert_eq!(add_two(2), 4)`, which would result in the same failure message
that displays `assertion failed: `(left == right)``.

The `assert_ne!` macro will pass if the two values we give it are not equal and
fail if they’re equal. This macro is most useful for cases when we’re not sure
what a value *will* be, but we know what the value definitely *shouldn’t* be.
For example, if we’re testing a function that is guaranteed to change its input
in some way, but the way in which the input is changed depends on the day of
the week that we run our tests, the best thing to assert might be that the
output of the function is not equal to the input.

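As a quick sketch of that idea (the `shuffle_seed` function here is
hypothetical), a test can pin down what the output must *not* be:

```
fn shuffle_seed(input: i32) -> i32 {
    // Stand-in for a function that changes its input in some
    // environment-dependent way.
    input.wrapping_add(1)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn output_differs_from_input() {
        let input = 7;
        // We don't know the exact output, only that it must differ.
        assert_ne!(input, shuffle_seed(input));
    }
}
```
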
Under the surface, the `assert_eq!` and `assert_ne!` macros use the operators
`==` and `!=`, respectively. When the assertions fail, these macros print their
arguments using debug formatting, which means the values being compared must
implement the `PartialEq` and `Debug` traits. All primitive types and most of
the standard library types implement these traits. For structs and enums that
you define yourself, you’ll need to implement `PartialEq` to assert equality of
those types. You’ll also need to implement `Debug` to print the values when the
assertion fails. Because both traits are derivable traits, as mentioned in
Listing 5-12, this is usually as straightforward as adding the
`#[derive(PartialEq, Debug)]` annotation to your struct or enum definition. See
Appendix C for more details about these and other derivable traits.

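As a minimal sketch, here’s what that derive looks like on a custom struct
(the `Point` type is just for illustration):

```
#[derive(PartialEq, Debug)]
struct Point {
    x: i32,
    y: i32,
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn points_are_equal() {
        // PartialEq lets assert_eq! compare the two values; Debug lets
        // the macro print them if the assertion fails.
        assert_eq!(Point { x: 1, y: 2 }, Point { x: 1, y: 2 });
    }
}
```
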
### Adding Custom Failure Messages

You can also add a custom message to be printed with the failure message as
optional arguments to the `assert!`, `assert_eq!`, and `assert_ne!` macros. Any
arguments specified after the required arguments are passed along to the
`format!` macro (discussed in “Concatenation with the + Operator or the format!
Macro” on page XX), so you can pass a format string that contains `{}`
placeholders and values to go in those placeholders. Custom messages are useful
for documenting what an assertion means; when a test fails, you’ll have a
better idea of what the problem is with the code.

For example, let’s say we have a function that greets people by name and we
want to test that the name we pass into the function appears in the output:

Filename: src/lib.rs

```
pub fn greeting(name: &str) -> String {
    format!("Hello {name}!")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn greeting_contains_name() {
        let result = greeting("Carol");
        assert!(result.contains("Carol"));
    }
}
```

The requirements for this program haven’t been agreed upon yet, and we’re
pretty sure the `Hello` text at the beginning of the greeting will change. We
decided we don’t want to have to update the test when the requirements change,
so instead of checking for exact equality to the value returned from the
`greeting` function, we’ll just assert that the output contains the text of the
input parameter.

Now let’s introduce a bug into this code by changing `greeting` to exclude
`name` to see what the default test failure looks like:

```
pub fn greeting(name: &str) -> String {
    String::from("Hello!")
}
```

Running this test produces the following:

```
running 1 test
test tests::greeting_contains_name ... FAILED

failures:

---- tests::greeting_contains_name stdout ----
thread 'main' panicked at 'assertion failed:
result.contains(\"Carol\")', src/lib.rs:12:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace


failures:
    tests::greeting_contains_name
```

This result just indicates that the assertion failed and which line the
assertion is on. A more useful failure message would print the value from the
`greeting` function. Let’s add a custom failure message composed of a format
string with a placeholder filled in with the actual value we got from the
`greeting` function:

```
#[test]
fn greeting_contains_name() {
    let result = greeting("Carol");
    assert!(
        result.contains("Carol"),
        "Greeting did not contain name, value was `{result}`"
    );
}
```

Now when we run the test, we’ll get a more informative error message:

```
---- tests::greeting_contains_name stdout ----
thread 'main' panicked at 'Greeting did not contain name, value
was `Hello!`', src/lib.rs:12:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace
```

We can see the value we actually got in the test output, which would help us
debug what happened instead of what we were expecting to happen.

### Checking for Panics with should_panic

In addition to checking return values, it’s important to check that our code
handles error conditions as we expect. For example, consider the `Guess` type
that we created in Listing 9-13. Other code that uses `Guess` depends on the
guarantee that `Guess` instances will contain only values between 1 and 100. We
can write a test that ensures that attempting to create a `Guess` instance with
a value outside that range panics.

We do this by adding the attribute `should_panic` to our test function. The
test passes if the code inside the function panics; the test fails if the code
inside the function doesn’t panic.

Listing 11-8 shows a test that checks that the error conditions of `Guess::new`
happen when we expect them to.

```
// src/lib.rs
pub struct Guess {
    value: i32,
}

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 || value > 100 {
            panic!(
                "Guess value must be between 1 and 100, got {}.",
                value
            );
        }

        Guess { value }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    #[should_panic]
    fn greater_than_100() {
        Guess::new(200);
    }
}
```

Listing 11-8: Testing that a condition will cause a panic!

We place the `#[should_panic]` attribute after the `#[test]` attribute and
before the test function it applies to. Let’s look at the result when this test
passes:

```
running 1 test
test tests::greater_than_100 - should panic ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

Looks good! Now let’s introduce a bug in our code by removing the condition
that the `new` function will panic if the value is greater than 100:

```
// src/lib.rs
--snip--

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 {
            panic!(
                "Guess value must be between 1 and 100, got {}.",
                value
            );
        }

        Guess { value }
    }
}
```

When we run the test in Listing 11-8, it will fail:

```
running 1 test
test tests::greater_than_100 - should panic ... FAILED

failures:

---- tests::greater_than_100 stdout ----
note: test did not panic as expected

failures:
    tests::greater_than_100

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

We don’t get a very helpful message in this case, but when we look at the test
function, we see that it’s annotated with `#[should_panic]`. The failure we got
means that the code in the test function did not cause a panic.

Tests that use `should_panic` can be imprecise. A `should_panic` test would
pass even if the test panics for a different reason from the one we were
expecting. To make `should_panic` tests more precise, we can add an optional
`expected` parameter to the `should_panic` attribute. The test harness will
make sure that the failure message contains the provided text. For example,
consider the modified code for `Guess` in Listing 11-9 where the `new` function
panics with different messages depending on whether the value is too small or
too large.

```
// src/lib.rs
--snip--

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 {
            panic!(
                "Guess value must be greater than or equal to 1, got {}.",
                value
            );
        } else if value > 100 {
            panic!(
                "Guess value must be less than or equal to 100, got {}.",
                value
            );
        }

        Guess { value }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    #[should_panic(expected = "less than or equal to 100")]
    fn greater_than_100() {
        Guess::new(200);
    }
}
```

Listing 11-9: Testing for a `panic!` with a panic message containing a
specified substring

This test will pass because the value we put in the `should_panic` attribute’s
`expected` parameter is a substring of the message that the `Guess::new`
function panics with. We could have specified the entire panic message that we
expect, which in this case would be `Guess value must be less than or equal to
100, got 200`. What you choose to specify depends on how much of the panic
message is unique or dynamic and how precise you want your test to be. In this
case, a substring of the panic message is enough to ensure that the code in the
test function executes the `else if value > 100` case.

To see what happens when a `should_panic` test with an `expected` message
fails, let’s again introduce a bug into our code by swapping the bodies of the
`if value < 1` and the `else if value > 100` blocks:

```
// src/lib.rs
--snip--
        if value < 1 {
            panic!(
                "Guess value must be less than or equal to 100, got {}.",
                value
            );
        } else if value > 100 {
            panic!(
                "Guess value must be greater than or equal to 1, got {}.",
                value
            );
        }
--snip--
```

This time when we run the `should_panic` test, it will fail:

```
running 1 test
test tests::greater_than_100 - should panic ... FAILED

failures:

---- tests::greater_than_100 stdout ----
thread 'main' panicked at 'Guess value must be greater than or equal to 1, got
200.', src/lib.rs:13:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
note: panic did not contain expected string
      panic message: `"Guess value must be greater than or equal to 1, got
200."`,
 expected substring: `"less than or equal to 100"`

failures:
    tests::greater_than_100

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out;
finished in 0.00s
```

The failure message indicates that this test did indeed panic as we expected,
but the panic message did not include the expected string `'Guess value must be
less than or equal to 100'`. The panic message that we did get in this case was
`Guess value must be greater than or equal to 1, got 200`. Now we can start
figuring out where our bug is!

### Using Result<T, E> in Tests

Our tests so far all panic when they fail. We can also write tests that use
`Result<T, E>`! Here’s the test from Listing 11-1, rewritten to use `Result<T,
E>` and return an `Err` instead of panicking:

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() -> Result<(), String> {
        if 2 + 2 == 4 {
            Ok(())
        } else {
            Err(String::from("two plus two does not equal four"))
        }
    }
}
```

The `it_works` function now has the `Result<(), String>` return type. In the
body of the function, rather than calling the `assert_eq!` macro, we return
`Ok(())` when the test passes and an `Err` with a `String` inside when the test
fails.

Writing tests so they return a `Result<T, E>` enables you to use the question
mark operator in the body of tests, which can be a convenient way to write
tests that should fail if any operation within them returns an `Err` variant.

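For instance, here’s a sketch of a test that propagates errors with the
question mark operator (the parsing is just an illustration):

```
#[cfg(test)]
mod tests {
    #[test]
    fn parses_number() -> Result<(), std::num::ParseIntError> {
        // If parse returns Err, `?` returns that error from the test
        // and the test fails.
        let n: i32 = "42".parse()?;
        assert_eq!(n, 42);
        Ok(())
    }
}
```
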
You can’t use the `#[should_panic]` annotation on tests that use `Result<T,
E>`. To assert that an operation returns an `Err` variant, *don’t* use the
question mark operator on the `Result<T, E>` value. Instead, use
`assert!(value.is_err())`.

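A sketch of that pattern, again using parsing as a stand-in for a fallible
operation:

```
#[test]
fn parse_fails_on_words() {
    let result = "not a number".parse::<i32>();
    // Assert on the Err variant directly instead of using `?`,
    // which would make the test fail here.
    assert!(result.is_err());
}
```
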
Now that you know several ways to write tests, let’s look at what is happening
when we run our tests and explore the different options we can use with `cargo
test`.

## Controlling How Tests Are Run

Just as `cargo run` compiles your code and then runs the resultant binary,
`cargo test` compiles your code in test mode and runs the resultant test
binary. The default behavior of the binary produced by `cargo test` is to run
all the tests in parallel and capture output generated during test runs,
preventing the output from being displayed and making it easier to read the
output related to the test results. You can, however, specify command line
options to change this default behavior.

Some command line options go to `cargo test`, and some go to the resultant test
binary. To separate these two types of arguments, you list the arguments that
go to `cargo test` followed by the separator `--` and then the ones that go to
the test binary. Running `cargo test --help` displays the options you can use
with `cargo test`, and running `cargo test -- --help` displays the options you
can use after the separator.

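As a small example of the separator, this command passes `--lib` to `cargo
test` itself and `--test-threads=1` through the separator to the test binary:

```
$ cargo test --lib -- --test-threads=1
```
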
### Running Tests in Parallel or Consecutively

When you run multiple tests, by default they run in parallel using threads,
meaning they finish running faster and you get feedback quicker. Because the
tests are running at the same time, you must make sure your tests don’t depend
on each other or on any shared state, including a shared environment, such as
the current working directory or environment variables.

For example, say each of your tests runs some code that creates a file on disk
named *test-output.txt* and writes some data to that file. Then each test reads
the data in that file and asserts that the file contains a particular value,
which is different in each test. Because the tests run at the same time, one
test might overwrite the file in the time between another test writing and
reading the file. The second test will then fail, not because the code is
incorrect but because the tests have interfered with each other while running
in parallel. One solution is to make sure each test writes to a different file;
another solution is to run the tests one at a time.

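A sketch of the first solution, where each test derives a unique filename (the
names here are illustrative) so that parallel runs can’t collide:

```
use std::fs;

#[test]
fn writes_its_own_file() {
    // A per-test filename prevents interference between tests
    // running in parallel.
    let path = "test-output-writes_its_own_file.txt";
    fs::write(path, "expected value").unwrap();
    let contents = fs::read_to_string(path).unwrap();
    assert_eq!(contents, "expected value");
    fs::remove_file(path).unwrap();
}
```
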
If you don’t want to run the tests in parallel or if you want more fine-grained
control over the number of threads used, you can send the `--test-threads` flag
and the number of threads you want to use to the test binary. Take a look at
the following example:

```
$ cargo test -- --test-threads=1
```

We set the number of test threads to `1`, telling the program not to use any
parallelism. Running the tests using one thread will take longer than running
them in parallel, but the tests won’t interfere with each other if they share
state.

### Showing Function Output

By default, if a test passes, Rust’s test library captures anything printed to
standard output. For example, if we call `println!` in a test and the test
passes, we won’t see the `println!` output in the terminal; we’ll see only the
line that indicates the test passed. If a test fails, we’ll see whatever was
printed to standard output with the rest of the failure message.

As an example, Listing 11-10 has a silly function that prints the value of its
parameter and returns 10, as well as a test that passes and a test that fails.

Filename: src/lib.rs

```
fn prints_and_returns_10(a: i32) -> i32 {
    println!("I got the value {a}");
    10
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn this_test_will_pass() {
        let value = prints_and_returns_10(4);
        assert_eq!(10, value);
    }

    #[test]
    fn this_test_will_fail() {
        let value = prints_and_returns_10(8);
        assert_eq!(5, value);
    }
}
```

Listing 11-10: Tests for a function that calls `println!`

When we run these tests with `cargo test`, we’ll see the following output:

```
running 2 tests
test tests::this_test_will_pass ... ok
test tests::this_test_will_fail ... FAILED

failures:

---- tests::this_test_will_fail stdout ----
[1] I got the value 8
thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `5`,
 right: `10`', src/lib.rs:19:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace

failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

Note that nowhere in this output do we see `I got the value 4`, which is
printed when the test that passes runs. That output has been captured. The
output from the test that failed, `I got the value 8` [1], appears in the
section of the test summary output, which also shows the cause of the test
failure.

If we want to see printed values for passing tests as well, we can tell Rust to
also show the output of successful tests with `--show-output`:

```
$ cargo test -- --show-output
```

When we run the tests in Listing 11-10 again with the `--show-output` flag, we
see the following output:

```
running 2 tests
test tests::this_test_will_pass ... ok
test tests::this_test_will_fail ... FAILED

successes:

---- tests::this_test_will_pass stdout ----
I got the value 4


successes:
    tests::this_test_will_pass

failures:

---- tests::this_test_will_fail stdout ----
I got the value 8
thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `5`,
 right: `10`', src/lib.rs:19:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace

failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

### Running a Subset of Tests by Name

Sometimes, running a full test suite can take a long time. If you’re working on
code in a particular area, you might want to run only the tests pertaining to
that code. You can choose which tests to run by passing `cargo test` the name
or names of the test(s) you want to run as an argument.

To demonstrate how to run a subset of tests, we’ll first create three tests for
our `add_two` function, as shown in Listing 11-11, and choose which ones to run.

Filename: src/lib.rs

```
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn add_two_and_two() {
        assert_eq!(4, add_two(2));
    }

    #[test]
    fn add_three_and_two() {
        assert_eq!(5, add_two(3));
    }

    #[test]
    fn one_hundred() {
        assert_eq!(102, add_two(100));
    }
}
```

Listing 11-11: Three tests with three different names

If we run the tests without passing any arguments, as we saw earlier, all the
tests will run in parallel:

```
running 3 tests
test tests::add_three_and_two ... ok
test tests::add_two_and_two ... ok
test tests::one_hundred ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

#### Running Single Tests

We can pass the name of any test function to `cargo test` to run only that test:

```
$ cargo test one_hundred
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 0.69s
     Running unittests src/lib.rs (target/debug/deps/adder-
92948b65e88960b4)

running 1 test
test tests::one_hundred ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 2
filtered out; finished in 0.00s
```

Only the test with the name `one_hundred` ran; the other two tests didn’t match
that name. The test output lets us know we had more tests that didn’t run by
displaying `2 filtered out` at the end.

We can’t specify the names of multiple tests in this way; only the first value
given to `cargo test` will be used. But there is a way to run multiple tests.

#### Filtering to Run Multiple Tests

We can specify part of a test name, and any test whose name matches that value
will be run. For example, because two of our tests’ names contain `add`, we can
run those two by running `cargo test add`:

```
$ cargo test add
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 0.61s
     Running unittests src/lib.rs (target/debug/deps/adder-
92948b65e88960b4)

running 2 tests
test tests::add_three_and_two ... ok
test tests::add_two_and_two ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 1
filtered out; finished in 0.00s
```

This command ran all tests with `add` in the name and filtered out the test
named `one_hundred`. Also note that the module in which a test appears becomes
part of the test’s name, so we can run all the tests in a module by filtering
on the module’s name.

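For example, because all three of our tests live in the `tests` module, a
filter on the module name runs all of them; the filter is a substring match
against the full test path, such as `tests::one_hundred`:

```
$ cargo test tests::
```
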
### Ignoring Some Tests Unless Specifically Requested

Sometimes a few specific tests can be very time-consuming to execute, so you
might want to exclude them during most runs of `cargo test`. Rather than
listing as arguments all tests you do want to run, you can instead annotate the
time-consuming tests using the `ignore` attribute to exclude them, as shown
here:

Filename: src/lib.rs

```
#[test]
fn it_works() {
    let result = 2 + 2;
    assert_eq!(result, 4);
}

#[test]
#[ignore]
fn expensive_test() {
    // code that takes an hour to run
}
```

After `#[test]`, we add the `#[ignore]` line to the test we want to exclude.
Now when we run our tests, `it_works` runs, but `expensive_test` doesn’t:

```
$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 0.60s
     Running unittests src/lib.rs (target/debug/deps/adder-
92948b65e88960b4)

running 2 tests
test expensive_test ... ignored
test it_works ... ok

test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

The `expensive_test` function is listed as `ignored`. If we want to run only
the ignored tests, we can use `cargo test -- --ignored`:

```
$ cargo test -- --ignored
    Finished test [unoptimized + debuginfo] target(s) in 0.61s
     Running unittests src/lib.rs (target/debug/deps/adder-
92948b65e88960b4)

running 1 test
test expensive_test ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 1
filtered out; finished in 0.00s
```

By controlling which tests run, you can make sure your `cargo test` results
will be returned quickly. When you’re at a point where it makes sense to check
the results of the `ignored` tests and you have time to wait for the results,
you can run `cargo test -- --ignored` instead. If you want to run all tests
whether they’re ignored or not, you can run `cargo test -- --include-ignored`.

## Test Organization

As mentioned at the start of the chapter, testing is a complex discipline, and
different people use different terminology and organization. The Rust community
thinks about tests in terms of two main categories: unit tests and integration
tests. *Unit tests* are small and more focused, testing one module in isolation
at a time, and can test private interfaces. *Integration tests* are entirely
external to your library and use your code in the same way any other external
code would, using only the public interface and potentially exercising multiple
modules per test.

Writing both kinds of tests is important to ensure that the pieces of your
library are doing what you expect them to, separately and together.

### Unit Tests

The purpose of unit tests is to test each unit of code in isolation from the
rest of the code to quickly pinpoint where code is and isn’t working as
expected. You’ll put unit tests in the *src* directory in each file with the
code that they’re testing. The convention is to create a module named `tests`
in each file to contain the test functions and to annotate the module with
`cfg(test)`.

#### The Tests Module and #[cfg(test)]

The `#[cfg(test)]` annotation on the `tests` module tells Rust to compile and
run the test code only when you run `cargo test`, not when you run `cargo
build`. This saves compile time when you only want to build the library and
saves space in the resultant compiled artifact because the tests are not
included. You’ll see that because integration tests go in a different
directory, they don’t need the `#[cfg(test)]` annotation. However, because unit
tests go in the same files as the code, you’ll use `#[cfg(test)]` to specify
that they shouldn’t be included in the compiled result.

Recall that when we generated the new `adder` project in the first section of
this chapter, Cargo generated this code for us:

Filename: src/lib.rs

```
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        let result = 2 + 2;
        assert_eq!(result, 4);
    }
}
```

This code is the automatically generated `tests` module. The attribute `cfg`
stands for *configuration* and tells Rust that the following item should only
be included given a certain configuration option. In this case, the
configuration option is `test`, which is provided by Rust for compiling and
running tests. By using the `cfg` attribute, Cargo compiles our test code only
if we actively run the tests with `cargo test`. This includes any helper
functions that might be within this module, in addition to the functions
annotated with `#[test]`.

#### Testing Private Functions

There’s debate within the testing community about whether or not private
functions should be tested directly, and other languages make it difficult or
impossible to test private functions. Regardless of which testing ideology you
adhere to, Rust’s privacy rules do allow you to test private functions.
Consider the code in Listing 11-12 with the private function `internal_adder`.

Filename: src/lib.rs

```
pub fn add_two(a: i32) -> i32 {
    internal_adder(a, 2)
}

fn internal_adder(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn internal() {
        assert_eq!(4, internal_adder(2, 2));
    }
}
```

Listing 11-12: Testing a private function

Note that the `internal_adder` function is not marked as `pub`. Tests are just
Rust code, and the `tests` module is just another module. As we discussed in
“Paths for Referring to an Item in the Module Tree” on page XX, items in child
modules can use the items in their ancestor modules. In this test, we bring all
of the `tests` module’s parent’s items into scope with `use super::*`, and then
the test can call `internal_adder`. If you don’t think private functions should
be tested, there’s nothing in Rust that will compel you to do so.

### Integration Tests

In Rust, integration tests are entirely external to your library. They use your
library in the same way any other code would, which means they can only call
functions that are part of your library’s public API. Their purpose is to test
whether many parts of your library work together correctly. Units of code that
work correctly on their own could have problems when integrated, so test
coverage of the integrated code is important as well. To create integration
tests, you first need a *tests* directory.

#### The tests Directory

We create a *tests* directory at the top level of our project directory, next
to *src*. Cargo knows to look for integration test files in this directory. We
can then make as many test files as we want, and Cargo will compile each of the
files as an individual crate.

Let’s create an integration test. With the code in Listing 11-12 still in the
*src/lib.rs* file, make a *tests* directory, and create a new file named
*tests/integration_test.rs*. Your directory structure should look like this:

```
adder
├── Cargo.lock
├── Cargo.toml
├── src
│   └── lib.rs
└── tests
    └── integration_test.rs
```

Enter the code in Listing 11-13 into the *tests/integration_test.rs* file.

Filename: tests/integration_test.rs

```
use adder;

#[test]
fn it_adds_two() {
    assert_eq!(4, adder::add_two(2));
}
```

Listing 11-13: An integration test of a function in the `adder` crate

Each file in the *tests* directory is a separate crate, so we need to bring our
library into each test crate’s scope. For that reason we add `use adder;` at
the top of the code, which we didn’t need in the unit tests.

We don’t need to annotate any code in *tests/integration_test.rs* with
`#[cfg(test)]`. Cargo treats the *tests* directory specially and compiles files
in this directory only when we run `cargo test`. Run `cargo test` now:

```
$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 1.31s
     Running unittests src/lib.rs (target/debug/deps/adder-
1082c4b063a8fbe6)

[1] running 1 test
test tests::internal ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s

[2]      Running tests/integration_test.rs
(target/debug/deps/integration_test-1082c4b063a8fbe6)

running 1 test
[3] test it_adds_two ... ok

[4] test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

The three sections of output include the unit tests, the integration test, and
the doc tests. Note that if any test in a section fails, the following sections
will not be run. For example, if a unit test fails, there won’t be any output
for integration and doc tests because those tests will only be run if all unit
tests are passing.

The first section for the unit tests [1] is the same as we’ve been seeing: one
line for each unit test (one named `internal` that we added in Listing 11-12)
and then a summary line for the unit tests.

The integration tests section starts with the line `Running
tests/integration_test.rs` [2]. Next, there is a line for each test function in
that integration test [3] and a summary line for the results of the integration
test [4] just before the `Doc-tests adder` section starts.

Each integration test file has its own section, so if we add more files in the
*tests* directory, there will be more integration test sections.

We can still run a particular integration test function by specifying the test
function’s name as an argument to `cargo test`. To run all the tests in a
particular integration test file, use the `--test` argument of `cargo test`
followed by the name of the file:

```
$ cargo test --test integration_test
    Finished test [unoptimized + debuginfo] target(s) in 0.64s
     Running tests/integration_test.rs
(target/debug/deps/integration_test-82e7799c1bc62298)

running 1 test
test it_adds_two ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

This command runs only the tests in the *tests/integration_test.rs* file.

#### Submodules in Integration Tests

As you add more integration tests, you might want to make more files in the
*tests* directory to help organize them; for example, you can group the test
functions by the functionality they’re testing. As mentioned earlier, each file
in the *tests* directory is compiled as its own separate crate, which is useful
for creating separate scopes to more closely imitate the way end users will be
using your crate. However, this means files in the *tests* directory don’t
share the same behavior as files in *src* do, as you learned in Chapter 7
regarding how to separate code into modules and files.

The different behavior of *tests* directory files is most noticeable when you
have a set of helper functions to use in multiple integration test files and
you try to follow the steps in “Separating Modules into Different Files” on
page XX to extract them into a common module. For example, if we create
*tests/common.rs* and place a function named `setup` in it, we can add some
code to `setup` that we want to call from multiple test functions in multiple
test files:

Filename: tests/common.rs

```
pub fn setup() {
    // setup code specific to your library's tests would go here
}
```

When we run the tests again, we’ll see a new section in the test output for the
*common.rs* file, even though this file doesn’t contain any test functions nor
did we call the `setup` function from anywhere:

```
running 1 test
test tests::internal ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s

     Running tests/common.rs (target/debug/deps/common-
92948b65e88960b4)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s

     Running tests/integration_test.rs
(target/debug/deps/integration_test-92948b65e88960b4)

running 1 test
test it_adds_two ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```

Having `common` appear in the test results with `running 0 tests` displayed for
it is not what we wanted. We just wanted to share some code with the other
integration test files. To avoid having `common` appear in the test output,
instead of creating *tests/common.rs*, we’ll create *tests/common/mod.rs*. The
project directory now looks like this:

```
├── Cargo.lock
├── Cargo.toml
├── src
│   └── lib.rs
└── tests
    ├── common
    │   └── mod.rs
    └── integration_test.rs
```

This is the older naming convention that Rust also understands that we
mentioned in “Alternate File Paths” on page XX. Naming the file this way tells
Rust not to treat the `common` module as an integration test file. When we move
the `setup` function code into *tests/common/mod.rs* and delete the
*tests/common.rs* file, the section in the test output will no longer appear.
Files in subdirectories of the *tests* directory don’t get compiled as separate
crates or have sections in the test output.

After we’ve created *tests/common/mod.rs*, we can use it from any of the
integration test files as a module. Here’s an example of calling the `setup`
function from the `it_adds_two` test in *tests/integration_test.rs*:

Filename: tests/integration_test.rs

```
use adder;

mod common;

#[test]
fn it_adds_two() {
    common::setup();
    assert_eq!(4, adder::add_two(2));
}
```

Note that the `mod common;` declaration is the same as the module declaration
we demonstrated in Listing 7-21. Then, in the test function, we can call the
`common::setup()` function.

#### Integration Tests for Binary Crates

If our project is a binary crate that only contains a *src/main.rs* file and
doesn’t have a *src/lib.rs* file, we can’t create integration tests in the
*tests* directory and bring functions defined in the *src/main.rs* file into
scope with a `use` statement. Only library crates expose functions that other
crates can use; binary crates are meant to be run on their own.

This is one of the reasons Rust projects that provide a binary have a
straightforward *src/main.rs* file that calls logic that lives in the
*src/lib.rs* file. Using that structure, integration tests *can* test the
library crate with `use` to make the important functionality available. If the
important functionality works, the small amount of code in the *src/main.rs*
file will work as well, and that small amount of code doesn’t need to be tested.

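A sketch of that structure for our `adder` example, with a hypothetical `run`
function as the library’s entry point:

Filename: src/lib.rs

```
// All the logic lives in the library crate, where integration
// tests can reach it via `use adder;`.
pub fn add_two(a: i32) -> i32 {
    a + 2
}

pub fn run() {
    println!("2 + 2 = {}", add_two(2));
}
```

Filename: src/main.rs

```
// The binary is a thin wrapper that just calls into the library.
fn main() {
    adder::run();
}
```
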
## Summary

Rust’s testing features provide a way to specify how code should function to
ensure it continues to work as you expect, even as you make changes. Unit tests
exercise different parts of a library separately and can test private
implementation details. Integration tests check that many parts of the library
work together correctly, and they use the library’s public API to test the code
in the same way external code will use it. Even though Rust’s type system and
ownership rules help prevent some kinds of bugs, tests are still important to
reduce logic bugs having to do with how your code is expected to behave.

Let’s combine the knowledge you learned in this chapter and in previous
chapters to work on a project!