# Advanced googletest Topics

<!-- GOOGLETEST_CM0016 DO NOT DELETE -->

<!-- GOOGLETEST_CM0035 DO NOT DELETE -->

## Introduction

Now that you have read the [googletest Primer](primer.md) and learned how to
write tests using googletest, it's time to learn some new tricks. This document
will show you more assertions as well as how to construct complex failure
messages, propagate fatal failures, reuse and speed up your test fixtures, and
use various flags with your tests.

## More Assertions

This section covers some less frequently used, but still significant,
assertions.

### Explicit Success and Failure

These three assertions do not actually test a value or expression. Instead, they
generate a success or failure directly. Like the macros that actually perform a
test, you may stream a custom failure message into them.

```c++
SUCCEED();
```

Generates a success. This does **NOT** make the overall test succeed. A test is
considered successful only if none of its assertions fail during its execution.

NOTE: `SUCCEED()` is purely documentary and currently doesn't generate any
user-visible output. However, we may add `SUCCEED()` messages to googletest's
output in the future.

```c++
FAIL();
ADD_FAILURE();
ADD_FAILURE_AT("file_path", line_number);
```

`FAIL()` generates a fatal failure, while `ADD_FAILURE()` and `ADD_FAILURE_AT()`
generate a nonfatal failure. These are useful when control flow, rather than a
Boolean expression, determines the test's success or failure. For example, you
might want to write something like:

```c++
switch(expression) {
  case 1:
    ... some checks ...
  case 2:
    ... some other checks ...
  default:
    FAIL() << "We shouldn't get here.";
}
```

NOTE: you can only use `FAIL()` in functions that return `void`. See the
[Assertion Placement section](#assertion-placement) for more information.

### Exception Assertions

These are for verifying that a piece of code throws (or does not throw) an
exception of the given type:

Fatal assertion                            | Nonfatal assertion                         | Verifies
------------------------------------------ | ------------------------------------------ | --------
`ASSERT_THROW(statement, exception_type);` | `EXPECT_THROW(statement, exception_type);` | `statement` throws an exception of the given type
`ASSERT_ANY_THROW(statement);`             | `EXPECT_ANY_THROW(statement);`             | `statement` throws an exception of any type
`ASSERT_NO_THROW(statement);`              | `EXPECT_NO_THROW(statement);`              | `statement` doesn't throw any exception

Examples:

```c++
ASSERT_THROW(Foo(5), bar_exception);

EXPECT_NO_THROW({
  int n = 5;
  Bar(&n);
});
```

**Availability**: requires exceptions to be enabled in the build environment

### Predicate Assertions for Better Error Messages

Even though googletest has a rich set of assertions, they can never be complete,
as it's impossible (and not a good idea) to anticipate all scenarios a user
might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check
a complex expression, for lack of a better macro. This has the problem of not
showing you the values of the parts of the expression, making it hard to
understand what went wrong. As a workaround, some users choose to construct the
failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this
is awkward, especially when the expression has side-effects or is expensive to
evaluate.

googletest gives you three different options to solve this problem:

#### Using an Existing Boolean Function

If you already have a function or functor that returns `bool` (or a type that
can be implicitly converted to `bool`), you can use it in a *predicate
assertion* to get the function arguments printed for free:

<!-- mdformat off(github rendering does not support multiline tables) -->

| Fatal assertion                   | Nonfatal assertion                | Verifies                    |
| --------------------------------- | --------------------------------- | --------------------------- |
| `ASSERT_PRED1(pred1, val1)`       | `EXPECT_PRED1(pred1, val1)`       | `pred1(val1)` is true       |
| `ASSERT_PRED2(pred2, val1, val2)` | `EXPECT_PRED2(pred2, val1, val2)` | `pred2(val1, val2)` is true |
| `...`                             | `...`                             | `...`                       |

<!-- mdformat on-->

In the above, `predn` is an `n`-ary predicate function or functor, where `val1`,
`val2`, ..., and `valn` are its arguments. The assertion succeeds if the
predicate returns `true` when applied to the given arguments, and fails
otherwise. When the assertion fails, it prints the value of each argument. In
either case, the arguments are evaluated exactly once.

Here's an example. Given

```c++
// Returns true if m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { ... }

const int a = 3;
const int b = 4;
const int c = 10;
```

the assertion

```c++
EXPECT_PRED2(MutuallyPrime, a, b);
```

will succeed, while the assertion

```c++
EXPECT_PRED2(MutuallyPrime, b, c);
```

will fail with the message

```none
MutuallyPrime(b, c) is false, where
b is 4
c is 10
```

> NOTE:
>
> 1.  If you see a compiler error "no matching function to call" when using
>     `ASSERT_PRED*` or `EXPECT_PRED*`, please see
>     [this](faq.md#the-compiler-complains-no-matching-function-to-call-when-i-use-assert-pred-how-do-i-fix-it)
>     for how to resolve it.
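
The body of `MutuallyPrime()` is elided above. A minimal sketch, assuming only
its documented contract (true when the sole common divisor is 1), could use
C++17's `std::gcd`:

```c++
#include <numeric>  // std::gcd (C++17)

// Returns true if m and n have no common divisors except 1.
// Illustrative sketch only; the document elides the real body.
bool MutuallyPrime(int m, int n) {
  return std::gcd(m, n) == 1;
}
```

With the constants above, `MutuallyPrime(a, b)` holds for `a = 3, b = 4`, while
`MutuallyPrime(b, c)` fails for `b = 4, c = 10` (common divisor 2).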

#### Using a Function That Returns an AssertionResult

While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not
satisfactory: you have to use different macros for different arities, and it
feels more like Lisp than C++. The `::testing::AssertionResult` class solves
this problem.

An `AssertionResult` object represents the result of an assertion (whether it's
a success or a failure, and an associated message). You can create an
`AssertionResult` using one of these factory functions:

```c++
namespace testing {

// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();

}
```

You can then use the `<<` operator to stream messages to the `AssertionResult`
object.
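
To illustrate the streaming mechanism, here is a simplified stand-in for the
`AssertionResult` idea (this is *not* googletest's actual implementation): a
success flag plus a message accumulated via `operator<<`:

```c++
#include <sstream>
#include <string>

// Simplified stand-in for ::testing::AssertionResult, for illustration only.
class AssertionResult {
 public:
  explicit AssertionResult(bool success) : success_(success) {}
  operator bool() const { return success_; }  // lets EXPECT_TRUE-style checks test it
  std::string message() const { return message_; }

  // Stream anything printable into the result's message.
  template <typename T>
  AssertionResult& operator<<(const T& value) {
    std::ostringstream os;
    os << value;
    message_ += os.str();
    return *this;
  }

 private:
  bool success_;
  std::string message_;
};

AssertionResult AssertionSuccess() { return AssertionResult(true); }
AssertionResult AssertionFailure() { return AssertionResult(false); }
```

Usage then mirrors the factory functions above, e.g.
`AssertionFailure() << n << " is odd"`.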

To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
write a predicate function that returns `AssertionResult` instead of `bool`. For
example, if you define `IsEven()` as:

```c++
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess();
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
```

instead of:

```c++
bool IsEven(int n) {
  return (n % 2) == 0;
}
```

the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:

```none
Value of: IsEven(Fib(4))
  Actual: false (3 is odd)
Expected: true
```

instead of a more opaque

```none
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```

If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well
(one third of Boolean assertions in the Google code base are negative ones), and
are fine with making the predicate slower in the success case, you can supply a
success message:

```c++
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess() << n << " is even";
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
```

Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print

```none
Value of: IsEven(Fib(6))
  Actual: true (8 is even)
Expected: false
```

#### Using a Predicate-Formatter

If you find the default message generated by `(ASSERT|EXPECT)_PRED*` and
`(ASSERT|EXPECT)_(TRUE|FALSE)` unsatisfactory, or some arguments to your
predicate do not support streaming to `ostream`, you can instead use the
following *predicate-formatter assertions* to *fully* customize how the message
is formatted:

Fatal assertion                                  | Nonfatal assertion                               | Verifies
------------------------------------------------ | ------------------------------------------------ | --------
`ASSERT_PRED_FORMAT1(pred_format1, val1);`       | `EXPECT_PRED_FORMAT1(pred_format1, val1);`       | `pred_format1(val1)` is successful
`ASSERT_PRED_FORMAT2(pred_format2, val1, val2);` | `EXPECT_PRED_FORMAT2(pred_format2, val1, val2);` | `pred_format2(val1, val2)` is successful
`...`                                            | `...`                                            | ...

The difference between this and the previous group of macros is that instead of
a predicate, `(ASSERT|EXPECT)_PRED_FORMAT*` take a *predicate-formatter*
(`pred_formatn`), which is a function or functor with the signature:

```c++
::testing::AssertionResult PredicateFormattern(const char* expr1,
                                               const char* expr2,
                                               ...
                                               const char* exprn,
                                               T1 val1,
                                               T2 val2,
                                               ...
                                               Tn valn);
```

where `val1`, `val2`, ..., and `valn` are the values of the predicate arguments,
and `expr1`, `expr2`, ..., and `exprn` are the corresponding expressions as they
appear in the source code. The types `T1`, `T2`, ..., and `Tn` can be either
value types or reference types. For example, if an argument has type `Foo`, you
can declare it as either `Foo` or `const Foo&`, whichever is appropriate.

As an example, let's improve the failure message in `MutuallyPrime()`, which was
used with `EXPECT_PRED2()`:

```c++
// Returns the smallest prime common divisor of m and n,
// or 1 when m and n are mutually prime.
int SmallestPrimeCommonDivisor(int m, int n) { ... }

// A predicate-formatter for asserting that two integers are mutually prime.
::testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                               const char* n_expr,
                                               int m,
                                               int n) {
  if (MutuallyPrime(m, n)) return ::testing::AssertionSuccess();

  return ::testing::AssertionFailure() << m_expr << " and " << n_expr
      << " (" << m << " and " << n << ") are not mutually prime, "
      << "as they have a common divisor " << SmallestPrimeCommonDivisor(m, n);
}
```

With this predicate-formatter, we can use

```c++
EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c);
```

to generate the message

```none
b and c (4 and 10) are not mutually prime, as they have a common divisor 2.
```

As you may have realized, many of the built-in assertions we introduced earlier
are special cases of `(EXPECT|ASSERT)_PRED_FORMAT*`. In fact, most of them are
indeed defined using `(EXPECT|ASSERT)_PRED_FORMAT*`.
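
The body of `SmallestPrimeCommonDivisor()` is elided above. A minimal sketch by
trial division (fine for small test inputs) might be:

```c++
// Returns the smallest prime common divisor of m and n,
// or 1 when m and n are mutually prime. Illustrative sketch only.
// Note: the smallest common divisor greater than 1 is necessarily prime,
// since any composite divisor would have a smaller prime factor that also
// divides both numbers.
int SmallestPrimeCommonDivisor(int m, int n) {
  for (int d = 2; d <= m && d <= n; ++d) {
    if (m % d == 0 && n % d == 0) return d;
  }
  return 1;
}
```

For the values in the failure message above, `SmallestPrimeCommonDivisor(4, 10)`
yields 2.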

### Floating-Point Comparison

Comparing floating-point numbers is tricky. Due to round-off errors, it is very
unlikely that two floating-point values will match exactly. Therefore,
`ASSERT_EQ`'s naive comparison usually doesn't work. And since floating-point
numbers can have a wide value range, no single fixed error bound works. It's
better to compare by a fixed relative error bound, except for values close to 0
due to the loss of precision there.

In general, for floating-point comparison to make sense, the user needs to
carefully choose the error bound. If they don't want or care to, comparing in
terms of Units in the Last Place (ULPs) is a good default, and googletest
provides assertions to do this. Full details about ULPs are quite long; if you
want to learn more, see
[here](https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/).

#### Floating-Point Macros

<!-- mdformat off(github rendering does not support multiline tables) -->

| Fatal assertion                 | Nonfatal assertion              | Verifies                                 |
| ------------------------------- | ------------------------------- | ---------------------------------------- |
| `ASSERT_FLOAT_EQ(val1, val2);`  | `EXPECT_FLOAT_EQ(val1, val2);`  | the two `float` values are almost equal  |
| `ASSERT_DOUBLE_EQ(val1, val2);` | `EXPECT_DOUBLE_EQ(val1, val2);` | the two `double` values are almost equal |

<!-- mdformat on-->

By "almost equal" we mean the values are within 4 ULPs of each other.
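
One way to see what "within 4 ULPs" means is to count the representable floats
between two values. A minimal sketch, assuming IEEE-754 `float`s of the same
sign and ignoring the NaN/infinity and sign-crossing cases that googletest's
real implementation handles carefully:

```c++
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Counts the representable floats between a and b (their "ULP distance").
// Illustrative sketch: for same-sign IEEE-754 floats, adjacent representable
// values have adjacent integer bit patterns, so the distance is the
// difference of those bit patterns.
int32_t UlpDistance(float a, float b) {
  int32_t ia, ib;
  std::memcpy(&ia, &a, sizeof(a));  // reinterpret the bits, avoiding UB
  std::memcpy(&ib, &b, sizeof(b));
  return std::abs(ia - ib);
}
```

Under this measure, `EXPECT_FLOAT_EQ(val1, val2)` tolerates a distance of at
most 4.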

The following assertions allow you to choose the acceptable error bound:

<!-- mdformat off(github rendering does not support multiline tables) -->

| Fatal assertion                       | Nonfatal assertion                    | Verifies                                                                         |
| ------------------------------------- | ------------------------------------- | -------------------------------------------------------------------------------- |
| `ASSERT_NEAR(val1, val2, abs_error);` | `EXPECT_NEAR(val1, val2, abs_error);` | the difference between `val1` and `val2` doesn't exceed the given absolute error |

<!-- mdformat on-->

#### Floating-Point Predicate-Format Functions

Some floating-point operations are useful, but not that often used. In order to
avoid an explosion of new macros, we provide them as predicate-format functions
that can be used in predicate assertion macros (e.g. `EXPECT_PRED_FORMAT2`,
etc.).

```c++
EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2);
```

These verify that `val1` is less than, or almost equal to, `val2`. You can
replace `EXPECT_PRED_FORMAT2` in the above examples with `ASSERT_PRED_FORMAT2`.

### Asserting Using gMock Matchers

[gMock](../../googlemock) comes with
[a library of matchers](../../googlemock/docs/cheat_sheet.md#MatcherList) for
validating arguments passed to mock objects. A gMock *matcher* is basically a
predicate that knows how to describe itself. It can be used in these assertion
macros:

<!-- mdformat off(github rendering does not support multiline tables) -->

| Fatal assertion                | Nonfatal assertion             | Verifies              |
| ------------------------------ | ------------------------------ | --------------------- |
| `ASSERT_THAT(value, matcher);` | `EXPECT_THAT(value, matcher);` | value matches matcher |

<!-- mdformat on-->

For example, `StartsWith(prefix)` is a matcher that matches a string starting
with `prefix`, and you can write:

```c++
using ::testing::StartsWith;
...
// Verifies that Foo() returns a string starting with "Hello".
EXPECT_THAT(Foo(), StartsWith("Hello"));
```

Read this
[recipe](../../googlemock/docs/cook_book.md#using-matchers-in-googletest-assertions)
in the gMock Cookbook for more details.

gMock has a rich set of matchers. With them, you can do many things that
googletest cannot do alone. For a list of matchers gMock provides, read
[this](../../googlemock/docs/cook_book.md#using-matchers). It's easy to write
your [own matchers](../../googlemock/docs/cook_book.md#NewMatchers) too.

gMock is bundled with googletest, so you don't need to add any build dependency
in order to take advantage of this. Just include `"gmock/gmock.h"`
and you're ready to go.

### More String Assertions

(Please read the [previous](#asserting-using-gmock-matchers) section first if
you haven't.)

You can use the gMock
[string matchers](../../googlemock/docs/cheat_sheet.md#string-matchers) with
`EXPECT_THAT()` or `ASSERT_THAT()` to do more string comparison tricks
(sub-string, prefix, suffix, regular expression, etc.). For example,

```c++
using ::testing::HasSubstr;
using ::testing::MatchesRegex;
...
  ASSERT_THAT(foo_string, HasSubstr("needle"));
  EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
```

If the string contains a well-formed HTML or XML document, you can check whether
its DOM tree matches an
[XPath expression](http://www.w3.org/TR/xpath/#contents):

```c++
// Currently still in //template/prototemplate/testing:xpath_matcher
#include "template/prototemplate/testing/xpath_matcher.h"
using prototemplate::testing::MatchesXPath;
EXPECT_THAT(html_string, MatchesXPath("//a[text()='click here']"));
```

### Windows HRESULT assertions

These assertions test for `HRESULT` success or failure.

Fatal assertion                          | Nonfatal assertion                      | Verifies
---------------------------------------- | --------------------------------------- | --------
`ASSERT_HRESULT_SUCCEEDED(expression)`   | `EXPECT_HRESULT_SUCCEEDED(expression)`  | `expression` is a success `HRESULT`
`ASSERT_HRESULT_FAILED(expression)`      | `EXPECT_HRESULT_FAILED(expression)`     | `expression` is a failure `HRESULT`

The generated output contains the human-readable error message associated with
the `HRESULT` code returned by `expression`.

You might use them like this:

```c++
CComPtr<IShellDispatch2> shell;
ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application"));
CComVariant empty;
ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty, empty));
```
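
The success/failure split these macros check comes down to the `HRESULT` sign
bit. A minimal stand-in for the Windows `SUCCEEDED()`/`FAILED()` macros,
assuming the usual 32-bit `HRESULT` layout (illustrative; not googletest's or
Windows' actual code):

```c++
#include <cstdint>

// An HRESULT is a 32-bit value whose high (sign) bit signals failure,
// so success is simply "non-negative".
using Hresult = int32_t;

bool HresultSucceeded(Hresult hr) { return hr >= 0; }  // mirrors SUCCEEDED(hr)
bool HresultFailed(Hresult hr) { return hr < 0; }      // mirrors FAILED(hr)
```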

### Type Assertions

You can call the function

```c++
::testing::StaticAssertTypeEq<T1, T2>();
```

to assert that types `T1` and `T2` are the same. The function does nothing if
the assertion is satisfied. If the types are different, the function call will
fail to compile, the compiler error message will say that
`T1 and T2 are not the same type` and most likely (depending on the compiler)
show you the actual values of `T1` and `T2`. This is mainly useful inside
template code.

**Caveat**: When used inside a member function of a class template or a function
template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is
instantiated. For example, given:

```c++
template <typename T> class Foo {
 public:
  void Bar() { ::testing::StaticAssertTypeEq<int, T>(); }
};
```

the code:

```c++
void Test1() { Foo<bool> foo; }
```

will not generate a compiler error, as `Foo<bool>::Bar()` is never actually
instantiated. Instead, you need:

```c++
void Test2() { Foo<bool> foo; foo.Bar(); }
```

to cause a compiler error.
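
Outside of googletest, the same compile-time check can be sketched with the
standard library alone. This is an analogous idea, not googletest's actual
implementation:

```c++
#include <type_traits>

// A compile-time type-equality check using only the standard library.
// Like StaticAssertTypeEq, it has the same instantiation caveat: the check
// fires only when this function template is actually instantiated.
template <typename T1, typename T2>
void StaticAssertSameType() {
  static_assert(std::is_same<T1, T2>::value, "T1 and T2 are not the same type");
}
```

Calling `StaticAssertSameType<int, int>()` compiles fine, while
`StaticAssertSameType<int, bool>()` fails to compile with the given message.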

### Assertion Placement

You can use assertions in any C++ function. In particular, it doesn't have to be
a method of the test fixture class. The one constraint is that assertions that
generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in
void-returning functions. This is a consequence of Google's not using
exceptions. By placing it in a non-void function you'll get a confusing compile
error like `"error: void value not ignored as it ought to be"` or `"cannot
initialize return object of type 'bool' with an rvalue of type 'void'"` or
`"error: no viable conversion from 'void' to 'string'"`.

If you need to use fatal assertions in a function that returns non-void, one
option is to make the function return the value in an out parameter instead. For
example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
need to make sure that `*result` contains some sensible value even when the
function returns prematurely. As the function now returns `void`, you can use
any assertion inside of it.
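
The rewrite can be sketched as follows, using a hypothetical `ParsePort` helper
(the names are made up for illustration). The early `return` plays the role a
failed `ASSERT_*` would play in a real test helper:

```c++
#include <string>

// Hypothetical example: `int ParsePort(const std::string& s)` rewritten as
// `void ParsePort(const std::string& s, int* result)` so that fatal
// (void-returning) assertions could be used in its body.
void ParsePort(const std::string& s, int* result) {
  *result = -1;           // keep *result sensible even on a premature return
  if (s.empty()) return;  // in a test, ASSERT_FALSE(s.empty()) would return here
  *result = std::stoi(s);
}
```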

If changing the function's type is not an option, you should just use assertions
that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.

NOTE: Constructors and destructors are not considered void-returning functions,
according to the C++ language specification, and so you may not use fatal
assertions in them; you'll get a compilation error if you try. Instead, either
call `abort` and crash the entire test executable, or put the fatal assertion in
a `SetUp`/`TearDown` function; see
[constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp).

WARNING: A fatal assertion in a helper function (private void-returning method)
called from a constructor or destructor does not terminate the current test, as
your intuition might suggest: it merely returns from the constructor or
destructor early, possibly leaving your object in a partially-constructed or
partially-destructed state! You almost certainly want to `abort` or use
`SetUp`/`TearDown` instead.

## Teaching googletest How to Print Your Values

When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument
values to help you debug. It does this using a user-extensible value printer.

This printer knows how to print built-in C++ types, native arrays, STL
containers, and any type that supports the `<<` operator. For other types, it
prints the raw bytes in the value and hopes that you the user can figure it out.

As mentioned earlier, the printer is *extensible*. That means you can teach it
to do a better job at printing your particular type than to dump the bytes. To
do that, define `<<` for your type:

```c++
#include <ostream>

namespace foo {

class Bar {  // We want googletest to be able to print instances of this.
  ...
  // Create a free inline friend function.
  friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
    return os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class, it's important that the
// << operator is defined in the SAME namespace that defines Bar. C++'s look-up
// rules rely on that.
std::ostream& operator<<(std::ostream& os, const Bar& bar) {
  return os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

Sometimes, this might not be an option: your team may consider it bad style to
have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that
doesn't do what you want (and you cannot change it). If so, you can instead
define a `PrintTo()` function like this:

```c++
#include <ostream>

namespace foo {

class Bar {
  ...
  friend void PrintTo(const Bar& bar, std::ostream* os) {
    *os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class, it's important that PrintTo()
// is defined in the SAME namespace that defines Bar. C++'s look-up rules rely
// on that.
void PrintTo(const Bar& bar, std::ostream* os) {
  *os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

If you have defined both `<<` and `PrintTo()`, the latter will be used when
googletest is concerned. This allows you to customize how the value appears in
googletest's output without affecting code that relies on the behavior of its
`<<` operator.

If you want to print a value `x` using googletest's value printer yourself, just
call `::testing::PrintToString(x)`, which returns an `std::string`:

```c++
vector<pair<Bar, int> > bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << ::testing::PrintToString(bar_ints);
```

## Death Tests

In many applications, there are assertions that can cause application failure if
a condition is not met. These sanity checks, which ensure that the program is in
a known good state, are there to fail at the earliest possible time after some
program state is corrupted. If the assertion checks the wrong condition, then
the program may proceed in an erroneous state, which could lead to memory
corruption, security holes, or worse. Hence it is vitally important to test that
such assertion statements work as expected.

Since these precondition checks cause the processes to die, we call such tests
_death tests_. More generally, any test that checks that a program terminates
(except by throwing an exception) in an expected fashion is also a death test.

Note that if a piece of code throws an exception, we don't consider it "death"
for the purpose of death tests, as the caller of the code could catch the
exception and avoid the crash. If you want to verify exceptions thrown by your
code, see [Exception Assertions](#ExceptionAssertions).

If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see
["Catching" Failures](#catching-failures).

### How to Write a Death Test

googletest has the following macros to support death tests:

Fatal assertion                                  | Nonfatal assertion                               | Verifies
------------------------------------------------ | ------------------------------------------------ | --------
`ASSERT_DEATH(statement, matcher);`              | `EXPECT_DEATH(statement, matcher);`              | `statement` crashes with the given error
`ASSERT_DEATH_IF_SUPPORTED(statement, matcher);` | `EXPECT_DEATH_IF_SUPPORTED(statement, matcher);` | if death tests are supported, verifies that `statement` crashes with the given error; otherwise verifies nothing
`ASSERT_DEBUG_DEATH(statement, matcher);`        | `EXPECT_DEBUG_DEATH(statement, matcher);`        | `statement` crashes with the given error **in debug mode**. When not in debug (i.e. `NDEBUG` is defined), this just executes `statement`
`ASSERT_EXIT(statement, predicate, matcher);`    | `EXPECT_EXIT(statement, predicate, matcher);`    | `statement` exits with the given error, and its exit code matches `predicate`

where `statement` is a statement that is expected to cause the process to die,
`predicate` is a function or function object that evaluates an integer exit
status, and `matcher` is either a gMock matcher matching a `const std::string&`
or a (Perl) regular expression - either of which is matched against the stderr
output of `statement`. For legacy reasons, a bare string (i.e. with no matcher)
is interpreted as `ContainsRegex(str)`, **not** `Eq(str)`. Note that `statement`
can be *any valid statement* (including a *compound statement*) and doesn't have
to be an expression.

As usual, the `ASSERT` variants abort the current test function, while the
`EXPECT` variants do not.

> NOTE: We use the word "crash" here to mean that the process terminates with a
> *non-zero* exit status code. There are two possibilities: either the process
> has called `exit()` or `_exit()` with a non-zero value, or it may be killed by
> a signal.
>
> This means that if *`statement`* terminates the process with a 0 exit code, it
> is *not* considered a crash by `EXPECT_DEATH`. Use `EXPECT_EXIT` instead if
> this is the case, or if you want to restrict the exit code more precisely.

A predicate here must accept an `int` and return a `bool`. The death test
succeeds only if the predicate returns `true`. googletest defines a few
predicates that handle the most common cases:

```c++
::testing::ExitedWithCode(exit_code)
```

This expression is `true` if the program exited normally with the given exit
code.

```c++
::testing::KilledBySignal(signal_number)  // Not available on Windows.
```

This expression is `true` if the program was killed by the given signal.

The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate
that verifies the process' exit code is non-zero.
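
A predicate usable with the `*_EXIT` macros is just a callable from `int` to
`bool`. For example, the non-zero-exit check that the `*_DEATH` wrappers
effectively apply could be written as a plain function (a sketch; the name is
made up, and googletest's built-in predicates already cover this case):

```c++
// The shape of predicate *_EXIT expects: takes the exit status, returns bool.
// This one mirrors the non-zero-exit check the *_DEATH wrappers perform.
bool ExitedWithNonZeroCode(int exit_status) { return exit_status != 0; }
```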
688 | ||
689 | Note that a death test only cares about three things: | |
690 | ||
691 | 1. does `statement` abort or exit the process? | |
692 | 2. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status | |
693 | satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`) | |
694 | is the exit status non-zero? And | |
695 | 3. does the stderr output match `matcher`? | |
696 | ||
697 | In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it | |
698 | will **not** cause the death test to fail, as googletest assertions don't abort | |
699 | the process. | |
700 | ||
701 | To write a death test, simply use one of the above macros inside your test | |
702 | function. For example, | |
703 | ||
704 | ```c++ | |
705 | TEST(MyDeathTest, Foo) { | |
706 | // This death test uses a compound statement. | |
707 | ASSERT_DEATH({ | |
708 | int n = 5; | |
709 | Foo(&n); | |
710 | }, "Error on line .* of Foo()"); | |
711 | } | |
712 | ||
713 | TEST(MyDeathTest, NormalExit) { | |
714 | EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success"); | |
715 | } | |
716 | ||
717 | TEST(MyDeathTest, KillMyself) { | |
718 | EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL), | |
719 | "Sending myself unblockable signal"); | |
720 | } | |
721 | ``` | |
722 | ||
723 | verifies that: | |
724 | ||
*   calling `Foo(&n)`, where `n` is 5, causes the process to die with the given
    error message,
726 | * calling `NormalExit()` causes the process to print `"Success"` to stderr and | |
727 | exit with exit code 0, and | |
728 | * calling `KillMyself()` kills the process with signal `SIGKILL`. | |
729 | ||
730 | The test function body may contain other assertions and statements as well, if | |
731 | necessary. | |
732 | ||
733 | ### Death Test Naming | |
734 | ||
IMPORTANT: We strongly recommend that you follow the convention of naming your
**test suite** (not test) `*DeathTest` when it contains a death test, as
737 | demonstrated in the above example. The | |
738 | [Death Tests And Threads](#death-tests-and-threads) section below explains why. | |
739 | ||
740 | If a test fixture class is shared by normal tests and death tests, you can use | |
741 | `using` or `typedef` to introduce an alias for the fixture class and avoid | |
742 | duplicating its code: | |
743 | ||
744 | ```c++ | |
745 | class FooTest : public ::testing::Test { ... }; | |
746 | ||
747 | using FooDeathTest = FooTest; | |
748 | ||
749 | TEST_F(FooTest, DoesThis) { | |
750 | // normal test | |
751 | } | |
752 | ||
753 | TEST_F(FooDeathTest, DoesThat) { | |
754 | // death test | |
755 | } | |
756 | ``` | |
757 | ||
758 | ### Regular Expression Syntax | |
759 | ||
760 | On POSIX systems (e.g. Linux, Cygwin, and Mac), googletest uses the | |
761 | [POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04) | |
762 | syntax. To learn about this syntax, you may want to read this | |
763 | [Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions). | |
764 | ||
765 | On Windows, googletest uses its own simple regular expression implementation. It | |
766 | lacks many features. For example, we don't support union (`"x|y"`), grouping | |
767 | (`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among | |
others. Below is what we do support (`A` denotes a literal character, period
(`.`), or a single `\\` escape sequence; `x` and `y` denote regular
expressions):
771 | ||
772 | Expression | Meaning | |
773 | ---------- | -------------------------------------------------------------- | |
774 | `c` | matches any literal character `c` | |
775 | `\\d` | matches any decimal digit | |
776 | `\\D` | matches any character that's not a decimal digit | |
777 | `\\f` | matches `\f` | |
778 | `\\n` | matches `\n` | |
779 | `\\r` | matches `\r` | |
780 | `\\s` | matches any ASCII whitespace, including `\n` | |
781 | `\\S` | matches any character that's not a whitespace | |
782 | `\\t` | matches `\t` | |
783 | `\\v` | matches `\v` | |
784 | `\\w` | matches any letter, `_`, or decimal digit | |
785 | `\\W` | matches any character that `\\w` doesn't match | |
786 | `\\c` | matches any literal character `c`, which must be a punctuation | |
787 | `.` | matches any single character except `\n` | |
788 | `A?` | matches 0 or 1 occurrences of `A` | |
789 | `A*` | matches 0 or many occurrences of `A` | |
790 | `A+` | matches 1 or many occurrences of `A` | |
791 | `^` | matches the beginning of a string (not that of each line) | |
792 | `$` | matches the end of a string (not that of each line) | |
793 | `xy` | matches `x` followed by `y` | |
794 | ||
795 | To help you determine which capability is available on your system, googletest | |
796 | defines macros to govern which regular expression it is using. The macros are: | |
797 | `GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death | |
798 | tests to work in all cases, you can either `#if` on these macros or use the more | |
799 | limited syntax only. | |
800 | ||
801 | ### How It Works | |
802 | ||
803 | Under the hood, `ASSERT_EXIT()` spawns a new process and executes the death test | |
804 | statement in that process. The details of how precisely that happens depend on | |
the platform and the variable `::testing::GTEST_FLAG(death_test_style)` (which is
806 | initialized from the command-line flag `--gtest_death_test_style`). | |
807 | ||
808 | * On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the | |
809 | child, after which: | |
810 | * If the variable's value is `"fast"`, the death test statement is | |
811 | immediately executed. | |
812 | * If the variable's value is `"threadsafe"`, the child process re-executes | |
813 | the unit test binary just as it was originally invoked, but with some | |
814 | extra flags to cause just the single death test under consideration to | |
815 | be run. | |
816 | * On Windows, the child is spawned using the `CreateProcess()` API, and | |
817 | re-executes the binary to cause just the single death test under | |
818 | consideration to be run - much like the `threadsafe` mode on POSIX. | |
819 | ||
820 | Other values for the variable are illegal and will cause the death test to fail. | |
Currently, the flag's default value is **"fast"**.

In either case, the parent process waits for the child process to complete, and
checks that:

1. the child's exit status satisfies the predicate, and
2. the child's stderr matches the regular expression.
825 | ||
826 | If the death test statement runs to completion without dying, the child process | |
827 | will nonetheless terminate, and the assertion fails. | |
828 | ||
829 | ### Death Tests And Threads | |
830 | ||
831 | The reason for the two death test styles has to do with thread safety. Due to | |
832 | well-known problems with forking in the presence of threads, death tests should | |
833 | be run in a single-threaded context. Sometimes, however, it isn't feasible to | |
834 | arrange that kind of environment. For example, statically-initialized modules | |
835 | may start threads before main is ever reached. Once threads have been created, | |
836 | it may be difficult or impossible to clean them up. | |
837 | ||
838 | googletest has three features intended to raise awareness of threading issues. | |
839 | ||
840 | 1. A warning is emitted if multiple threads are running when a death test is | |
841 | encountered. | |
842 | 2. Test suites with a name ending in "DeathTest" are run before all other | |
843 | tests. | |
844 | 3. It uses `clone()` instead of `fork()` to spawn the child process on Linux | |
845 | (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely | |
846 | to cause the child to hang when the parent process has multiple threads. | |
847 | ||
848 | It's perfectly fine to create threads inside a death test statement; they are | |
849 | executed in a separate process and cannot affect the parent. | |
850 | ||
851 | ### Death Test Styles | |
852 | ||
853 | The "threadsafe" death test style was introduced in order to help mitigate the | |
854 | risks of testing in a possibly multithreaded environment. It trades increased | |
855 | test execution time (potentially dramatically so) for improved thread safety. | |
856 | ||
857 | The automated testing framework does not set the style flag. You can choose a | |
858 | particular style of death tests by setting the flag programmatically: | |
859 | ||
860 | ```c++ | |
testing::FLAGS_gtest_death_test_style = "threadsafe";
862 | ``` | |
863 | ||
864 | You can do this in `main()` to set the style for all death tests in the binary, | |
865 | or in individual tests. Recall that flags are saved before running each test and | |
866 | restored afterwards, so you need not do that yourself. For example: | |
867 | ||
868 | ```c++ | |
869 | int main(int argc, char** argv) { | |
870 | ::testing::InitGoogleTest(&argc, argv); | |
871 | ::testing::FLAGS_gtest_death_test_style = "fast"; | |
872 | return RUN_ALL_TESTS(); | |
873 | } | |
874 | ||
875 | TEST(MyDeathTest, TestOne) { | |
876 | ::testing::FLAGS_gtest_death_test_style = "threadsafe"; | |
877 | // This test is run in the "threadsafe" style: | |
878 | ASSERT_DEATH(ThisShouldDie(), ""); | |
879 | } | |
880 | ||
881 | TEST(MyDeathTest, TestTwo) { | |
882 | // This test is run in the "fast" style: | |
883 | ASSERT_DEATH(ThisShouldDie(), ""); | |
884 | } | |
885 | ``` | |
886 | ||
887 | ### Caveats | |
888 | ||
889 | The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If | |
890 | it leaves the current function via a `return` statement or by throwing an | |
891 | exception, the death test is considered to have failed. Some googletest macros | |
892 | may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid | |
893 | them in `statement`. | |
894 | ||
895 | Since `statement` runs in the child process, any in-memory side effect (e.g. | |
896 | modifying a variable, releasing memory, etc) it causes will *not* be observable | |
897 | in the parent process. In particular, if you release memory in a death test, | |
898 | your program will fail the heap check as the parent process will never see the | |
899 | memory reclaimed. To solve this problem, you can | |
900 | ||
901 | 1. try not to free memory in a death test; | |
902 | 2. free the memory again in the parent process; or | |
903 | 3. do not use the heap checker in your program. | |
904 | ||
905 | Due to an implementation detail, you cannot place multiple death test assertions | |
906 | on the same line; otherwise, compilation will fail with an unobvious error | |
907 | message. | |
908 | ||
909 | Despite the improved thread safety afforded by the "threadsafe" style of death | |
910 | test, thread problems such as deadlock are still possible in the presence of | |
911 | handlers registered with `pthread_atfork(3)`. | |
912 | ||
913 | ||
914 | ## Using Assertions in Sub-routines | |
915 | ||
916 | Note: If you want to put a series of test assertions in a subroutine to check | |
917 | for a complex condition, consider using | |
918 | [a custom GMock matcher](../../googlemock/docs/cook_book.md#NewMatchers) | |
919 | instead. This lets you provide a more readable error message in case of failure | |
920 | and avoid all of the issues described below. | |
921 | ||
922 | ### Adding Traces to Assertions | |
923 | ||
924 | If a test sub-routine is called from several places, when an assertion inside it | |
925 | fails, it can be hard to tell which invocation of the sub-routine the failure is | |
926 | from. You can alleviate this problem using extra logging or custom failure | |
927 | messages, but that usually clutters up your tests. A better solution is to use | |
928 | the `SCOPED_TRACE` macro or the `ScopedTrace` utility: | |
929 | ||
930 | ```c++ | |
931 | SCOPED_TRACE(message); | |
932 | ``` | |
933 | ```c++ | |
934 | ScopedTrace trace("file_path", line_number, message); | |
935 | ``` | |
936 | ||
where `message` can be anything streamable to `std::ostream`. The
`SCOPED_TRACE` macro will cause the current file name, line number, and the
given message to be added to every failure message. `ScopedTrace` accepts an
explicit file name and line number as arguments, which is useful for writing
test helpers. The effect will be undone when control leaves the current
lexical scope.
942 | ||
943 | For example, | |
944 | ||
945 | ```c++ | |
946 | 10: void Sub1(int n) { | |
947 | 11: EXPECT_EQ(Bar(n), 1); | |
948 | 12: EXPECT_EQ(Bar(n + 1), 2); | |
949 | 13: } | |
950 | 14: | |
951 | 15: TEST(FooTest, Bar) { | |
952 | 16: { | |
953 | 17: SCOPED_TRACE("A"); // This trace point will be included in | |
954 | 18: // every failure in this scope. | |
955 | 19: Sub1(1); | |
956 | 20: } | |
957 | 21: // Now it won't. | |
958 | 22: Sub1(9); | |
959 | 23: } | |
960 | ``` | |
961 | ||
962 | could result in messages like these: | |
963 | ||
964 | ```none | |
965 | path/to/foo_test.cc:11: Failure | |
966 | Value of: Bar(n) | |
967 | Expected: 1 | |
968 | Actual: 2 | |
969 | Google Test trace: | |
970 | path/to/foo_test.cc:17: A | |
971 | ||
972 | path/to/foo_test.cc:12: Failure | |
973 | Value of: Bar(n + 1) | |
974 | Expected: 2 | |
975 | Actual: 3 | |
976 | ``` | |
977 | ||
978 | Without the trace, it would've been difficult to know which invocation of | |
979 | `Sub1()` the two failures come from respectively. (You could add an extra | |
980 | message to each assertion in `Sub1()` to indicate the value of `n`, but that's | |
981 | tedious.) | |
982 | ||
983 | Some tips on using `SCOPED_TRACE`: | |
984 | ||
985 | 1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the | |
986 | beginning of a sub-routine, instead of at each call site. | |
987 | 2. When calling sub-routines inside a loop, make the loop iterator part of the | |
988 | message in `SCOPED_TRACE` such that you can know which iteration the failure | |
989 | is from. | |
990 | 3. Sometimes the line number of the trace point is enough for identifying the | |
991 | particular invocation of a sub-routine. In this case, you don't have to | |
992 | choose a unique message for `SCOPED_TRACE`. You can simply use `""`. | |
993 | 4. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer | |
994 | scope. In this case, all active trace points will be included in the failure | |
   messages, in the reverse order in which they are encountered.
996 | 5. The trace dump is clickable in Emacs - hit `return` on a line number and | |
997 | you'll be taken to that line in the source file! | |
998 | ||
999 | ### Propagating Fatal Failures | |
1000 | ||
1001 | A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that | |
1002 | when they fail they only abort the _current function_, not the entire test. For | |
1003 | example, the following test will segfault: | |
1004 | ||
1005 | ```c++ | |
1006 | void Subroutine() { | |
1007 | // Generates a fatal failure and aborts the current function. | |
1008 | ASSERT_EQ(1, 2); | |
1009 | ||
1010 | // The following won't be executed. | |
1011 | ... | |
1012 | } | |
1013 | ||
1014 | TEST(FooTest, Bar) { | |
1015 | Subroutine(); // The intended behavior is for the fatal failure | |
1016 | // in Subroutine() to abort the entire test. | |
1017 | ||
1018 | // The actual behavior: the function goes on after Subroutine() returns. | |
1019 | int* p = nullptr; | |
1020 | *p = 3; // Segfault! | |
1021 | } | |
1022 | ``` | |
1023 | ||
To alleviate this, googletest provides three different solutions: exceptions,
the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions, and the `HasFatalFailure()`
function. They are described in the following three subsections.
1028 | ||
1029 | #### Asserting on Subroutines with an exception | |
1030 | ||
The following code can turn a fatal assertion failure into an exception:
1032 | ||
1033 | ```c++ | |
1034 | class ThrowListener : public testing::EmptyTestEventListener { | |
1035 | void OnTestPartResult(const testing::TestPartResult& result) override { | |
1036 | if (result.type() == testing::TestPartResult::kFatalFailure) { | |
1037 | throw testing::AssertionException(result); | |
1038 | } | |
1039 | } | |
1040 | }; | |
1041 | int main(int argc, char** argv) { | |
1042 | ... | |
1043 | testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener); | |
1044 | return RUN_ALL_TESTS(); | |
1045 | } | |
1046 | ``` | |
1047 | ||
This listener should be added after other listeners if you have any; otherwise
they won't see the failed `OnTestPartResult` events.
1050 | ||
1051 | #### Asserting on Subroutines | |
1052 | ||
1053 | As shown above, if your test calls a subroutine that has an `ASSERT_*` failure | |
1054 | in it, the test will continue after the subroutine returns. This may not be what | |
1055 | you want. | |
1056 | ||
1057 | Often people want fatal failures to propagate like exceptions. For that | |
1058 | googletest offers the following macros: | |
1059 | ||
1060 | Fatal assertion | Nonfatal assertion | Verifies | |
1061 | ------------------------------------- | ------------------------------------- | -------- | |
1062 | `ASSERT_NO_FATAL_FAILURE(statement);` | `EXPECT_NO_FATAL_FAILURE(statement);` | `statement` doesn't generate any new fatal failures in the current thread. | |
1063 | ||
1064 | Only failures in the thread that executes the assertion are checked to determine | |
1065 | the result of this type of assertions. If `statement` creates new threads, | |
1066 | failures in these threads are ignored. | |
1067 | ||
1068 | Examples: | |
1069 | ||
1070 | ```c++ | |
1071 | ASSERT_NO_FATAL_FAILURE(Foo()); | |
1072 | ||
1073 | int i; | |
1074 | EXPECT_NO_FATAL_FAILURE({ | |
1075 | i = Bar(); | |
1076 | }); | |
1077 | ``` | |
1078 | ||
1079 | Assertions from multiple threads are currently not supported on Windows. | |
1080 | ||
1081 | #### Checking for Failures in the Current Test | |
1082 | ||
1083 | `HasFatalFailure()` in the `::testing::Test` class returns `true` if an | |
1084 | assertion in the current test has suffered a fatal failure. This allows | |
1085 | functions to catch fatal failures in a sub-routine and return early. | |
1086 | ||
1087 | ```c++ | |
1088 | class Test { | |
1089 | public: | |
1090 | ... | |
1091 | static bool HasFatalFailure(); | |
1092 | }; | |
1093 | ``` | |
1094 | ||
1095 | The typical usage, which basically simulates the behavior of a thrown exception, | |
1096 | is: | |
1097 | ||
1098 | ```c++ | |
1099 | TEST(FooTest, Bar) { | |
1100 | Subroutine(); | |
1101 | // Aborts if Subroutine() had a fatal failure. | |
1102 | if (HasFatalFailure()) return; | |
1103 | ||
1104 | // The following won't be executed. | |
1105 | ... | |
1106 | } | |
1107 | ``` | |
1108 | ||
If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
fixture, you must add the `::testing::Test::` prefix, as in:
1111 | ||
1112 | ```c++ | |
1113 | if (::testing::Test::HasFatalFailure()) return; | |
1114 | ``` | |
1115 | ||
1116 | Similarly, `HasNonfatalFailure()` returns `true` if the current test has at | |
1117 | least one non-fatal failure, and `HasFailure()` returns `true` if the current | |
1118 | test has at least one failure of either kind. | |
1119 | ||
1120 | ## Logging Additional Information | |
1121 | ||
1122 | In your test code, you can call `RecordProperty("key", value)` to log additional | |
1123 | information, where `value` can be either a string or an `int`. The *last* value | |
1124 | recorded for a key will be emitted to the | |
1125 | [XML output](#generating-an-xml-report) if you specify one. For example, the | |
1126 | test | |
1127 | ||
1128 | ```c++ | |
1129 | TEST_F(WidgetUsageTest, MinAndMaxWidgets) { | |
1130 | RecordProperty("MaximumWidgets", ComputeMaxUsage()); | |
1131 | RecordProperty("MinimumWidgets", ComputeMinUsage()); | |
1132 | } | |
1133 | ``` | |
1134 | ||
1135 | will output XML like this: | |
1136 | ||
1137 | ```xml | |
1138 | ... | |
1139 | <testcase name="MinAndMaxWidgets" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" /> | |
1140 | ... | |
1141 | ``` | |
1142 | ||
1143 | > NOTE: | |
1144 | > | |
1145 | > * `RecordProperty()` is a static member of the `Test` class. Therefore it | |
1146 | > needs to be prefixed with `::testing::Test::` if used outside of the | |
1147 | > `TEST` body and the test fixture class. | |
1148 | > * *`key`* must be a valid XML attribute name, and cannot conflict with the | |
1149 | > ones already used by googletest (`name`, `status`, `time`, `classname`, | |
1150 | > `type_param`, and `value_param`). | |
1151 | > * Calling `RecordProperty()` outside of the lifespan of a test is allowed. | |
1152 | > If it's called outside of a test but between a test suite's | |
1153 | > `SetUpTestSuite()` and `TearDownTestSuite()` methods, it will be | |
1154 | > attributed to the XML element for the test suite. If it's called outside | |
1155 | > of all test suites (e.g. in a test environment), it will be attributed to | |
1156 | > the top-level XML element. | |
1157 | ||
1158 | ## Sharing Resources Between Tests in the Same Test Suite | |
1159 | ||
1160 | googletest creates a new test fixture object for each test in order to make | |
1161 | tests independent and easier to debug. However, sometimes tests use resources | |
1162 | that are expensive to set up, making the one-copy-per-test model prohibitively | |
1163 | expensive. | |
1164 | ||
1165 | If the tests don't change the resource, there's no harm in their sharing a | |
1166 | single resource copy. So, in addition to per-test set-up/tear-down, googletest | |
1167 | also supports per-test-suite set-up/tear-down. To use it: | |
1168 | ||
1. In your test fixture class (say `FooTest`), declare as `static` some member
1170 | variables to hold the shared resources. | |
1171 | 2. Outside your test fixture class (typically just below it), define those | |
1172 | member variables, optionally giving them initial values. | |
1173 | 3. In the same test fixture class, define a `static void SetUpTestSuite()` | |
1174 | function (remember not to spell it as **`SetupTestSuite`** with a small | |
1175 | `u`!) to set up the shared resources and a `static void TearDownTestSuite()` | |
1176 | function to tear them down. | |
1177 | ||
1178 | That's it! googletest automatically calls `SetUpTestSuite()` before running the | |
1179 | *first test* in the `FooTest` test suite (i.e. before creating the first | |
1180 | `FooTest` object), and calls `TearDownTestSuite()` after running the *last test* | |
1181 | in it (i.e. after deleting the last `FooTest` object). In between, the tests can | |
1182 | use the shared resources. | |
1183 | ||
1184 | Remember that the test order is undefined, so your code can't depend on a test | |
1185 | preceding or following another. Also, the tests must either not modify the state | |
1186 | of any shared resource, or, if they do modify the state, they must restore the | |
1187 | state to its original value before passing control to the next test. | |
1188 | ||
1189 | Here's an example of per-test-suite set-up and tear-down: | |
1190 | ||
1191 | ```c++ | |
1192 | class FooTest : public ::testing::Test { | |
1193 | protected: | |
1194 | // Per-test-suite set-up. | |
1195 | // Called before the first test in this test suite. | |
1196 | // Can be omitted if not needed. | |
1197 | static void SetUpTestSuite() { | |
1198 | shared_resource_ = new ...; | |
1199 | } | |
1200 | ||
1201 | // Per-test-suite tear-down. | |
1202 | // Called after the last test in this test suite. | |
1203 | // Can be omitted if not needed. | |
1204 | static void TearDownTestSuite() { | |
1205 | delete shared_resource_; | |
1206 | shared_resource_ = nullptr; | |
1207 | } | |
1208 | ||
1209 | // You can define per-test set-up logic as usual. | |
  void SetUp() override { ... }
1211 | ||
1212 | // You can define per-test tear-down logic as usual. | |
  void TearDown() override { ... }
1214 | ||
1215 | // Some expensive resource shared by all tests. | |
1216 | static T* shared_resource_; | |
1217 | }; | |
1218 | ||
1219 | T* FooTest::shared_resource_ = nullptr; | |
1220 | ||
1221 | TEST_F(FooTest, Test1) { | |
1222 | ... you can refer to shared_resource_ here ... | |
1223 | } | |
1224 | ||
1225 | TEST_F(FooTest, Test2) { | |
1226 | ... you can refer to shared_resource_ here ... | |
1227 | } | |
1228 | ``` | |
1229 | ||
1230 | NOTE: Though the above code declares `SetUpTestSuite()` protected, it may | |
1231 | sometimes be necessary to declare it public, such as when using it with | |
1232 | `TEST_P`. | |
1233 | ||
1234 | ## Global Set-Up and Tear-Down | |
1235 | ||
1236 | Just as you can do set-up and tear-down at the test level and the test suite | |
1237 | level, you can also do it at the test program level. Here's how. | |
1238 | ||
1239 | First, you subclass the `::testing::Environment` class to define a test | |
1240 | environment, which knows how to set-up and tear-down: | |
1241 | ||
1242 | ```c++ | |
1243 | class Environment : public ::testing::Environment { | |
1244 | public: | |
1245 | ~Environment() override {} | |
1246 | ||
1247 | // Override this to define how to set up the environment. | |
1248 | void SetUp() override {} | |
1249 | ||
1250 | // Override this to define how to tear down the environment. | |
1251 | void TearDown() override {} | |
1252 | }; | |
1253 | ``` | |
1254 | ||
1255 | Then, you register an instance of your environment class with googletest by | |
1256 | calling the `::testing::AddGlobalTestEnvironment()` function: | |
1257 | ||
1258 | ```c++ | |
1259 | Environment* AddGlobalTestEnvironment(Environment* env); | |
1260 | ``` | |
1261 | ||
1262 | Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of | |
1263 | each environment object, then runs the tests if none of the environments | |
1264 | reported fatal failures and `GTEST_SKIP()` was not called. `RUN_ALL_TESTS()` | |
1265 | always calls `TearDown()` with each environment object, regardless of whether or | |
1266 | not the tests were run. | |
1267 | ||
It's OK to register multiple environment objects. In this case, their `SetUp()`
1269 | will be called in the order they are registered, and their `TearDown()` will be | |
1270 | called in the reverse order. | |
1271 | ||
1272 | Note that googletest takes ownership of the registered environment objects. | |
1273 | Therefore **do not delete them** by yourself. | |
1274 | ||
1275 | You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called, | |
1276 | probably in `main()`. If you use `gtest_main`, you need to call this before | |
1277 | `main()` starts for it to take effect. One way to do this is to define a global | |
1278 | variable like this: | |
1279 | ||
1280 | ```c++ | |
1281 | ::testing::Environment* const foo_env = | |
1282 | ::testing::AddGlobalTestEnvironment(new FooEnvironment); | |
1283 | ``` | |
1284 | ||
1285 | However, we strongly recommend you to write your own `main()` and call | |
1286 | `AddGlobalTestEnvironment()` there, as relying on initialization of global | |
1287 | variables makes the code harder to read and may cause problems when you register | |
1288 | multiple environments from different translation units and the environments have | |
1289 | dependencies among them (remember that the compiler doesn't guarantee the order | |
1290 | in which global variables from different translation units are initialized). | |
1291 | ||
1292 | ## Value-Parameterized Tests | |
1293 | ||
1294 | *Value-parameterized tests* allow you to test your code with different | |
1295 | parameters without writing multiple copies of the same test. This is useful in a | |
1296 | number of situations, for example: | |
1297 | ||
1298 | * You have a piece of code whose behavior is affected by one or more | |
1299 | command-line flags. You want to make sure your code performs correctly for | |
1300 | various values of those flags. | |
1301 | * You want to test different implementations of an OO interface. | |
1302 | * You want to test your code over various inputs (a.k.a. data-driven testing). | |
1303 | This feature is easy to abuse, so please exercise your good sense when doing | |
1304 | it! | |
1305 | ||
1306 | ### How to Write Value-Parameterized Tests | |
1307 | ||
1308 | To write value-parameterized tests, first you should define a fixture class. It | |
1309 | must be derived from both `testing::Test` and `testing::WithParamInterface<T>` | |
1310 | (the latter is a pure interface), where `T` is the type of your parameter | |
1311 | values. For convenience, you can just derive the fixture class from | |
1312 | `testing::TestWithParam<T>`, which itself is derived from both `testing::Test` | |
1313 | and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a | |
1314 | raw pointer, you are responsible for managing the lifespan of the pointed | |
1315 | values. | |
1316 | ||
1317 | NOTE: If your test fixture defines `SetUpTestSuite()` or `TearDownTestSuite()` | |
1318 | they must be declared **public** rather than **protected** in order to use | |
1319 | `TEST_P`. | |
1320 | ||
1321 | ```c++ | |
1322 | class FooTest : | |
1323 | public testing::TestWithParam<const char*> { | |
1324 | // You can implement all the usual fixture class members here. | |
1325 | // To access the test parameter, call GetParam() from class | |
1326 | // TestWithParam<T>. | |
1327 | }; | |
1328 | ||
1329 | // Or, when you want to add parameters to a pre-existing fixture class: | |
1330 | class BaseTest : public testing::Test { | |
1331 | ... | |
1332 | }; | |
1333 | class BarTest : public BaseTest, | |
1334 | public testing::WithParamInterface<const char*> { | |
1335 | ... | |
1336 | }; | |
1337 | ``` | |
1338 | ||
1339 | Then, use the `TEST_P` macro to define as many test patterns using this fixture | |
as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you
prefer to think of it as.
1342 | ||
1343 | ```c++ | |
1344 | TEST_P(FooTest, DoesBlah) { | |
1345 | // Inside a test, access the test parameter with the GetParam() method | |
1346 | // of the TestWithParam<T> class: | |
1347 | EXPECT_TRUE(foo.Blah(GetParam())); | |
1348 | ... | |
1349 | } | |
1350 | ||
1351 | TEST_P(FooTest, HasBlahBlah) { | |
1352 | ... | |
1353 | } | |
1354 | ``` | |
1355 | ||
1356 | Finally, you can use `INSTANTIATE_TEST_SUITE_P` to instantiate the test suite | |
1357 | with any set of parameters you want. googletest defines a number of functions | |
1358 | for generating test parameters. They return what we call (surprise!) *parameter | |
1359 | generators*. Here is a summary of them, which are all in the `testing` | |
1360 | namespace: | |
1361 | ||
1362 | <!-- mdformat off(github rendering does not support multiline tables) --> | |
1363 | ||
1364 | | Parameter Generator | Behavior | | |
1365 | | ----------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- | | |
1366 | | `Range(begin, end [, step])` | Yields values `{begin, begin+step, begin+step+step, ...}`. The values do not include `end`. `step` defaults to 1. | | |
1367 | | `Values(v1, v2, ..., vN)` | Yields values `{v1, v2, ..., vN}`. | | |
1368 | | `ValuesIn(container)` and `ValuesIn(begin,end)` | Yields values from a C-style array, an STL-style container, or an iterator range `[begin, end)` | | |
1369 | | `Bool()` | Yields sequence `{false, true}`. | | |
1370 | | `Combine(g1, g2, ..., gN)` | Yields all combinations (Cartesian product) as std\:\:tuples of the values generated by the `N` generators. | | |
1371 | ||
1372 | <!-- mdformat on--> | |
1373 | ||
1374 | For more details, see the comments at the definitions of these functions. | |
1375 | ||
1376 | The following statement will instantiate tests from the `FooTest` test suite | |
1377 | each with parameter values `"meeny"`, `"miny"`, and `"moe"`. | |
1378 | ||
1379 | ```c++ | |
1380 | INSTANTIATE_TEST_SUITE_P(InstantiationName, | |
1381 | FooTest, | |
1382 | testing::Values("meeny", "miny", "moe")); | |
1383 | ``` | |
1384 | ||
1385 | NOTE: The code above must be placed at global or namespace scope, not at | |
1386 | function scope. | |
1387 | ||
By default, every `TEST_P` without a corresponding `INSTANTIATE_TEST_SUITE_P`
causes a failing test in test suite `GoogleTestVerification`. If you have a test
suite where that omission is not an error, for example because it is in a
library that may be linked in for other reasons or because the list of test
cases is dynamic and may be empty, then this check can be suppressed by tagging
the test suite:
1393 | ||
1394 | ```c++ | |
1395 | GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(FooTest); | |
1396 | ``` | |
1397 | ||
1398 | To distinguish different instances of the pattern (yes, you can instantiate it | |
1399 | more than once), the first argument to `INSTANTIATE_TEST_SUITE_P` is a prefix | |
1400 | that will be added to the actual test suite name. Remember to pick unique | |
1401 | prefixes for different instantiations. The tests from the instantiation above | |
1402 | will have these names: | |
1403 | ||
1404 | * `InstantiationName/FooTest.DoesBlah/0` for `"meeny"` | |
1405 | * `InstantiationName/FooTest.DoesBlah/1` for `"miny"` | |
1406 | * `InstantiationName/FooTest.DoesBlah/2` for `"moe"` | |
1407 | * `InstantiationName/FooTest.HasBlahBlah/0` for `"meeny"` | |
1408 | * `InstantiationName/FooTest.HasBlahBlah/1` for `"miny"` | |
1409 | * `InstantiationName/FooTest.HasBlahBlah/2` for `"moe"` | |
1410 | ||
1411 | You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests). | |
1412 | ||
1413 | This statement will instantiate all tests from `FooTest` again, each with | |
1414 | parameter values `"cat"` and `"dog"`: | |
1415 | ||
1416 | ```c++ | |
1417 | const char* pets[] = {"cat", "dog"}; | |
1418 | INSTANTIATE_TEST_SUITE_P(AnotherInstantiationName, FooTest, | |
1419 | testing::ValuesIn(pets)); | |
1420 | ``` | |
1421 | ||
1422 | The tests from the instantiation above will have these names: | |
1423 | ||
1424 | * `AnotherInstantiationName/FooTest.DoesBlah/0` for `"cat"` | |
1425 | * `AnotherInstantiationName/FooTest.DoesBlah/1` for `"dog"` | |
1426 | * `AnotherInstantiationName/FooTest.HasBlahBlah/0` for `"cat"` | |
1427 | * `AnotherInstantiationName/FooTest.HasBlahBlah/1` for `"dog"` | |
1428 | ||
1429 | Please note that `INSTANTIATE_TEST_SUITE_P` will instantiate *all* tests in the | |
1430 | given test suite, whether their definitions come before or *after* the | |
1431 | `INSTANTIATE_TEST_SUITE_P` statement. | |
1432 | ||
1433 | You can see [sample7_unittest.cc] and [sample8_unittest.cc] for more examples. | |
1434 | ||
1435 | [sample7_unittest.cc]: ../samples/sample7_unittest.cc "Parameterized Test example" | |
1436 | [sample8_unittest.cc]: ../samples/sample8_unittest.cc "Parameterized Test example with multiple parameters" | |
1437 | ||
1438 | ### Creating Value-Parameterized Abstract Tests | |
1439 | ||
1440 | In the above, we define and instantiate `FooTest` in the *same* source file. | |
1441 | Sometimes you may want to define value-parameterized tests in a library and let | |
1442 | other people instantiate them later. This pattern is known as *abstract tests*. | |
1443 | As an example of its application, when you are designing an interface you can | |
1444 | write a standard suite of abstract tests (perhaps using a factory function as | |
1445 | the test parameter) that all implementations of the interface are expected to | |
1446 | pass. When someone implements the interface, they can instantiate your suite to | |
1447 | get all the interface-conformance tests for free. | |
1448 | ||
1449 | To define abstract tests, you should organize your code like this: | |
1450 | ||
1451 | 1. Put the definition of the parameterized test fixture class (e.g. `FooTest`) | |
1452 | in a header file, say `foo_param_test.h`. Think of this as *declaring* your | |
1453 | abstract tests. | |
1454 | 2. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes | |
1455 | `foo_param_test.h`. Think of this as *implementing* your abstract tests. | |
1456 | ||
1457 | Once they are defined, you can instantiate them by including `foo_param_test.h`, | |
1458 | invoking `INSTANTIATE_TEST_SUITE_P()`, and depending on the library target that | |
1459 | contains `foo_param_test.cc`. You can instantiate the same abstract test suite | |
1460 | multiple times, possibly in different source files. | |
1461 | ||
1462 | ### Specifying Names for Value-Parameterized Test Parameters | |
1463 | ||
1464 | The optional last argument to `INSTANTIATE_TEST_SUITE_P()` allows the user to | |
1465 | specify a function or functor that generates custom test name suffixes based on | |
1466 | the test parameters. The function should accept one argument of type | |
1467 | `testing::TestParamInfo<class ParamType>`, and return `std::string`. | |
1468 | ||
1469 | `testing::PrintToStringParamName` is a builtin test suffix generator that | |
1470 | returns the value of `testing::PrintToString(GetParam())`. It does not work for | |
1471 | `std::string` or C strings. | |
1472 | ||
NOTE: test names must be non-empty, unique, and may only contain ASCII
alphanumeric characters. In particular, they
[should not contain underscores](faq.md#why-should-test-suite-names-and-test-names-not-contain-underscore).
1476 | ||
1477 | ```c++ | |
1478 | class MyTestSuite : public testing::TestWithParam<int> {}; | |
1479 | ||
1480 | TEST_P(MyTestSuite, MyTest) | |
1481 | { | |
1482 | std::cout << "Example Test Param: " << GetParam() << std::endl; | |
1483 | } | |
1484 | ||
1485 | INSTANTIATE_TEST_SUITE_P(MyGroup, MyTestSuite, testing::Range(0, 10), | |
1486 | testing::PrintToStringParamName()); | |
1487 | ``` | |
1488 | ||
1489 | Providing a custom functor allows for more control over test parameter name | |
1490 | generation, especially for types where the automatic conversion does not | |
1491 | generate helpful parameter names (e.g. strings as demonstrated above). The | |
1492 | following example illustrates this for multiple parameters, an enumeration type | |
1493 | and a string, and also demonstrates how to combine generators. It uses a lambda | |
1494 | for conciseness: | |
1495 | ||
1496 | ```c++ | |
1497 | enum class MyType { MY_FOO = 0, MY_BAR = 1 }; | |
1498 | ||
1499 | class MyTestSuite : public testing::TestWithParam<std::tuple<MyType, std::string>> { | |
1500 | }; | |
1501 | ||
INSTANTIATE_TEST_SUITE_P(
    MyGroup, MyTestSuite,
    testing::Combine(
        testing::Values(MyType::MY_FOO, MyType::MY_BAR),
        testing::Values("A", "B")),
    [](const testing::TestParamInfo<MyTestSuite::ParamType>& info) {
      std::string name = absl::StrCat(
          std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar", "_",
          std::get<1>(info.param));
      absl::c_replace_if(name, [](char c) { return !std::isalnum(c); }, '_');
      return name;
    });
1514 | ``` | |
1515 | ||
1516 | ## Typed Tests | |
1517 | ||
1518 | Suppose you have multiple implementations of the same interface and want to make | |
1519 | sure that all of them satisfy some common requirements. Or, you may have defined | |
1520 | several types that are supposed to conform to the same "concept" and you want to | |
1521 | verify it. In both cases, you want the same test logic repeated for different | |
1522 | types. | |
1523 | ||
1524 | While you can write one `TEST` or `TEST_F` for each type you want to test (and | |
1525 | you may even factor the test logic into a function template that you invoke from | |
1526 | the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n` | |
1527 | types, you'll end up writing `m*n` `TEST`s. | |
1528 | ||
1529 | *Typed tests* allow you to repeat the same test logic over a list of types. You | |
1530 | only need to write the test logic once, although you must know the type list | |
1531 | when writing typed tests. Here's how you do it: | |
1532 | ||
1533 | First, define a fixture class template. It should be parameterized by a type. | |
1534 | Remember to derive it from `::testing::Test`: | |
1535 | ||
1536 | ```c++ | |
1537 | template <typename T> | |
1538 | class FooTest : public ::testing::Test { | |
1539 | public: | |
1540 | ... | |
1541 | using List = std::list<T>; | |
1542 | static T shared_; | |
1543 | T value_; | |
1544 | }; | |
1545 | ``` | |
1546 | ||
1547 | Next, associate a list of types with the test suite, which will be repeated for | |
1548 | each type in the list: | |
1549 | ||
1550 | ```c++ | |
1551 | using MyTypes = ::testing::Types<char, int, unsigned int>; | |
1552 | TYPED_TEST_SUITE(FooTest, MyTypes); | |
1553 | ``` | |
1554 | ||
1555 | The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_SUITE` | |
1556 | macro to parse correctly. Otherwise the compiler will think that each comma in | |
1557 | the type list introduces a new macro argument. | |
1558 | ||
1559 | Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this | |
1560 | test suite. You can repeat this as many times as you want: | |
1561 | ||
1562 | ```c++ | |
1563 | TYPED_TEST(FooTest, DoesBlah) { | |
1564 | // Inside a test, refer to the special name TypeParam to get the type | |
1565 | // parameter. Since we are inside a derived class template, C++ requires | |
1566 | // us to visit the members of FooTest via 'this'. | |
1567 | TypeParam n = this->value_; | |
1568 | ||
1569 | // To visit static members of the fixture, add the 'TestFixture::' | |
1570 | // prefix. | |
1571 | n += TestFixture::shared_; | |
1572 | ||
1573 | // To refer to typedefs in the fixture, add the 'typename TestFixture::' | |
1574 | // prefix. The 'typename' is required to satisfy the compiler. | |
1575 | typename TestFixture::List values; | |
1576 | ||
1577 | values.push_back(n); | |
1578 | ... | |
1579 | } | |
1580 | ||
1581 | TYPED_TEST(FooTest, HasPropertyA) { ... } | |
1582 | ``` | |
1583 | ||
1584 | You can see [sample6_unittest.cc] for a complete example. | |
1585 | ||
1586 | [sample6_unittest.cc]: ../samples/sample6_unittest.cc "Typed Test example" | |
1587 | ||
1588 | ## Type-Parameterized Tests | |
1589 | ||
1590 | *Type-parameterized tests* are like typed tests, except that they don't require | |
1591 | you to know the list of types ahead of time. Instead, you can define the test | |
1592 | logic first and instantiate it with different type lists later. You can even | |
1593 | instantiate it more than once in the same program. | |
1594 | ||
1595 | If you are designing an interface or concept, you can define a suite of | |
1596 | type-parameterized tests to verify properties that any valid implementation of | |
1597 | the interface/concept should have. Then, the author of each implementation can | |
1598 | just instantiate the test suite with their type to verify that it conforms to | |
1599 | the requirements, without having to write similar tests repeatedly. Here's an | |
1600 | example: | |
1601 | ||
1602 | First, define a fixture class template, as we did with typed tests: | |
1603 | ||
1604 | ```c++ | |
1605 | template <typename T> | |
1606 | class FooTest : public ::testing::Test { | |
1607 | ... | |
1608 | }; | |
1609 | ``` | |
1610 | ||
1611 | Next, declare that you will define a type-parameterized test suite: | |
1612 | ||
1613 | ```c++ | |
1614 | TYPED_TEST_SUITE_P(FooTest); | |
1615 | ``` | |
1616 | ||
1617 | Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat | |
1618 | this as many times as you want: | |
1619 | ||
1620 | ```c++ | |
1621 | TYPED_TEST_P(FooTest, DoesBlah) { | |
1622 | // Inside a test, refer to TypeParam to get the type parameter. | |
1623 | TypeParam n = 0; | |
1624 | ... | |
1625 | } | |
1626 | ||
1627 | TYPED_TEST_P(FooTest, HasPropertyA) { ... } | |
1628 | ``` | |
1629 | ||
1630 | Now the tricky part: you need to register all test patterns using the | |
1631 | `REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first | |
1632 | argument of the macro is the test suite name; the rest are the names of the | |
1633 | tests in this test suite: | |
1634 | ||
1635 | ```c++ | |
1636 | REGISTER_TYPED_TEST_SUITE_P(FooTest, | |
1637 | DoesBlah, HasPropertyA); | |
1638 | ``` | |
1639 | ||
1640 | Finally, you are free to instantiate the pattern with the types you want. If you | |
1641 | put the above code in a header file, you can `#include` it in multiple C++ | |
1642 | source files and instantiate it multiple times. | |
1643 | ||
1644 | ```c++ | |
1645 | using MyTypes = ::testing::Types<char, int, unsigned int>; | |
1646 | INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes); | |
1647 | ``` | |
1648 | ||
1649 | To distinguish different instances of the pattern, the first argument to the | |
1650 | `INSTANTIATE_TYPED_TEST_SUITE_P` macro is a prefix that will be added to the | |
1651 | actual test suite name. Remember to pick unique prefixes for different | |
1652 | instances. | |
1653 | ||
1654 | In the special case where the type list contains only one type, you can write | |
1655 | that type directly without `::testing::Types<...>`, like this: | |
1656 | ||
1657 | ```c++ | |
1658 | INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int); | |
1659 | ``` | |
1660 | ||
1661 | You can see [sample6_unittest.cc] for a complete example. | |
1662 | ||
1663 | ## Testing Private Code | |
1664 | ||
1665 | If you change your software's internal implementation, your tests should not | |
1666 | break as long as the change is not observable by users. Therefore, **per the | |
1667 | black-box testing principle, most of the time you should test your code through | |
1668 | its public interfaces.** | |
1669 | ||
1670 | **If you still find yourself needing to test internal implementation code, | |
1671 | consider if there's a better design.** The desire to test internal | |
1672 | implementation is often a sign that the class is doing too much. Consider | |
1673 | extracting an implementation class, and testing it. Then use that implementation | |
1674 | class in the original class. | |
1675 | ||
1676 | If you absolutely have to test non-public interface code though, you can. There | |
1677 | are two cases to consider: | |
1678 | ||
1679 | * Static functions ( *not* the same as static member functions!) or unnamed | |
1680 | namespaces, and | |
1681 | * Private or protected class members | |
1682 | ||
1683 | To test them, we use the following special techniques: | |
1684 | ||
1685 | * Both static functions and definitions/declarations in an unnamed namespace | |
1686 | are only visible within the same translation unit. To test them, you can | |
1687 | `#include` the entire `.cc` file being tested in your `*_test.cc` file. | |
1688 | (#including `.cc` files is not a good way to reuse code - you should not do | |
1689 | this in production code!) | |
1690 | ||
1691 | However, a better approach is to move the private code into the | |
1692 | `foo::internal` namespace, where `foo` is the namespace your project | |
1693 | normally uses, and put the private declarations in a `*-internal.h` file. | |
1694 | Your production `.cc` files and your tests are allowed to include this | |
1695 | internal header, but your clients are not. This way, you can fully test your | |
1696 | internal implementation without leaking it to your clients. | |
1697 | ||
1698 | * Private class members are only accessible from within the class or by | |
1699 | friends. To access a class' private members, you can declare your test | |
1700 | fixture as a friend to the class and define accessors in your fixture. Tests | |
1701 | using the fixture can then access the private members of your production | |
1702 | class via the accessors in the fixture. Note that even though your fixture | |
1703 | is a friend to your production class, your tests are not automatically | |
1704 | friends to it, as they are technically defined in sub-classes of the | |
1705 | fixture. | |
1706 | ||
1707 | Another way to test private members is to refactor them into an | |
1708 | implementation class, which is then declared in a `*-internal.h` file. Your | |
clients aren't allowed to include this header, but your tests can. This is
called the
1711 | [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/) | |
1712 | (Private Implementation) idiom. | |
1713 | ||
1714 | Or, you can declare an individual test as a friend of your class by adding | |
1715 | this line in the class body: | |
1716 | ||
1717 | ```c++ | |
1718 | FRIEND_TEST(TestSuiteName, TestName); | |
1719 | ``` | |
1720 | ||
1721 | For example, | |
1722 | ||
1723 | ```c++ | |
1724 | // foo.h | |
1725 | class Foo { | |
1726 | ... | |
1727 | private: | |
1728 | FRIEND_TEST(FooTest, BarReturnsZeroOnNull); | |
1729 | ||
1730 | int Bar(void* x); | |
1731 | }; | |
1732 | ||
1733 | // foo_test.cc | |
1734 | ... | |
1735 | TEST(FooTest, BarReturnsZeroOnNull) { | |
1736 | Foo foo; | |
1737 | EXPECT_EQ(foo.Bar(NULL), 0); // Uses Foo's private member Bar(). | |
1738 | } | |
1739 | ``` | |
1740 | ||
1741 | Pay special attention when your class is defined in a namespace, as you | |
1742 | should define your test fixtures and tests in the same namespace if you want | |
1743 | them to be friends of your class. For example, if the code to be tested | |
1744 | looks like: | |
1745 | ||
1746 | ```c++ | |
1747 | namespace my_namespace { | |
1748 | ||
1749 | class Foo { | |
1750 | friend class FooTest; | |
1751 | FRIEND_TEST(FooTest, Bar); | |
1752 | FRIEND_TEST(FooTest, Baz); | |
1753 | ... definition of the class Foo ... | |
1754 | }; | |
1755 | ||
1756 | } // namespace my_namespace | |
1757 | ``` | |
1758 | ||
1759 | Your test code should be something like: | |
1760 | ||
1761 | ```c++ | |
1762 | namespace my_namespace { | |
1763 | ||
1764 | class FooTest : public ::testing::Test { | |
1765 | protected: | |
1766 | ... | |
1767 | }; | |
1768 | ||
1769 | TEST_F(FooTest, Bar) { ... } | |
1770 | TEST_F(FooTest, Baz) { ... } | |
1771 | ||
1772 | } // namespace my_namespace | |
1773 | ``` | |
1774 | ||
1775 | ## "Catching" Failures | |
1776 | ||
1777 | If you are building a testing utility on top of googletest, you'll want to test | |
1778 | your utility. What framework would you use to test it? googletest, of course. | |
1779 | ||
1780 | The challenge is to verify that your testing utility reports failures correctly. | |
1781 | In frameworks that report a failure by throwing an exception, you could catch | |
1782 | the exception and assert on it. But googletest doesn't use exceptions, so how do | |
1783 | we test that a piece of code generates an expected failure? | |
1784 | ||
1785 | `"gtest/gtest-spi.h"` contains some constructs to do this. After #including this header, | |
1786 | you can use | |
1787 | ||
1788 | ```c++ | |
1789 | EXPECT_FATAL_FAILURE(statement, substring); | |
1790 | ``` | |
1791 | ||
1792 | to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the | |
1793 | current thread whose message contains the given `substring`, or use | |
1794 | ||
1795 | ```c++ | |
1796 | EXPECT_NONFATAL_FAILURE(statement, substring); | |
1797 | ``` | |
1798 | ||
1799 | if you are expecting a non-fatal (e.g. `EXPECT_*`) failure. | |
1800 | ||
Only failures in the current thread are checked when determining the result of
this type of expectation. If `statement` creates new threads, failures in these
1803 | threads are also ignored. If you want to catch failures in other threads as | |
1804 | well, use one of the following macros instead: | |
1805 | ||
1806 | ```c++ | |
1807 | EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring); | |
1808 | EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring); | |
1809 | ``` | |
1810 | ||
1811 | NOTE: Assertions from multiple threads are currently not supported on Windows. | |
1812 | ||
1813 | For technical reasons, there are some caveats: | |
1814 | ||
1815 | 1. You cannot stream a failure message to either macro. | |
1816 | ||
1817 | 2. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference | |
1818 | local non-static variables or non-static members of `this` object. | |
1819 | ||
1820 | 3. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a | |
1821 | value. | |
1822 | ||
1823 | ## Registering tests programmatically | |
1824 | ||
The `TEST` macros handle the vast majority of use cases, but there are a few
where runtime registration logic is required. For those cases, the framework
provides `::testing::RegisterTest`, which allows callers to register arbitrary
tests dynamically.
1829 | ||
1830 | This is an advanced API only to be used when the `TEST` macros are insufficient. | |
1831 | The macros should be preferred when possible, as they avoid most of the | |
1832 | complexity of calling this function. | |
1833 | ||
1834 | It provides the following signature: | |
1835 | ||
1836 | ```c++ | |
1837 | template <typename Factory> | |
1838 | TestInfo* RegisterTest(const char* test_suite_name, const char* test_name, | |
1839 | const char* type_param, const char* value_param, | |
1840 | const char* file, int line, Factory factory); | |
1841 | ``` | |
1842 | ||
The `factory` argument is a factory callable (move-constructible) object or
function pointer that creates a new instance of the Test object, handing
ownership of that instance to the caller. The signature of the callable is
`Fixture*()`, where `Fixture` is the test fixture class for the test. All tests
registered with the same `test_suite_name` must return the same fixture type.
This is checked at runtime.
1849 | ||
1850 | The framework will infer the fixture class from the factory and will call the | |
1851 | `SetUpTestSuite` and `TearDownTestSuite` for it. | |
1852 | ||
1853 | Must be called before `RUN_ALL_TESTS()` is invoked, otherwise behavior is | |
1854 | undefined. | |
1855 | ||
1856 | Use case example: | |
1857 | ||
1858 | ```c++ | |
1859 | class MyFixture : public ::testing::Test { | |
1860 | public: | |
1861 | // All of these optional, just like in regular macro usage. | |
1862 | static void SetUpTestSuite() { ... } | |
1863 | static void TearDownTestSuite() { ... } | |
1864 | void SetUp() override { ... } | |
1865 | void TearDown() override { ... } | |
1866 | }; | |
1867 | ||
1868 | class MyTest : public MyFixture { | |
1869 | public: | |
1870 | explicit MyTest(int data) : data_(data) {} | |
1871 | void TestBody() override { ... } | |
1872 | ||
1873 | private: | |
1874 | int data_; | |
1875 | }; | |
1876 | ||
1877 | void RegisterMyTests(const std::vector<int>& values) { | |
1878 | for (int v : values) { | |
1879 | ::testing::RegisterTest( | |
1880 | "MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr, | |
1881 | std::to_string(v).c_str(), | |
1882 | __FILE__, __LINE__, | |
1883 | // Important to use the fixture type as the return type here. | |
1884 | [=]() -> MyFixture* { return new MyTest(v); }); | |
1885 | } | |
1886 | } | |
1887 | ... | |
1888 | int main(int argc, char** argv) { | |
1889 | std::vector<int> values_to_test = LoadValuesFromConfig(); | |
1890 | RegisterMyTests(values_to_test); | |
1891 | ... | |
1892 | return RUN_ALL_TESTS(); | |
1893 | } | |
1894 | ``` | |

## Getting the Current Test's Name
1896 | ||
1897 | Sometimes a function may need to know the name of the currently running test. | |
1898 | For example, you may be using the `SetUp()` method of your test fixture to set | |
1899 | the golden file name based on which test is running. The `::testing::TestInfo` | |
1900 | class has this information: | |
1901 | ||
1902 | ```c++ | |
1903 | namespace testing { | |
1904 | ||
1905 | class TestInfo { | |
1906 | public: | |
1907 | // Returns the test suite name and the test name, respectively. | |
1908 | // | |
1909 | // Do NOT delete or free the return value - it's managed by the | |
1910 | // TestInfo class. | |
1911 | const char* test_suite_name() const; | |
1912 | const char* name() const; | |
1913 | }; | |
1914 | ||
1915 | } | |
1916 | ``` | |
1917 | ||
1918 | To obtain a `TestInfo` object for the currently running test, call | |
1919 | `current_test_info()` on the `UnitTest` singleton object: | |
1920 | ||
1921 | ```c++ | |
1922 | // Gets information about the currently running test. | |
1923 | // Do NOT delete the returned object - it's managed by the UnitTest class. | |
1924 | const ::testing::TestInfo* const test_info = | |
1925 | ::testing::UnitTest::GetInstance()->current_test_info(); | |
1926 | ||
1927 | printf("We are in test %s of test suite %s.\n", | |
1928 | test_info->name(), | |
1929 | test_info->test_suite_name()); | |
1930 | ``` | |
1931 | ||
1932 | `current_test_info()` returns a null pointer if no test is running. In | |
1933 | particular, you cannot find the test suite name in `SetUpTestSuite()`, | |
1934 | `TearDownTestSuite()` (where you know the test suite name implicitly), or | |
1935 | functions called from them. | |
1936 | ||
1937 | ## Extending googletest by Handling Test Events | |
1938 | ||
1939 | googletest provides an **event listener API** to let you receive notifications | |
1940 | about the progress of a test program and test failures. The events you can | |
1941 | listen to include the start and end of the test program, a test suite, or a test | |
1942 | method, among others. You may use this API to augment or replace the standard | |
1943 | console output, replace the XML output, or provide a completely different form | |
1944 | of output, such as a GUI or a database. You can also use test events as | |
1945 | checkpoints to implement a resource leak checker, for example. | |
1946 | ||
1947 | ### Defining Event Listeners | |
1948 | ||
To define an event listener, you subclass either `testing::TestEventListener`
or `testing::EmptyTestEventListener`. The former is an (abstract) interface,
where *each pure virtual method can be overridden to handle a test event* (for
example, when a test starts, the `OnTestStart()` method will be called). The
latter provides an empty implementation of all methods in the interface, so
that a subclass only needs to override the methods it cares about.
1955 | ||
1956 | When an event is fired, its context is passed to the handler function as an | |
1957 | argument. The following argument types are used: | |
1958 | ||
*   `UnitTest` reflects the state of the entire test program,
*   `TestSuite` has information about a test suite, which can contain one or
    more tests,
*   `TestInfo` contains the state of a test, and
*   `TestPartResult` represents the result of a test assertion.
1964 | ||
1965 | An event handler function can examine the argument it receives to find out | |
1966 | interesting information about the event and the test program's state. | |
1967 | ||
1968 | Here's an example: | |
1969 | ||
1970 | ```c++ | |
1971 | class MinimalistPrinter : public ::testing::EmptyTestEventListener { | |
1972 | // Called before a test starts. | |
1973 | virtual void OnTestStart(const ::testing::TestInfo& test_info) { | |
1974 | printf("*** Test %s.%s starting.\n", | |
1975 | test_info.test_suite_name(), test_info.name()); | |
1976 | } | |
1977 | ||
1978 | // Called after a failed assertion or a SUCCESS(). | |
1979 | virtual void OnTestPartResult(const ::testing::TestPartResult& test_part_result) { | |
1980 | printf("%s in %s:%d\n%s\n", | |
1981 | test_part_result.failed() ? "*** Failure" : "Success", | |
1982 | test_part_result.file_name(), | |
1983 | test_part_result.line_number(), | |
1984 | test_part_result.summary()); | |
1985 | } | |
1986 | ||
1987 | // Called after a test ends. | |
1988 | virtual void OnTestEnd(const ::testing::TestInfo& test_info) { | |
1989 | printf("*** Test %s.%s ending.\n", | |
1990 | test_info.test_suite_name(), test_info.name()); | |
1991 | } | |
1992 | }; | |
1993 | ``` | |
1994 | ||
1995 | ### Using Event Listeners | |
1996 | ||
To use the event listener you have defined, add an instance of it to the
googletest event listener list (represented by the class `TestEventListeners` -
note the "s" at the end of the name) in your `main()` function, before calling
`RUN_ALL_TESTS()`:
2001 | ||
2002 | ```c++ | |
2003 | int main(int argc, char** argv) { | |
2004 | ::testing::InitGoogleTest(&argc, argv); | |
2005 | // Gets hold of the event listener list. | |
2006 | ::testing::TestEventListeners& listeners = | |
2007 | ::testing::UnitTest::GetInstance()->listeners(); | |
2008 | // Adds a listener to the end. googletest takes the ownership. | |
2009 | listeners.Append(new MinimalistPrinter); | |
2010 | return RUN_ALL_TESTS(); | |
2011 | } | |
2012 | ``` | |
2013 | ||
2014 | There's only one problem: the default test result printer is still in effect, so | |
2015 | its output will mingle with the output from your minimalist printer. To suppress | |
2016 | the default printer, just release it from the event listener list and delete it. | |
2017 | You can do so by adding one line: | |
2018 | ||
2019 | ```c++ | |
2020 | ... | |
2021 | delete listeners.Release(listeners.default_result_printer()); | |
2022 | listeners.Append(new MinimalistPrinter); | |
2023 | return RUN_ALL_TESTS(); | |
2024 | ``` | |
2025 | ||
2026 | Now, sit back and enjoy a completely different output from your tests. For more | |
2027 | details, see [sample9_unittest.cc]. | |
2028 | ||
2029 | [sample9_unittest.cc]: ../samples/sample9_unittest.cc "Event listener example" | |
2030 | ||
2031 | You may append more than one listener to the list. When an `On*Start()` or | |
2032 | `OnTestPartResult()` event is fired, the listeners will receive it in the order | |
2033 | they appear in the list (since new listeners are added to the end of the list, | |
2034 | the default text printer and the default XML generator will receive the event | |
2035 | first). An `On*End()` event will be received by the listeners in the *reverse* | |
2036 | order. This allows output by listeners added later to be framed by output from | |
2037 | listeners added earlier. | |
2038 | ||
2039 | ### Generating Failures in Listeners | |
2040 | ||
You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc.)
when processing an event. There are some restrictions:
2043 | ||
2044 | 1. You cannot generate any failure in `OnTestPartResult()` (otherwise it will | |
2045 | cause `OnTestPartResult()` to be called recursively). | |
2046 | 2. A listener that handles `OnTestPartResult()` is not allowed to generate any | |
2047 | failure. | |
2048 | ||
2049 | When you add listeners to the listener list, you should put listeners that | |
2050 | handle `OnTestPartResult()` *before* listeners that can generate failures. This | |
2051 | ensures that failures generated by the latter are attributed to the right test | |
2052 | by the former. | |
2053 | ||
2054 | See [sample10_unittest.cc] for an example of a failure-raising listener. | |
2055 | ||
2056 | [sample10_unittest.cc]: ../samples/sample10_unittest.cc "Failure-raising listener example" | |
2057 | ||
## Running Test Programs: Advanced Options

googletest test programs are ordinary executables. Once built, you can run them
directly and affect their behavior via the following environment variables
and/or command line flags. For the flags to work, your programs must call
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.

To see a list of supported flags and their usage, please run your test program
with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.

If an option is specified both by an environment variable and by a flag, the
latter takes precedence.

### Selecting Tests

#### Listing Test Names

Sometimes it is necessary to list the available tests in a program before
running them so that a filter may be applied if needed. Including the flag
`--gtest_list_tests` overrides all other flags and lists tests in the following
format:

```none
TestSuite1.
  TestName1
  TestName2
TestSuite2.
  TestName
```

None of the tests listed are actually run if the flag is provided. There is no
corresponding environment variable for this flag.

#### Running a Subset of the Tests

By default, a googletest program runs all tests the user has defined. Sometimes,
you want to run only a subset of the tests (e.g. for debugging or quickly
verifying a change). If you set the `GTEST_FILTER` environment variable or the
`--gtest_filter` flag to a filter string, googletest will only run the tests
whose full names (in the form of `TestSuiteName.TestName`) match the filter.

The format of a filter is a '`:`'-separated list of wildcard patterns (called
the *positive patterns*) optionally followed by a '`-`' and another
'`:`'-separated pattern list (called the *negative patterns*). A test matches
the filter if and only if it matches any of the positive patterns but does not
match any of the negative patterns.

A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
character). For convenience, the filter `'*-NegativePatterns'` can also be
written as `'-NegativePatterns'`.

For example:

* `./foo_test` Has no flag, and thus runs all its tests.
* `./foo_test --gtest_filter=*` Also runs everything, due to the single
  match-everything `*` value.
* `./foo_test --gtest_filter=FooTest.*` Runs everything in test suite
  `FooTest`.
* `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
  name contains either `"Null"` or `"Constructor"`.
* `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
* `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
  suite `FooTest` except `FooTest.Bar`.
* `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs
  everything in test suite `FooTest` except `FooTest.Bar` and everything in
  test suite `BarTest` except `BarTest.Foo`.

#### Stop test execution upon first failure

By default, a googletest program runs all tests the user has defined. In some
cases (e.g. iterative test development & execution) it may be desirable to stop
test execution upon the first failure (trading improved latency for
completeness). If the `GTEST_FAIL_FAST` environment variable or the
`--gtest_fail_fast` flag is set, the test runner will stop execution as soon as
the first test failure is found.

#### Temporarily Disabling Tests

If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).

If you need to disable all tests in a test suite, you can either add `DISABLED_`
to the front of the name of each test, or alternatively add it to the front of
the test suite name.

For example, the following tests won't be run by googletest, even though they
will still be compiled:

```c++
// Tests that Foo does Abc.
TEST(FooTest, DISABLED_DoesAbc) { ... }

class DISABLED_BarTest : public ::testing::Test { ... };

// Tests that Bar does Xyz.
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
```

NOTE: This feature should only be used for temporary pain-relief. You still have
to fix the disabled tests at a later date. As a reminder, googletest will print
a banner warning you if a test program contains any disabled tests.

TIP: You can easily count the number of disabled tests you have using `gsearch`
and/or `grep`. This number can be used as a metric for improving your test
quality.

#### Temporarily Enabling Disabled Tests

To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag or set the
`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.
You can combine this with the `--gtest_filter` flag to further select which
disabled tests to run.

### Repeating the Tests

Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
will fail only 1% of the time, making it rather hard to reproduce the bug under
a debugger. This can be a major source of frustration.

The `--gtest_repeat` flag allows you to repeat all (or selected) test methods in
a program many times. Hopefully, a flaky test will eventually fail and give you
a chance to debug. Here's how to use it:

```none
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.

$ foo_test --gtest_repeat=-1
A negative count means repeating forever.

$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure. This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.

$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.*
Repeat the tests whose name matches the filter 1000 times.
```

If your test program contains
[global set-up/tear-down](#global-set-up-and-tear-down) code, it will be
repeated in each iteration as well, as the flakiness may be in it. You can also
specify the repeat count by setting the `GTEST_REPEAT` environment variable.

### Shuffling the Tests

You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
environment variable to `1`) to run the tests in a program in a random order.
This helps to reveal bad dependencies between tests.

By default, googletest uses a random seed calculated from the current time.
Therefore you'll get a different order every time. The console output includes
the random seed value, such that you can reproduce an order-related test failure
later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED`
flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is an
integer in the range [0, 99999]. The seed value 0 is special: it tells
googletest to do the default behavior of calculating the seed from the current
time.

If you combine this with `--gtest_repeat=N`, googletest will pick a different
random seed and re-shuffle the tests in each iteration.

### Controlling Test Output

#### Colored Terminal Output

googletest can use colors in its terminal output to make it easier to spot the
important information:

<code>
...<br/>
<font color="green">[----------]</font><font color="black"> 1 test from
FooTest</font><br/>
<font color="green">[ RUN ]</font><font color="black">
FooTest.DoesAbc</font><br/>
<font color="green">[ OK ]</font><font color="black">
FooTest.DoesAbc </font><br/>
<font color="green">[----------]</font><font color="black">
2 tests from BarTest</font><br/>
<font color="green">[ RUN ]</font><font color="black">
BarTest.HasXyzProperty </font><br/>
<font color="green">[ OK ]</font><font color="black">
BarTest.HasXyzProperty</font><br/>
<font color="green">[ RUN ]</font><font color="black">
BarTest.ReturnsTrueOnSuccess ... some error messages ...</font><br/>
<font color="red">[ FAILED ]</font><font color="black">
BarTest.ReturnsTrueOnSuccess ...</font><br/>
<font color="green">[==========]</font><font color="black">
30 tests from 14 test suites ran.</font><br/>
<font color="green">[ PASSED ]</font><font color="black">
28 tests.</font><br/>
<font color="red">[ FAILED ]</font><font color="black">
2 tests, listed below:</font><br/>
<font color="red">[ FAILED ]</font><font color="black">
BarTest.ReturnsTrueOnSuccess</font><br/>
<font color="red">[ FAILED ]</font><font color="black">
AnotherTest.DoesXyz<br/>
<br/>
2 FAILED TESTS
</font>
</code>

You can set the `GTEST_COLOR` environment variable or the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let googletest decide. When the value is `auto`, googletest
will use colors if and only if the output goes to a terminal and (on non-Windows
platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.

#### Suppressing Test Passes

By default, googletest prints one line of output for each test, indicating
whether it passed or failed. To show only test failures, run the test program
with `--gtest_brief=1`, or set the `GTEST_BRIEF` environment variable to `1`.

#### Suppressing the Elapsed Time

By default, googletest prints the time it takes to run each test. To disable
that, run the test program with the `--gtest_print_time=0` command line flag, or
set the `GTEST_PRINT_TIME` environment variable to `0`.

#### Suppressing UTF-8 Text Output

In case of assertion failures, googletest prints expected and actual values of
type `string` both as hex-encoded strings and as readable UTF-8 text if they
contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8
text because, for example, you don't have a UTF-8 compatible output medium, run
the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8`
environment variable to `0`.

#### Generating an XML Report

googletest can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can help
you identify slow tests.

To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will
create the file at the given location. You can also just use the string `"xml"`,
in which case the output can be found in the `test_detail.xml` file in the
current directory.

If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), googletest will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
over from a previous run), googletest will pick a different name (e.g.
`foo_test_1.xml`) to avoid overwriting it.

The report is based on the `junitreport` Ant task. Since that format was
originally intended for Java, a little interpretation is required to make it
apply to googletest tests, as shown here:

```xml
<testsuites name="AllTests" ...>
  <testsuite name="test_suite_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```

* The root `<testsuites>` element corresponds to the entire test program.
* `<testsuite>` elements correspond to googletest test suites.
* `<testcase>` elements correspond to googletest test functions.

For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015">
    <testcase name="Addition" status="run" time="0.007" classname="">
      <failure message="Value of: add(1, 1)&#x0A;  Actual: 3&#x0A;Expected: 2" type="">...</failure>
      <failure message="Value of: add(1, -1)&#x0A;  Actual: 1&#x0A;Expected: 0" type="">...</failure>
    </testcase>
    <testcase name="Subtraction" status="run" time="0.005" classname="">
    </testcase>
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005">
    <testcase name="NonContradiction" status="run" time="0.005" classname="">
    </testcase>
  </testsuite>
</testsuites>
```

Things to note:

* The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how
  many test functions the googletest program or test suite contains, while the
  `failures` attribute tells how many of them failed.

* The `time` attribute expresses the duration of the test, test suite, or
  entire test program in seconds.

* The `timestamp` attribute records the local date and time of the test
  execution.

* Each `<failure>` element corresponds to a single failed googletest
  assertion.

#### Generating a JSON Report

googletest can also emit a JSON report as an alternative format to XML. To
generate the JSON report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"json:path_to_output_file"`, which will
create the file at the given location. You can also just use the string
`"json"`, in which case the output can be found in the `test_detail.json` file
in the current directory.

The report format conforms to the following JSON Schema:

```json
{
  "$schema": "http://json-schema.org/schema#",
  "type": "object",
  "definitions": {
    "TestCase": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "tests": { "type": "integer" },
        "failures": { "type": "integer" },
        "disabled": { "type": "integer" },
        "time": { "type": "string" },
        "testsuite": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/TestInfo"
          }
        }
      }
    },
    "TestInfo": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "status": {
          "type": "string",
          "enum": ["RUN", "NOTRUN"]
        },
        "time": { "type": "string" },
        "classname": { "type": "string" },
        "failures": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/Failure"
          }
        }
      }
    },
    "Failure": {
      "type": "object",
      "properties": {
        "failures": { "type": "string" },
        "type": { "type": "string" }
      }
    }
  },
  "properties": {
    "tests": { "type": "integer" },
    "failures": { "type": "integer" },
    "disabled": { "type": "integer" },
    "errors": { "type": "integer" },
    "timestamp": {
      "type": "string",
      "format": "date-time"
    },
    "time": { "type": "string" },
    "name": { "type": "string" },
    "testsuites": {
      "type": "array",
      "items": {
        "$ref": "#/definitions/TestCase"
      }
    }
  }
}
```

The report format also conforms to the following Proto3 definition, using the
[JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json):

```proto
syntax = "proto3";

package googletest;

import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";

message UnitTest {
  int32 tests = 1;
  int32 failures = 2;
  int32 disabled = 3;
  int32 errors = 4;
  google.protobuf.Timestamp timestamp = 5;
  google.protobuf.Duration time = 6;
  string name = 7;
  repeated TestCase testsuites = 8;
}

message TestCase {
  string name = 1;
  int32 tests = 2;
  int32 failures = 3;
  int32 disabled = 4;
  int32 errors = 5;
  google.protobuf.Duration time = 6;
  repeated TestInfo testsuite = 7;
}

message TestInfo {
  string name = 1;
  enum Status {
    RUN = 0;
    NOTRUN = 1;
  }
  Status status = 2;
  google.protobuf.Duration time = 3;
  string classname = 4;
  message Failure {
    string failures = 1;
    string type = 2;
  }
  repeated Failure failures = 5;
}
```

For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:

```json
{
  "tests": 3,
  "failures": 1,
  "errors": 0,
  "time": "0.035s",
  "timestamp": "2011-10-31T18:52:42Z",
  "name": "AllTests",
  "testsuites": [
    {
      "name": "MathTest",
      "tests": 2,
      "failures": 1,
      "errors": 0,
      "time": "0.015s",
      "testsuite": [
        {
          "name": "Addition",
          "status": "RUN",
          "time": "0.007s",
          "classname": "",
          "failures": [
            {
              "message": "Value of: add(1, 1)\n  Actual: 3\nExpected: 2",
              "type": ""
            },
            {
              "message": "Value of: add(1, -1)\n  Actual: 1\nExpected: 0",
              "type": ""
            }
          ]
        },
        {
          "name": "Subtraction",
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    },
    {
      "name": "LogicTest",
      "tests": 1,
      "failures": 0,
      "errors": 0,
      "time": "0.005s",
      "testsuite": [
        {
          "name": "NonContradiction",
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    }
  ]
}
```

IMPORTANT: The exact format of the JSON document is subject to change.

### Controlling How Failures Are Reported

#### Detecting Test Premature Exit

Google Test implements the _premature-exit-file_ protocol for test runners to
catch any kind of unexpected exit of a test program. Upon start, Google Test
creates a file, which it automatically deletes after all its work has finished.
The test runner can then check whether this file exists: if it remains
undeleted, the inspected test program has exited prematurely.

This feature is enabled only if the `TEST_PREMATURE_EXIT_FILE` environment
variable has been set.

#### Turning Assertion Failures into Break-Points

When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. googletest's *break-on-failure* mode supports this behavior.

To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
command line flag.

#### Disabling Catching Test-Thrown Exceptions

googletest can be used either with or without exceptions enabled. If a test
throws a C++ exception or (on Windows) a structured exception (SEH), by default
googletest catches it, reports it as a test failure, and continues with the next
test method. This maximizes the coverage of a test run. Also, on Windows an
uncaught exception will cause a pop-up window, so catching the exceptions allows
you to run the tests automatically.

When debugging the test failures, however, you may instead want the exceptions
to be handled by the debugger, such that you can examine the call stack when an
exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when
running the tests.