# Advanced googletest Topics

Now that you have read the [googletest Primer](primer.md) and learned how to write
tests using googletest, it's time to learn some new tricks. This document will
show you more assertions as well as how to construct complex failure messages,
propagate fatal failures, reuse and speed up your test fixtures, and use various
flags with your tests.
## More Assertions

This section covers some less frequently used, but still significant,
assertions.

### Explicit Success and Failure
These three assertions do not actually test a value or expression. Instead, they
generate a success or failure directly. Like the macros that actually perform a
test, you may stream a custom failure message into them.
```c++
SUCCEED();
```

Generates a success. This does **NOT** make the overall test succeed. A test is
considered successful only if none of its assertions fail during its execution.

NOTE: `SUCCEED()` is purely documentary and currently doesn't generate any
user-visible output. However, we may add `SUCCEED()` messages to googletest's
output in the future.
```c++
FAIL();
ADD_FAILURE();
ADD_FAILURE_AT("file_path", line_number);
```
`FAIL()` generates a fatal failure, while `ADD_FAILURE()` and `ADD_FAILURE_AT()`
generate a nonfatal failure. These are useful when control flow, rather than a
Boolean expression, determines the test's success or failure. For example, you
might want to write something like:
```c++
switch(expression) {
  case 1:
    ... some checks ...
  case 2:
    ... some other checks ...
  default:
    FAIL() << "We shouldn't get here.";
}
```
NOTE: you can only use `FAIL()` in functions that return `void`. See the
[Assertion Placement section](#assertion-placement) for more information.

**Availability**: Linux, Windows, Mac.
### Exception Assertions

These are for verifying that a piece of code throws (or does not throw) an
exception of the given type:

Fatal assertion                            | Nonfatal assertion                         | Verifies
------------------------------------------ | ------------------------------------------ | --------
`ASSERT_THROW(statement, exception_type);` | `EXPECT_THROW(statement, exception_type);` | `statement` throws an exception of the given type
`ASSERT_ANY_THROW(statement);`             | `EXPECT_ANY_THROW(statement);`             | `statement` throws an exception of any type
`ASSERT_NO_THROW(statement);`              | `EXPECT_NO_THROW(statement);`              | `statement` doesn't throw any exception
Examples:

```c++
ASSERT_THROW(Foo(5), bar_exception);

EXPECT_NO_THROW({
  int n = 5;
  Bar(&n);
});
```
**Availability**: Linux, Windows, Mac; requires exceptions to be enabled in the
build environment (note that `google3` **disables** exceptions).
### Predicate Assertions for Better Error Messages

Even though googletest has a rich set of assertions, they can never be complete,
as it's impossible (and not a good idea) to anticipate all the scenarios a user
might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check
a complex expression, for lack of a better macro. This has the problem of not
showing you the values of the parts of the expression, making it hard to
understand what went wrong. As a workaround, some users choose to construct the
failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this
is awkward especially when the expression has side-effects or is expensive to
evaluate.

googletest gives you three different options to solve this problem:
#### Using an Existing Boolean Function

If you already have a function or functor that returns `bool` (or a type that
can be implicitly converted to `bool`), you can use it in a *predicate
assertion* to get the function arguments printed for free:

| Fatal assertion                    | Nonfatal assertion                 | Verifies                    |
| ---------------------------------- | ---------------------------------- | --------------------------- |
| `ASSERT_PRED1(pred1, val1);`       | `EXPECT_PRED1(pred1, val1);`       | `pred1(val1)` is true       |
| `ASSERT_PRED2(pred2, val1, val2);` | `EXPECT_PRED2(pred2, val1, val2);` | `pred2(val1, val2)` is true |
| `...`                              | `...`                              | ...                         |
In the above, `predn` is an `n`-ary predicate function or functor, where `val1`,
`val2`, ..., and `valn` are its arguments. The assertion succeeds if the
predicate returns `true` when applied to the given arguments, and fails
otherwise. When the assertion fails, it prints the value of each argument. In
either case, the arguments are evaluated exactly once.
Here's an example. Given

```c++
// Returns true if m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { ... }

const int a = 3;
const int b = 4;
const int c = 10;
```

the assertion

```c++
EXPECT_PRED2(MutuallyPrime, a, b);
```

will succeed, while the assertion

```c++
EXPECT_PRED2(MutuallyPrime, b, c);
```

will fail with the message

```none
MutuallyPrime(b, c) is false, where
b is 4
c is 10
```
> NOTE:
>
> 1.  If you see a compiler error "no matching function to call" when using
>     `ASSERT_PRED*` or `EXPECT_PRED*`, please see
>     [this](faq.md#OverloadedPredicate) for how to resolve it.
> 1.  Currently we only provide predicate assertions of arity <= 5. If you need
>     a higher-arity assertion, let [us](https://github.com/google/googletest/issues) know.

**Availability**: Linux, Windows, Mac.
#### Using a Function That Returns an AssertionResult

While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not
satisfactory: you have to use different macros for different arities, and it
feels more like Lisp than C++. The `::testing::AssertionResult` class solves
this problem.

An `AssertionResult` object represents the result of an assertion (whether it's
a success or a failure, and an associated message). You can create an
`AssertionResult` using one of these factory functions:
```c++
namespace testing {

// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();

}
```
You can then use the `<<` operator to stream messages to the `AssertionResult`
object.

To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
write a predicate function that returns `AssertionResult` instead of `bool`. For
example, if you define `IsEven()` as:
```c++
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess();
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
```

instead of:

```c++
bool IsEven(int n) {
  return (n % 2) == 0;
}
```

the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:
```none
Value of: IsEven(Fib(4))
  Actual: false (3 is odd)
Expected: true
```

instead of a more opaque

```none
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```
If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well
(one third of Boolean assertions in the Google code base are negative ones), and
are fine with making the predicate slower in the success case, you can supply a
success message:

```c++
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess() << n << " is even";
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
```
Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print

```none
Value of: IsEven(Fib(6))
  Actual: true (8 is even)
Expected: false
```

**Availability**: Linux, Windows, Mac.
#### Using a Predicate-Formatter

If you find the default message generated by `(ASSERT|EXPECT)_PRED*` and
`(ASSERT|EXPECT)_(TRUE|FALSE)` unsatisfactory, or some arguments to your
predicate do not support streaming to `ostream`, you can instead use the
following *predicate-formatter assertions* to *fully* customize how the message
is formatted:
Fatal assertion                                  | Nonfatal assertion                               | Verifies
------------------------------------------------ | ------------------------------------------------ | --------
`ASSERT_PRED_FORMAT1(pred_format1, val1);`       | `EXPECT_PRED_FORMAT1(pred_format1, val1);`       | `pred_format1(val1)` is successful
`ASSERT_PRED_FORMAT2(pred_format2, val1, val2);` | `EXPECT_PRED_FORMAT2(pred_format2, val1, val2);` | `pred_format2(val1, val2)` is successful
The difference between this and the previous group of macros is that instead of
a predicate, `(ASSERT|EXPECT)_PRED_FORMAT*` take a *predicate-formatter*
(`pred_formatn`), which is a function or functor with the signature:

```c++
::testing::AssertionResult PredicateFormattern(const char* expr1,
                                               const char* expr2,
                                               ...
                                               const char* exprn,
                                               T1 val1,
                                               T2 val2,
                                               ...
                                               Tn valn);
```

where `val1`, `val2`, ..., and `valn` are the values of the predicate arguments,
and `expr1`, `expr2`, ..., and `exprn` are the corresponding expressions as they
appear in the source code. The types `T1`, `T2`, ..., and `Tn` can be either
value types or reference types. For example, if an argument has type `Foo`, you
can declare it as either `Foo` or `const Foo&`, whichever is appropriate.
As an example, let's improve the failure message in `MutuallyPrime()`, which was
used with `EXPECT_PRED2()`:

```c++
// Returns the smallest prime common divisor of m and n,
// or 1 when m and n are mutually prime.
int SmallestPrimeCommonDivisor(int m, int n) { ... }

// A predicate-formatter for asserting that two integers are mutually prime.
::testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                               const char* n_expr,
                                               int m,
                                               int n) {
  if (MutuallyPrime(m, n)) return ::testing::AssertionSuccess();

  return ::testing::AssertionFailure() << m_expr << " and " << n_expr
      << " (" << m << " and " << n << ") are not mutually prime, "
      << "as they have a common divisor " << SmallestPrimeCommonDivisor(m, n);
}
```
With this predicate-formatter, we can use

```c++
EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c);
```

to generate the message

```none
b and c (4 and 10) are not mutually prime, as they have a common divisor 2.
```
As you may have realized, many of the built-in assertions we introduced earlier
are special cases of `(EXPECT|ASSERT)_PRED_FORMAT*`. In fact, most of them are
indeed defined using `(EXPECT|ASSERT)_PRED_FORMAT*`.

**Availability**: Linux, Windows, Mac.
### Floating-Point Comparison

Comparing floating-point numbers is tricky. Due to round-off errors, it is very
unlikely that two floating-point values will match exactly. Therefore,
`ASSERT_EQ`'s naive comparison usually doesn't work. And since floating-point
values can have a wide value range, no single fixed error bound works. It's
better to compare by a fixed relative error bound, except for values close to 0
due to the loss of precision there.
In general, for floating-point comparison to make sense, the user needs to
carefully choose the error bound. If they don't want or care to, comparing in
terms of Units in the Last Place (ULPs) is a good default, and googletest
provides assertions to do this. Full details about ULPs are quite long; if you
want to learn more, see
[here](https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/).
#### Floating-Point Macros

| Fatal assertion                 | Nonfatal assertion              | Verifies                                 |
| ------------------------------- | ------------------------------- | ---------------------------------------- |
| `ASSERT_FLOAT_EQ(val1, val2);`  | `EXPECT_FLOAT_EQ(val1, val2);`  | the two `float` values are almost equal  |
| `ASSERT_DOUBLE_EQ(val1, val2);` | `EXPECT_DOUBLE_EQ(val1, val2);` | the two `double` values are almost equal |

By "almost equal" we mean the values are within 4 ULPs of each other.
NOTE: `CHECK_DOUBLE_EQ()` in `base/logging.h` uses a fixed absolute error bound,
so its result may differ from that of the googletest macros. That macro is
unsafe and has been deprecated. Please don't use it any more.
The following assertions allow you to choose the acceptable error bound:

| Fatal assertion                       | Nonfatal assertion                    | Verifies                                                                         |
| ------------------------------------- | ------------------------------------- | -------------------------------------------------------------------------------- |
| `ASSERT_NEAR(val1, val2, abs_error);` | `EXPECT_NEAR(val1, val2, abs_error);` | the difference between `val1` and `val2` doesn't exceed the given absolute error |
**Availability**: Linux, Windows, Mac.
#### Floating-Point Predicate-Format Functions

Some floating-point operations are useful, but not that often used. In order to
avoid an explosion of new macros, we provide them as predicate-format functions
that can be used in predicate assertion macros (e.g. `EXPECT_PRED_FORMAT2`,
etc.):

```c++
EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2);
```
These verify that `val1` is less than, or almost equal to, `val2`. You can
replace `EXPECT_PRED_FORMAT2` in the above with `ASSERT_PRED_FORMAT2`.

**Availability**: Linux, Windows, Mac.
### Asserting Using gMock Matchers

The Google-developed C++ mocking framework [gMock](../../googlemock) comes with
a library of matchers for validating arguments passed to mock objects. A gMock
*matcher* is basically a predicate that knows how to describe itself. It can be
used in these assertion macros:

| Fatal assertion                | Nonfatal assertion             | Verifies                  |
| ------------------------------ | ------------------------------ | ------------------------- |
| `ASSERT_THAT(value, matcher);` | `EXPECT_THAT(value, matcher);` | `value` matches `matcher` |
For example, `StartsWith(prefix)` is a matcher that matches a string starting
with `prefix`, and you can write:

```c++
using ::testing::StartsWith;

// Verifies that Foo() returns a string starting with "Hello".
EXPECT_THAT(Foo(), StartsWith("Hello"));
```

Read this [recipe](../../googlemock/docs/CookBook.md#using-matchers-in-google-test-assertions) in
the gMock Cookbook for more details.
gMock has a rich set of matchers. Using them, you can do many things that
googletest alone cannot. For a list of matchers gMock provides, read
[this](../../googlemock/docs/CookBook.md#using-matchers). Especially useful among them are
some [protocol buffer matchers](https://github.com/google/nucleus/blob/master/nucleus/testing/protocol-buffer-matchers.h). It's easy to write
your [own matchers](../../googlemock/docs/CookBook.md#writing-new-matchers-quickly) too.
For example, you can use gMock's
[EqualsProto](https://github.com/google/nucleus/blob/master/nucleus/testing/protocol-buffer-matchers.h)
to compare protos in your tests:

```c++
#include "testing/base/public/gmock.h"
using ::testing::EqualsProto;
...
EXPECT_THAT(actual_proto, EqualsProto("foo: 123 bar: 'xyz'"));
EXPECT_THAT(*actual_proto_ptr, EqualsProto(expected_proto));
```

gMock is bundled with googletest, so you don't need to add any build dependency
in order to take advantage of this. Just include `"testing/base/public/gmock.h"`
and you're ready to go.
**Availability**: Linux, Windows, and Mac.
### More String Assertions

(Please read the [previous](#AssertThat) section first if you haven't.)

You can use the gMock [string matchers](../../googlemock/docs/CheatSheet.md#string-matchers)
with `EXPECT_THAT()` or `ASSERT_THAT()` to do more string comparison tricks
(substring, prefix, suffix, regular expression, etc.). For example,

```c++
using ::testing::HasSubstr;
using ::testing::MatchesRegex;
...
ASSERT_THAT(foo_string, HasSubstr("needle"));
EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
```

**Availability**: Linux, Windows, Mac.
If the string contains a well-formed HTML or XML document, you can check whether
its DOM tree matches an [XPath
expression](http://www.w3.org/TR/xpath/#contents):

```c++
// Currently still in //template/prototemplate/testing:xpath_matcher
#include "template/prototemplate/testing/xpath_matcher.h"
using prototemplate::testing::MatchesXPath;
EXPECT_THAT(html_string, MatchesXPath("//a[text()='click here']"));
```

**Availability**: Linux.
### Windows HRESULT assertions

These assertions test for `HRESULT` success or failure.

Fatal assertion                        | Nonfatal assertion                     | Verifies
-------------------------------------- | -------------------------------------- | --------
`ASSERT_HRESULT_SUCCEEDED(expression)` | `EXPECT_HRESULT_SUCCEEDED(expression)` | `expression` is a success `HRESULT`
`ASSERT_HRESULT_FAILED(expression)`    | `EXPECT_HRESULT_FAILED(expression)`    | `expression` is a failure `HRESULT`

The generated output contains the human-readable error message associated with
the `HRESULT` code returned by `expression`.

You might use them like this:

```c++
CComPtr<IShellDispatch2> shell;
ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application"));
CComVariant empty;
ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty, empty));
```

**Availability**: Windows.
### Type Assertions

You can call the function

```c++
::testing::StaticAssertTypeEq<T1, T2>();
```

to assert that types `T1` and `T2` are the same. The function does nothing if
the assertion is satisfied. If the types are different, the function call will
fail to compile, and the compiler error message will likely (depending on the
compiler) show you the actual values of `T1` and `T2`. This is mainly useful
inside template code.
**Caveat**: When used inside a member function of a class template or a function
template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is
instantiated. For example, given:

```c++
template <typename T> class Foo {
 public:
  void Bar() { ::testing::StaticAssertTypeEq<int, T>(); }
};
```

the code:

```c++
void Test1() { Foo<bool> foo; }
```

will not generate a compiler error, as `Foo<bool>::Bar()` is never actually
instantiated. Instead, you need:

```c++
void Test2() { Foo<bool> foo; foo.Bar(); }
```

to cause a compiler error.
**Availability**: Linux, Windows, Mac.
### Assertion Placement

You can use assertions in any C++ function. In particular, it doesn't have to be
a method of the test fixture class. The one constraint is that assertions that
generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in
void-returning functions. This is a consequence of Google's not using
exceptions. If you place one in a non-void function, you'll get a confusing
compile error like `"error: void value not ignored as it ought to be"`,
`"cannot initialize return object of type 'bool' with an rvalue of type
'void'"`, or `"error: no viable conversion from 'void' to 'string'"`.
If you need to use fatal assertions in a function that returns non-void, one
option is to make the function return the value in an out parameter instead. For
example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
need to make sure that `*result` contains some sensible value even when the
function returns prematurely. As the function now returns `void`, you can use
any assertion inside of it.
If changing the function's type is not an option, you should just use assertions
that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.
NOTE: Constructors and destructors are not considered void-returning functions,
according to the C++ language specification, so you may not use fatal assertions
in them; you'll get a compilation error if you try. A simple workaround is to
transfer the entire body of the constructor or destructor to a private
void-returning method. However, you should be aware that a fatal assertion
failure in a constructor does not terminate the current test, as your intuition
might suggest; it merely returns from the constructor early, possibly leaving
your object in a partially-constructed state. Likewise, a fatal assertion
failure in a destructor may leave your object in a partially-destructed state.
Use assertions carefully in these situations!
## Teaching googletest How to Print Your Values

When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument
values to help you debug. It does this using a user-extensible value printer.

This printer knows how to print built-in C++ types, native arrays, STL
containers, and any type that supports the `<<` operator. For other types, it
prints the raw bytes in the value and hopes that you the user can figure it out.

As mentioned earlier, the printer is *extensible*. That means you can teach it
to do a better job at printing your particular type than to dump the bytes. To
do that, define `<<` for your type:
```c++
// Streams are allowed only for logging. Don't include this for
// any other purpose.
#include <ostream>

namespace foo {

class Bar {  // We want googletest to be able to print instances of this.
  ...
  // Create a free inline friend function.
  friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
    return os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that the
// << operator is defined in the SAME namespace that defines Bar. C++'s look-up
// rules rely on that.
std::ostream& operator<<(std::ostream& os, const Bar& bar) {
  return os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```
Sometimes, this might not be an option: your team may consider it bad style to
have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that
doesn't do what you want (and you cannot change it). If so, you can instead
define a `PrintTo()` function like this:
```c++
// Streams are allowed only for logging. Don't include this for
// any other purpose.
#include <ostream>

namespace foo {

class Bar {
  ...
  friend void PrintTo(const Bar& bar, std::ostream* os) {
    *os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that PrintTo()
// is defined in the SAME namespace that defines Bar. C++'s look-up rules rely
// on that.
void PrintTo(const Bar& bar, std::ostream* os) {
  *os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```
If you have defined both `<<` and `PrintTo()`, the latter will be used as far as
googletest is concerned. This allows you to customize how the value appears in
googletest's output without affecting code that relies on the behavior of its
`<<` operator.
If you want to print a value `x` using googletest's value printer yourself, just
call `::testing::PrintToString(x)`, which returns an `std::string`:

```c++
vector<pair<Bar, int> > bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << ::testing::PrintToString(bar_ints);
```
## Death Tests

In many applications, there are assertions that can cause application failure if
a condition is not met. These sanity checks, which ensure that the program is in
a known good state, are there to fail at the earliest possible time after some
program state is corrupted. If the assertion checks the wrong condition, then
the program may proceed in an erroneous state, which could lead to memory
corruption, security holes, or worse. Hence it is vitally important to test that
such assertion statements work as expected.

Since these precondition checks cause the process to die, we call such tests
_death tests_. More generally, any test that checks that a program terminates
(except by throwing an exception) in an expected fashion is also a death test.
Note that if a piece of code throws an exception, we don't consider it "death"
for the purpose of death tests, as the caller of the code could catch the
exception and avoid the crash. If you want to verify exceptions thrown by your
code, see [Exception Assertions](#exception-assertions).

If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see
[Catching Failures](#catching-failures).
### How to Write a Death Test

googletest has the following macros to support death tests:

Fatal assertion                                | Nonfatal assertion                             | Verifies
---------------------------------------------- | ---------------------------------------------- | --------
`ASSERT_DEATH(statement, regex);`              | `EXPECT_DEATH(statement, regex);`              | `statement` crashes with the given error
`ASSERT_DEATH_IF_SUPPORTED(statement, regex);` | `EXPECT_DEATH_IF_SUPPORTED(statement, regex);` | if death tests are supported, verifies that `statement` crashes with the given error; otherwise verifies nothing
`ASSERT_EXIT(statement, predicate, regex);`    | `EXPECT_EXIT(statement, predicate, regex);`    | `statement` exits with the given error, and its exit code matches `predicate`
where `statement` is a statement that is expected to cause the process to die,
`predicate` is a function or function object that evaluates an integer exit
status, and `regex` is a (Perl) regular expression that the stderr output of
`statement` is expected to match. Note that `statement` can be *any valid
statement* (including a *compound statement*) and doesn't have to be an
expression.

As usual, the `ASSERT` variants abort the current test function, while the
`EXPECT` variants do not.
> NOTE: We use the word "crash" here to mean that the process terminates with a
> *non-zero* exit status code. There are two possibilities: either the process
> has called `exit()` or `_exit()` with a non-zero value, or it may be killed by
> a signal.
>
> This means that if `*statement*` terminates the process with a 0 exit code, it
> is *not* considered a crash by `EXPECT_DEATH`. Use `EXPECT_EXIT` instead if
> this is the case, or if you want to restrict the exit code more precisely.
A predicate here must accept an `int` and return a `bool`. The death test
succeeds only if the predicate returns `true`. googletest defines a few
predicates that handle the most common cases:

```c++
::testing::ExitedWithCode(exit_code)
```

This expression is `true` if the program exited normally with the given exit
code.

```c++
::testing::KilledBySignal(signal_number)  // Not available on Windows.
```

This expression is `true` if the program was killed by the given signal.

The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate
that verifies the process's exit code is non-zero.
Note that a death test only cares about three things:

1.  does `statement` abort or exit the process?
2.  (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
    satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`)
    is the exit status non-zero? And
3.  does the stderr output match `regex`?

In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it
will **not** cause the death test to fail, as googletest assertions don't abort
the current process.
To write a death test, simply use one of the above macros inside your test
function. For example,

```c++
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({
    int n = 5;
    Foo(&n);
  }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillMyself) {
  EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}
```

verifies that:

*   calling `Foo(&n)` with `n` set to 5 causes the process to die with the given
    error message,
*   calling `NormalExit()` causes the process to print `"Success"` to stderr and
    exit with exit code 0, and
*   calling `KillMyself()` kills the process with signal `SIGKILL`.
The test function body may contain other assertions and statements as well, if
necessary.
751 ### Death Test Naming
753 IMPORTANT: We strongly recommend you to follow the convention of naming your
754 **test case** (not test) `*DeathTest` when it contains a death test, as
755 demonstrated in the above example. The [Death Tests And
756 Threads](#death-tests-and-threads) section below explains why.
If a test fixture class is shared by normal tests and death tests, you can use
`using` or `typedef` to introduce an alias for the fixture class and avoid
duplicating its code:

```c++
class FooTest : public ::testing::Test { ... };

using FooDeathTest = FooTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Cygwin, and Mac.
### Regular Expression Syntax

On POSIX systems (e.g. Linux, Cygwin, and Mac), googletest uses the
[POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04)
syntax. To learn about this syntax, you may want to read this
[Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).
On Windows, googletest uses its own simple regular expression implementation. It
lacks many features. For example, we don't support union (`"x|y"`), grouping
(`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among
others. Below is what we do support (`A` denotes a literal character, period
(`.`), or a single `\\ ` escape sequence; `x` and `y` denote regular
expressions.):

Expression | Meaning
---------- | --------------------------------------------------------------
`c`        | matches any literal character `c`
`\\d`      | matches any decimal digit
`\\D`      | matches any character that's not a decimal digit
`\\f`      | matches `\f`
`\\n`      | matches `\n`
`\\r`      | matches `\r`
`\\s`      | matches any ASCII whitespace, including `\n`
`\\S`      | matches any character that's not a whitespace
`\\t`      | matches `\t`
`\\v`      | matches `\v`
`\\w`      | matches any letter, `_`, or decimal digit
`\\W`      | matches any character that `\\w` doesn't match
`\\c`      | matches any literal character `c`, which must be a punctuation
`.`        | matches any single character except `\n`
`A?`       | matches 0 or 1 occurrences of `A`
`A*`       | matches 0 or many occurrences of `A`
`A+`       | matches 1 or many occurrences of `A`
`^`        | matches the beginning of a string (not that of each line)
`$`        | matches the end of a string (not that of each line)
`xy`       | matches `x` followed by `y`
To help you determine which capability is available on your system, googletest
defines macros to govern which regular expression it is using. The macros are
`GTEST_USES_PCRE=1`, `GTEST_USES_SIMPLE_RE=1`, or `GTEST_USES_POSIX_RE=1`. If
you want your death tests to work in all cases, you can either `#if` on these
macros or use only the more limited syntax.
### How It Works

Under the hood, `ASSERT_EXIT()` spawns a new process and executes the death test
statement in that process. The details of how precisely that happens depend on
the platform and the variable `::testing::GTEST_FLAG(death_test_style)` (which is
initialized from the command-line flag `--gtest_death_test_style`).
*   On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the
    child, after which:
    *   If the variable's value is `"fast"`, the death test statement is
        immediately executed.
    *   If the variable's value is `"threadsafe"`, the child process re-executes
        the unit test binary just as it was originally invoked, but with some
        extra flags to cause just the single death test under consideration to
        be run.
*   On Windows, the child is spawned using the `CreateProcess()` API, and
    re-executes the binary to cause just the single death test under
    consideration to be run - much like the `threadsafe` mode on POSIX.
Other values for the variable are illegal and will cause the death test to fail.
Currently, the flag's default value is `"fast"`. However, we reserve the right
to change it in the future. Therefore, your tests should not depend on this. In
either case, the parent process waits for the child process to complete, and
checks that:

1.  the child's exit status satisfies the predicate, and
2.  the child's stderr matches the regular expression.
If the death test statement runs to completion without dying, the child process
will nonetheless terminate, and the assertion fails.
### Death Tests And Threads

The reason for the two death test styles has to do with thread safety. Due to
well-known problems with forking in the presence of threads, death tests should
be run in a single-threaded context. Sometimes, however, it isn't feasible to
arrange that kind of environment. For example, statically-initialized modules
may start threads before `main()` is ever reached. Once threads have been
created, it may be difficult or impossible to clean them up.
googletest has three features intended to raise awareness of threading issues.

1. A warning is emitted if multiple threads are running when a death test is
   encountered.
2. Test cases with a name ending in "DeathTest" are run before all other tests.
3. It uses `clone()` instead of `fork()` to spawn the child process on Linux
   (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely
   to cause the child to hang when the parent process has multiple threads.
It's perfectly fine to create threads inside a death test statement; they are
executed in a separate process and cannot affect the parent.
### Death Test Styles

The "threadsafe" death test style was introduced in order to help mitigate the
risks of testing in a possibly multithreaded environment. It trades increased
test execution time (potentially dramatically so) for improved thread safety.
The automated testing framework does not set the style flag. You can choose a
particular style of death tests by setting the flag programmatically:

```c++
testing::FLAGS_gtest_death_test_style = "threadsafe";
```

You can do this in `main()` to set the style for all death tests in the binary,
or in individual tests. Recall that flags are saved before running each test and
restored afterwards, so you need not do that yourself. For example:
```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  ::testing::FLAGS_gtest_death_test_style = "fast";
  return RUN_ALL_TESTS();
}

TEST(MyDeathTest, TestOne) {
  ::testing::FLAGS_gtest_death_test_style = "threadsafe";
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```
### Caveats

The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If
it leaves the current function via a `return` statement or by throwing an
exception, the death test is considered to have failed. Some googletest macros
may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid
them in `statement`.
Since `statement` runs in the child process, any in-memory side effect (e.g.
modifying a variable, releasing memory, etc.) it causes will *not* be observable
in the parent process. In particular, if you release memory in a death test,
your program will fail the heap check as the parent process will never see the
memory reclaimed. To solve this problem, you can

1. try not to free memory in a death test;
2. free the memory again in the parent process; or
3. do not use the heap checker in your program.
Due to an implementation detail, you cannot place multiple death test assertions
on the same line; otherwise, compilation will fail with an unobvious error
message.

Despite the improved thread safety afforded by the "threadsafe" style of death
test, thread problems such as deadlock are still possible in the presence of
handlers registered with `pthread_atfork(3)`.
## Using Assertions in Sub-routines

### Adding Traces to Assertions

If a test sub-routine is called from several places, when an assertion inside it
fails, it can be hard to tell which invocation of the sub-routine the failure is
from. You can alleviate this problem using extra logging or custom failure
messages, but that usually clutters up your tests. A better solution is to use
the `SCOPED_TRACE` macro or the `ScopedTrace` utility:
```c++
SCOPED_TRACE(message);
ScopedTrace trace("file_path", line_number, message);
```
where `message` can be anything streamable to `std::ostream`. The `SCOPED_TRACE`
macro will cause the current file name, line number, and the given message to be
appended to every failure message. `ScopedTrace` accepts explicit file name and
line number as arguments, which is useful for writing test helpers. The effect
will be undone when control leaves the current lexical scope.

For example,
```c++
10: void Sub1(int n) {
11:   EXPECT_EQ(1, Bar(n));
12:   EXPECT_EQ(2, Bar(n + 1));
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }
```
could result in messages like these:

```
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
  Actual: 2
   Trace:
path/to/foo_test.cc:17: A

path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
  Actual: 3
   Trace:
path/to/foo_test.cc:17: A
```
Without the trace, it would've been difficult to know which invocation of
`Sub1()` the two failures come from respectively. (You could add an extra
message to each assertion in `Sub1()` to indicate the value of `n`, but that's
tedious.)
Some tips on using `SCOPED_TRACE`:

1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the
   beginning of a sub-routine, instead of at each call site.
2. When calling sub-routines inside a loop, make the loop iterator part of the
   message in `SCOPED_TRACE` such that you can know which iteration the failure
   is from.
3. Sometimes the line number of the trace point is enough for identifying the
   particular invocation of a sub-routine. In this case, you don't have to
   choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
4. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer
   scope. In this case, all active trace points will be included in the failure
   messages, in reverse order of their encounter.
5. The trace dump is clickable in Emacs - hit `return` on a line number and
   you'll be taken to that line in the source file!
**Availability**: Linux, Windows, Mac.
### Propagating Fatal Failures

A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
when they fail they only abort the _current function_, not the entire test. For
example, the following test will segfault:
```c++
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);

  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();  // The intended behavior is for the fatal failure
                 // in Subroutine() to abort the entire test.

  // The actual behavior: the function goes on after Subroutine() returns.
  int* p = NULL;
  *p = 3;  // Segfault!
}
```
To alleviate this, googletest provides three different solutions. You could use
either exceptions, the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions or the
`HasFatalFailure()` function. They are described in the following subsections.
#### Asserting on Subroutines with an exception

The following code can turn ASSERT-failure into an exception:
```c++
class ThrowListener : public testing::EmptyTestEventListener {
  void OnTestPartResult(const testing::TestPartResult& result) override {
    if (result.type() == testing::TestPartResult::kFatalFailure) {
      throw testing::AssertionException(result);
    }
  }
};

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener);
  return RUN_ALL_TESTS();
}
```
This listener should be added after other listeners if you have any; otherwise
they won't see failed `OnTestPartResult`.
#### Asserting on Subroutines

As shown above, if your test calls a subroutine that has an `ASSERT_*` failure
in it, the test will continue after the subroutine returns. This may not be what
you want.

Often people want fatal failures to propagate like exceptions. For that
googletest offers the following macros:
Fatal assertion | Nonfatal assertion | Verifies
------------------------------------- | ------------------------------------- | --------
`ASSERT_NO_FATAL_FAILURE(statement);` | `EXPECT_NO_FATAL_FAILURE(statement);` | `statement` doesn't generate any new fatal failures in the current thread.

Only failures in the thread that executes the assertion are checked to determine
the result of this type of assertion. If `statement` creates new threads,
failures in these threads are ignored.
Examples:

```c++
ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
```
**Availability**: Linux, Windows, Mac. Assertions from multiple threads are
currently not supported on Windows.
#### Checking for Failures in the Current Test

`HasFatalFailure()` in the `::testing::Test` class returns `true` if an
assertion in the current test has suffered a fatal failure. This allows
functions to catch fatal failures in a sub-routine and return early.
```c++
static bool HasFatalFailure();
```
The typical usage, which basically simulates the behavior of a thrown exception,
is:
```c++
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure()) return;

  // The following won't be executed.
  ...
}
```
If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
fixture, you must add the `::testing::Test::` prefix, as in:

```c++
if (::testing::Test::HasFatalFailure()) return;
```
Similarly, `HasNonfatalFailure()` returns `true` if the current test has at
least one non-fatal failure, and `HasFailure()` returns `true` if the current
test has at least one failure of either kind.

**Availability**: Linux, Windows, Mac.
## Logging Additional Information

In your test code, you can call `RecordProperty("key", value)` to log additional
information, where `value` can be either a string or an `int`. The *last* value
recorded for a key will be emitted to the
[XML output](#generating-an-xml-report) if you specify one. For example, the
test
```c++
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
  RecordProperty("MaximumWidgets", ComputeMaxUsage());
  RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```
will output XML like this:

```xml
...
  <testcase name="MinAndMaxWidgets" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" />
...
```
> NOTE:
>
> * `RecordProperty()` is a static member of the `Test` class. Therefore it
>   needs to be prefixed with `::testing::Test::` if used outside of the
>   `TEST` body and the test fixture class.
> * `key` must be a valid XML attribute name, and cannot conflict with the
>   ones already used by googletest (`name`, `status`, `time`, `classname`,
>   `type_param`, and `value_param`).
> * Calling `RecordProperty()` outside of the lifespan of a test is allowed.
>   If it's called outside of a test but between a test case's
>   `SetUpTestCase()` and `TearDownTestCase()` methods, it will be attributed
>   to the XML element for the test case. If it's called outside of all test
>   cases (e.g. in a test environment), it will be attributed to the top-level
>   XML element.

**Availability**: Linux, Windows, Mac.
## Sharing Resources Between Tests in the Same Test Case

googletest creates a new test fixture object for each test in order to make
tests independent and easier to debug. However, sometimes tests use resources
that are expensive to set up, making the one-copy-per-test model prohibitively
expensive.

If the tests don't change the resource, there's no harm in their sharing a
single resource copy. So, in addition to per-test set-up/tear-down, googletest
also supports per-test-case set-up/tear-down. To use it:
1. In your test fixture class (say `FooTest`), declare as `static` some member
   variables to hold the shared resources.
1. Outside your test fixture class (typically just below it), define those
   member variables, optionally giving them initial values.
1. In the same test fixture class, define a `static void SetUpTestCase()`
   function (remember not to spell it as **`SetupTestCase`** with a small
   `u`!) to set up the shared resources and a `static void TearDownTestCase()`
   function to tear them down.
That's it! googletest automatically calls `SetUpTestCase()` before running the
*first test* in the `FooTest` test case (i.e. before creating the first
`FooTest` object), and calls `TearDownTestCase()` after running the *last test*
in it (i.e. after deleting the last `FooTest` object). In between, the tests can
use the shared resources.

Remember that the test order is undefined, so your code can't depend on a test
preceding or following another. Also, the tests must either not modify the state
of any shared resource, or, if they do modify the state, they must restore the
state to its original value before passing control to the next test.
Here's an example of per-test-case set-up and tear-down:

```c++
class FooTest : public ::testing::Test {
 protected:
  // Per-test-case set-up.
  // Called before the first test in this test case.
  // Can be omitted if not needed.
  static void SetUpTestCase() {
    shared_resource_ = new ...;
  }

  // Per-test-case tear-down.
  // Called after the last test in this test case.
  // Can be omitted if not needed.
  static void TearDownTestCase() {
    delete shared_resource_;
    shared_resource_ = NULL;
  }

  // You can define per-test set-up logic as usual.
  virtual void SetUp() { ... }

  // You can define per-test tear-down logic as usual.
  virtual void TearDown() { ... }

  // Some expensive resource shared by all tests.
  static T* shared_resource_;
};

T* FooTest::shared_resource_ = NULL;

TEST_F(FooTest, Test1) {
  ... you can refer to shared_resource_ here ...
}

TEST_F(FooTest, Test2) {
  ... you can refer to shared_resource_ here ...
}
```
NOTE: Though the above code declares `SetUpTestCase()` protected, it may
sometimes be necessary to declare it public, such as when using it with
`TEST_P`.

**Availability**: Linux, Windows, Mac.
## Global Set-Up and Tear-Down

Just as you can do set-up and tear-down at the test level and the test case
level, you can also do it at the test program level. Here's how.

First, you subclass the `::testing::Environment` class to define a test
environment, which knows how to set-up and tear-down:
```c++
class Environment {
 public:
  virtual ~Environment() {}

  // Override this to define how to set up the environment.
  virtual void SetUp() {}

  // Override this to define how to tear down the environment.
  virtual void TearDown() {}
};
```
Then, you register an instance of your environment class with googletest by
calling the `::testing::AddGlobalTestEnvironment()` function:

```c++
Environment* AddGlobalTestEnvironment(Environment* env);
```
Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of
the environment object, then runs the tests if there were no fatal failures, and
finally calls `TearDown()` of the environment object.
It's OK to register multiple environment objects. In this case, their `SetUp()`
will be called in the order they are registered, and their `TearDown()` will be
called in the reverse order.

Note that googletest takes ownership of the registered environment objects.
Therefore **do not delete them** by yourself.
You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called,
probably in `main()`. If you use `gtest_main`, you need to call this before
`main()` starts for it to take effect. One way to do this is to define a global
variable like this:

```c++
::testing::Environment* const foo_env =
    ::testing::AddGlobalTestEnvironment(new FooEnvironment);
```
However, we strongly recommend that you write your own `main()` and call
`AddGlobalTestEnvironment()` there, as relying on initialization of global
variables makes the code harder to read and may cause problems when you register
multiple environments from different translation units and the environments have
dependencies among them (remember that the compiler doesn't guarantee the order
in which global variables from different translation units are initialized).
## Value-Parameterized Tests

*Value-parameterized tests* allow you to test your code with different
parameters without writing multiple copies of the same test. This is useful in a
number of situations, for example:

* You have a piece of code whose behavior is affected by one or more
  command-line flags. You want to make sure your code performs correctly for
  various values of those flags.
* You want to test different implementations of an OO interface.
* You want to test your code over various inputs (a.k.a. data-driven testing).
  This feature is easy to abuse, so please exercise your good sense when doing
  it!
### How to Write Value-Parameterized Tests

To write value-parameterized tests, first you should define a fixture class. It
must be derived from both `::testing::Test` and
`::testing::WithParamInterface<T>` (the latter is a pure interface), where `T`
is the type of your parameter values. For convenience, you can just derive the
fixture class from `::testing::TestWithParam<T>`, which itself is derived from
both `::testing::Test` and `::testing::WithParamInterface<T>`. `T` can be any
copyable type. If it's a raw pointer, you are responsible for managing the
lifespan of the pointed values.

NOTE: If your test fixture defines `SetUpTestCase()` or `TearDownTestCase()`
they must be declared **public** rather than **protected** in order to use
`TEST_P`.
```c++
class FooTest : public ::testing::TestWithParam<const char*> {
  // You can implement all the usual fixture class members here.
  // To access the test parameter, call GetParam() from class
  // TestWithParam<T>.
};

// Or, when you want to add parameters to a pre-existing fixture class:
class BaseTest : public ::testing::Test {
  ...
};
class BarTest : public BaseTest,
                public ::testing::WithParamInterface<const char*> {
  ...
};
```
Then, use the `TEST_P` macro to define as many test patterns using this fixture
as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you
prefer to think.
```c++
TEST_P(FooTest, DoesBlah) {
  // Inside a test, access the test parameter with the GetParam() method
  // of the TestWithParam<T> class:
  EXPECT_TRUE(foo.Blah(GetParam()));
  ...
}

TEST_P(FooTest, HasBlahBlah) {
  ...
}
```
Finally, you can use `INSTANTIATE_TEST_CASE_P` to instantiate the test case with
any set of parameters you want. googletest defines a number of functions for
generating test parameters. They return what we call (surprise!) *parameter
generators*. Here is a summary of them, which are all in the `testing`
namespace:
| Parameter Generator | Behavior |
| ---------------------------- | ------------------------------------------- |
| `Range(begin, end [, step])` | Yields values `{begin, begin+step, begin+step+step, ...}`. The values do not include `end`. `step` defaults to 1. |
| `Values(v1, v2, ..., vN)` | Yields values `{v1, v2, ..., vN}`. |
| `ValuesIn(container)` and `ValuesIn(begin,end)` | Yields values from a C-style array, an STL-style container, or an iterator range `[begin, end)`. |
| `Bool()` | Yields sequence `{false, true}`. |
| `Combine(g1, g2, ..., gN)` | Yields all combinations (Cartesian product) as `std::tuple`s of the values generated by the `N` generators. |
For more details, see the comments at the definitions of these functions.
The following statement will instantiate tests from the `FooTest` test case each
with parameter values `"meeny"`, `"miny"`, and `"moe"`.

```c++
INSTANTIATE_TEST_CASE_P(InstantiationName,
                        FooTest,
                        ::testing::Values("meeny", "miny", "moe"));
```
NOTE: The code above must be placed at global or namespace scope, not at
function scope.

NOTE: Don't forget this step! If you do, your test will silently pass, but none
of its cases will ever run!
To distinguish different instances of the pattern (yes, you can instantiate it
more than once), the first argument to `INSTANTIATE_TEST_CASE_P` is a prefix
that will be added to the actual test case name. Remember to pick unique
prefixes for different instantiations. The tests from the instantiation above
will have these names:

* `InstantiationName/FooTest.DoesBlah/0` for `"meeny"`
* `InstantiationName/FooTest.DoesBlah/1` for `"miny"`
* `InstantiationName/FooTest.DoesBlah/2` for `"moe"`
* `InstantiationName/FooTest.HasBlahBlah/0` for `"meeny"`
* `InstantiationName/FooTest.HasBlahBlah/1` for `"miny"`
* `InstantiationName/FooTest.HasBlahBlah/2` for `"moe"`

You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).
This statement will instantiate all tests from `FooTest` again, each with
parameter values `"cat"` and `"dog"`:

```c++
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest,
                        ::testing::ValuesIn(pets));
```
The tests from the instantiation above will have these names:

* `AnotherInstantiationName/FooTest.DoesBlah/0` for `"cat"`
* `AnotherInstantiationName/FooTest.DoesBlah/1` for `"dog"`
* `AnotherInstantiationName/FooTest.HasBlahBlah/0` for `"cat"`
* `AnotherInstantiationName/FooTest.HasBlahBlah/1` for `"dog"`

Please note that `INSTANTIATE_TEST_CASE_P` will instantiate *all* tests in the
given test case, whether their definitions come before or *after* the
`INSTANTIATE_TEST_CASE_P` statement.

You can see `sample7_unittest.cc` and `sample8_unittest.cc` for more examples.

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Mac
### Creating Value-Parameterized Abstract Tests

In the above, we define and instantiate `FooTest` in the *same* source file.
Sometimes you may want to define value-parameterized tests in a library and let
other people instantiate them later. This pattern is known as *abstract tests*.
As an example of its application, when you are designing an interface you can
write a standard suite of abstract tests (perhaps using a factory function as
the test parameter) that all implementations of the interface are expected to
pass. When someone implements the interface, they can instantiate your suite to
get all the interface-conformance tests for free.

To define abstract tests, you should organize your code like this:

1. Put the definition of the parameterized test fixture class (e.g. `FooTest`)
   in a header file, say `foo_param_test.h`. Think of this as *declaring* your
   abstract tests.
1. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes
   `foo_param_test.h`. Think of this as *implementing* your abstract tests.
Once they are defined, you can instantiate them by including `foo_param_test.h`,
invoking `INSTANTIATE_TEST_CASE_P()`, and depending on the library target that
contains `foo_param_test.cc`. You can instantiate the same abstract test case
multiple times, possibly in different source files.
### Specifying Names for Value-Parameterized Test Parameters

The optional last argument to `INSTANTIATE_TEST_CASE_P()` allows the user to
specify a function or functor that generates custom test name suffixes based on
the test parameters. The function should accept one argument of type
`testing::TestParamInfo<class ParamType>`, and return `std::string`.

`testing::PrintToStringParamName` is a builtin test suffix generator that
returns the value of `testing::PrintToString(GetParam())`. It does not work for
`std::string` or C strings.

NOTE: test names must be non-empty, unique, and may only contain ASCII
alphanumeric characters. In particular, they
[should not contain underscores](faq.md#no-underscores).
```c++
class MyTestCase : public testing::TestWithParam<int> {};

TEST_P(MyTestCase, MyTest)
{
  std::cout << "Example Test Param: " << GetParam() << std::endl;
}

INSTANTIATE_TEST_CASE_P(MyGroup, MyTestCase, testing::Range(0, 10),
                        testing::PrintToStringParamName());
```
## Typed Tests

Suppose you have multiple implementations of the same interface and want to make
sure that all of them satisfy some common requirements. Or, you may have defined
several types that are supposed to conform to the same "concept" and you want to
verify it. In both cases, you want the same test logic repeated for different
types.
While you can write one `TEST` or `TEST_F` for each type you want to test (and
you may even factor the test logic into a function template that you invoke from
the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n`
types, you'll end up writing `m*n` `TEST`s.

*Typed tests* allow you to repeat the same test logic over a list of types. You
only need to write the test logic once, although you must know the type list
when writing typed tests. Here's how you do it:
First, define a fixture class template. It should be parameterized by a type.
Remember to derive it from `::testing::Test`:

```c++
template <typename T>
class FooTest : public ::testing::Test {
 public:
  ...
  typedef std::list<T> List;
  static T shared_;
  T value_;
};
```
Next, associate a list of types with the test case, which will be repeated for
each type in the list:

```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_CASE(FooTest, MyTypes);
```
The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_CASE`
macro to parse correctly. Otherwise the compiler will think that each comma in
the type list introduces a new macro argument.
Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this
test case. You can repeat this as many times as you want:
```c++
TYPED_TEST(FooTest, DoesBlah) {
  // Inside a test, refer to the special name TypeParam to get the type
  // parameter. Since we are inside a derived class template, C++ requires
  // us to visit the members of FooTest via 'this'.
  TypeParam n = this->value_;

  // To visit static members of the fixture, add the 'TestFixture::'
  // prefix.
  n += TestFixture::shared_;

  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
  // prefix. The 'typename' is required to satisfy the compiler.
  typename TestFixture::List values;
  values.push_back(n);
  ...
}

TYPED_TEST(FooTest, HasPropertyA) { ... }
```
You can see `sample6_unittest.cc` for a complete example.

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Mac
## Type-Parameterized Tests

*Type-parameterized tests* are like typed tests, except that they don't require
you to know the list of types ahead of time. Instead, you can define the test
logic first and instantiate it with different type lists later. You can even
instantiate it more than once in the same program.

If you are designing an interface or concept, you can define a suite of
type-parameterized tests to verify properties that any valid implementation of
the interface/concept should have. Then, the author of each implementation can
just instantiate the test suite with their type to verify that it conforms to
the requirements, without having to write similar tests repeatedly. Here's an
example:
First, define a fixture class template, as we did with typed tests:

```c++
template <typename T>
class FooTest : public ::testing::Test {
  ...
};
```
Next, declare that you will define a type-parameterized test case:

```c++
TYPED_TEST_CASE_P(FooTest);
```
Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat
this as many times as you want:

```c++
TYPED_TEST_P(FooTest, DoesBlah) {
  // Inside a test, refer to TypeParam to get the type parameter.
  TypeParam n = 0;
  ...
}

TYPED_TEST_P(FooTest, HasPropertyA) { ... }
```
Now the tricky part: you need to register all test patterns using the
`REGISTER_TYPED_TEST_CASE_P` macro before you can instantiate them. The first
argument of the macro is the test case name; the rest are the names of the tests
in this test case:

```c++
REGISTER_TYPED_TEST_CASE_P(FooTest,
                           DoesBlah, HasPropertyA);
```
Finally, you are free to instantiate the pattern with the types you want. If you
put the above code in a header file, you can `#include` it in multiple C++
source files and instantiate it multiple times.

```c++
typedef ::testing::Types<char, int, unsigned int> MyTypes;
INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, MyTypes);
```
To distinguish different instances of the pattern, the first argument to the
`INSTANTIATE_TYPED_TEST_CASE_P` macro is a prefix that will be added to the
actual test case name. Remember to pick unique prefixes for different instances.

In the special case where the type list contains only one type, you can write
that type directly without `::testing::Types<...>`, like this:

```c++
INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, int);
```
You can see `sample6_unittest.cc` for a complete example.

**Availability**: Linux, Windows (requires MSVC 8.0 or above), Mac
## Testing Private Code

If you change your software's internal implementation, your tests should not
break as long as the change is not observable by users. Therefore, **per the
black-box testing principle, most of the time you should test your code through
its public interfaces.**

**If you still find yourself needing to test internal implementation code,
consider if there's a better design.** The desire to test internal
implementation is often a sign that the class is doing too much. Consider
extracting an implementation class, and testing it. Then use that implementation
class in the original class.
If you absolutely have to test non-public interface code though, you can. There
are two cases to consider:

* Static functions (*not* the same as static member functions!) or unnamed
  namespaces, and
* Private or protected class members

To test them, we use the following special techniques:
* Both static functions and definitions/declarations in an unnamed namespace
  are only visible within the same translation unit. To test them, you can
  `#include` the entire `.cc` file being tested in your `*_test.cc` file.
  (including `.cc` files is not a good way to reuse code - you should not do
  this in production code!)

  However, a better approach is to move the private code into the
  `foo::internal` namespace, where `foo` is the namespace your project
  normally uses, and put the private declarations in a `*-internal.h` file.
  Your production `.cc` files and your tests are allowed to include this
  internal header, but your clients are not. This way, you can fully test your
  internal implementation without leaking it to your clients.
* Private class members are only accessible from within the class or by
  friends. To access a class' private members, you can declare your test
  fixture as a friend to the class and define accessors in your fixture. Tests
  using the fixture can then access the private members of your production
  class via the accessors in the fixture. Note that even though your fixture
  is a friend to your production class, your tests are not automatically
  friends to it, as they are technically defined in sub-classes of the
  fixture.

  Another way to test private members is to refactor them into an
  implementation class, which is then declared in a `*-internal.h` file. Your
  clients aren't allowed to include this header but your tests can. Such is
  called the
  [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/)
  (Private Implementation) idiom.
  Or, you can declare an individual test as a friend of your class by adding
  this line in the class body:

  ```c++
  FRIEND_TEST(TestCaseName, TestName);
  ```

  For example,

  ```c++
  // foo.h

  #include "gtest/gtest_prod.h"

  class Foo {
    ...
   private:
    FRIEND_TEST(FooTest, BarReturnsZeroOnNull);

    int Bar(void* x);
  };

  // foo_test.cc
  ...
  TEST(FooTest, BarReturnsZeroOnNull) {
    Foo foo;
    EXPECT_EQ(0, foo.Bar(NULL));  // Uses Foo's private member Bar().
  }
  ```
Pay special attention when your class is defined in a namespace, as you
should define your test fixtures and tests in the same namespace if you want
them to be friends of your class. For example, if the code to be tested
looks like:
```c++
namespace my_namespace {

class Foo {
  friend class FooTest;
  FRIEND_TEST(FooTest, Bar);
  FRIEND_TEST(FooTest, Baz);
  ... definition of the class Foo ...
};

}  // namespace my_namespace
```
Your test code should be something like:

```c++
namespace my_namespace {

class FooTest : public ::testing::Test {
 protected:
  ...
};

TEST_F(FooTest, Bar) { ... }
TEST_F(FooTest, Baz) { ... }

}  // namespace my_namespace
```
## "Catching" Failures

If you are building a testing utility on top of googletest, you'll want to test
your utility. What framework would you use to test it? googletest, of course.

The challenge is to verify that your testing utility reports failures correctly.
In frameworks that report a failure by throwing an exception, you could catch
the exception and assert on it. But googletest doesn't use exceptions, so how do
we test that a piece of code generates an expected failure?
`"gtest/gtest-spi.h"` contains some constructs to do this. After #including this
header, you can use

```c++
EXPECT_FATAL_FAILURE(statement, substring);
```
to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
current thread whose message contains the given `substring`, or use

```c++
EXPECT_NONFATAL_FAILURE(statement, substring);
```

if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.
Only failures in the current thread are checked to determine the result of this
type of expectation. If `statement` creates new threads, failures in these
threads are ignored as well. If you want to catch failures in other threads,
use one of the following macros instead:

```c++
EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```
NOTE: Assertions from multiple threads are currently not supported on Windows.
For technical reasons, there are some caveats:

1.  You cannot stream a failure message to either macro.

2.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
    local non-static variables or non-static members of `this` object.

3.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a
    value.
## Getting the Current Test's Name

Sometimes a function may need to know the name of the currently running test.
For example, you may be using the `SetUp()` method of your test fixture to set
the golden file name based on which test is running. The `::testing::TestInfo`
class has this information:

```c++
namespace testing {

class TestInfo {
 public:
  // Returns the test case name and the test name, respectively.
  //
  // Do NOT delete or free the return value - it's managed by the
  // TestInfo class.
  const char* test_case_name() const;
  const char* name() const;
};

}  // namespace testing
```
To obtain a `TestInfo` object for the currently running test, call
`current_test_info()` on the `UnitTest` singleton object:

```c++
// Gets information about the currently running test.
// Do NOT delete the returned object - it's managed by the UnitTest class.
const ::testing::TestInfo* const test_info =
    ::testing::UnitTest::GetInstance()->current_test_info();

printf("We are in test %s of test case %s.\n",
       test_info->name(),
       test_info->test_case_name());
```
`current_test_info()` returns a null pointer if no test is running. In
particular, you cannot find the test case name in `TestCaseSetUp()`,
`TestCaseTearDown()` (where you know the test case name implicitly), or
functions called from them.

**Availability**: Linux, Windows, Mac.
## Extending googletest by Handling Test Events

googletest provides an **event listener API** to let you receive notifications
about the progress of a test program and test failures. The events you can
listen to include the start and end of the test program, a test case, or a test
method, among others. You may use this API to augment or replace the standard
console output, replace the XML output, or provide a completely different form
of output, such as a GUI or a database. You can also use test events as
checkpoints to implement a resource leak checker, for example.

**Availability**: Linux, Windows, Mac.
### Defining Event Listeners

To define an event listener, you subclass either `testing::TestEventListener`
or `testing::EmptyTestEventListener`. The former is an (abstract) interface,
where *each pure virtual method can be overridden to handle a test event* (for
example, when a test starts, the `OnTestStart()` method will be called). The
latter provides an empty implementation of all methods in the interface, such
that a subclass only needs to override the methods it cares about.
When an event is fired, its context is passed to the handler function as an
argument. The following argument types are used:

*   `UnitTest` reflects the state of the entire test program,
*   `TestCase` has information about a test case, which can contain one or more
    tests,
*   `TestInfo` contains the state of a test, and
*   `TestPartResult` represents the result of a test assertion.

An event handler function can examine the argument it receives to find out
interesting information about the event and the test program's state.
Here's an example:

```c++
class MinimalistPrinter : public ::testing::EmptyTestEventListener {
  // Called before a test starts.
  virtual void OnTestStart(const ::testing::TestInfo& test_info) {
    printf("*** Test %s.%s starting.\n",
           test_info.test_case_name(), test_info.name());
  }

  // Called after a failed assertion or a SUCCEED() invocation.
  virtual void OnTestPartResult(const ::testing::TestPartResult& test_part_result) {
    printf("%s in %s:%d\n%s\n",
           test_part_result.failed() ? "*** Failure" : "Success",
           test_part_result.file_name(),
           test_part_result.line_number(),
           test_part_result.summary());
  }

  // Called after a test ends.
  virtual void OnTestEnd(const ::testing::TestInfo& test_info) {
    printf("*** Test %s.%s ending.\n",
           test_info.test_case_name(), test_info.name());
  }
};
```
### Using Event Listeners

To use the event listener you have defined, add an instance of it to the
googletest event listener list (represented by class `TestEventListeners` -
note the "s" at the end of the name) in your `main()` function, before calling
`RUN_ALL_TESTS()`:

```c++
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Gets hold of the event listener list.
  ::testing::TestEventListeners& listeners =
      ::testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end.  googletest takes the ownership.
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
}
```
There's only one problem: the default test result printer is still in effect, so
its output will mingle with the output from your minimalist printer. To suppress
the default printer, just release it from the event listener list and delete it.
You can do so by adding one line:

```c++
  ...
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
```
Now, sit back and enjoy a completely different output from your tests. For more
details, see `sample9_unittest.cc`.
You may append more than one listener to the list. When an `On*Start()` or
`OnTestPartResult()` event is fired, the listeners will receive it in the order
they appear in the list (since new listeners are added to the end of the list,
the default text printer and the default XML generator will receive the event
first). An `On*End()` event will be received by the listeners in the *reverse*
order. This allows output by listeners added later to be framed by output from
listeners added earlier.
### Generating Failures in Listeners

You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc)
when processing an event. There are some restrictions:

1.  You cannot generate any failure in `OnTestPartResult()` (otherwise it will
    cause `OnTestPartResult()` to be called recursively).

2.  A listener that handles `OnTestPartResult()` is not allowed to generate any
    failure.

When you add listeners to the listener list, you should put listeners that
handle `OnTestPartResult()` *before* listeners that can generate failures. This
ensures that failures generated by the latter are attributed to the right test
by the former.

See `sample10_unittest.cc` for an example of a failure-raising listener.
## Running Test Programs: Advanced Options

googletest test programs are ordinary executables. Once built, you can run them
directly and affect their behavior via the following environment variables
and/or command line flags. For the flags to work, your programs must call
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.

To see a list of supported flags and their usage, please run your test program
with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.

If an option is specified both by an environment variable and by a flag, the
latter takes precedence.

### Selecting Tests
#### Listing Test Names

Sometimes it is necessary to list the available tests in a program before
running them so that a filter may be applied if needed. Including the flag
`--gtest_list_tests` overrides all other flags and lists the tests instead of
running them: each test case name is printed on its own line, followed by the
names of its tests, indented one level.

None of the tests listed are actually run if the flag is provided. There is no
corresponding environment variable for this flag.
**Availability**: Linux, Windows, Mac.

#### Running a Subset of the Tests

By default, a googletest program runs all tests the user has defined. Sometimes,
you want to run only a subset of the tests (e.g. for debugging or quickly
verifying a change). If you set the `GTEST_FILTER` environment variable or the
`--gtest_filter` flag to a filter string, googletest will only run the tests
whose full names (in the form of `TestCaseName.TestName`) match the filter.
The format of a filter is a '`:`'-separated list of wildcard patterns (called
the *positive patterns*) optionally followed by a '`-`' and another
'`:`'-separated pattern list (called the *negative patterns*). A test matches
the filter if and only if it matches any of the positive patterns but does not
match any of the negative patterns.

A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
character). For convenience, the filter `'*-NegativePatterns'` can also be
written as `'-NegativePatterns'`.

For example:
*   `./foo_test` Has no flag, and thus runs all its tests.
*   `./foo_test --gtest_filter=*` Also runs everything, due to the single
    match-everything `*` value.
*   `./foo_test --gtest_filter=FooTest.*` Runs everything in test case
    `FooTest`.
*   `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
    name contains either `"Null"` or `"Constructor"`.
*   `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
*   `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
    case `FooTest` except `FooTest.Bar`.
*   `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs
    everything in test case `FooTest` except `FooTest.Bar` and everything in
    test case `BarTest` except `BarTest.Foo`.
#### Temporarily Disabling Tests

If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).

If you need to disable all tests in a test case, you can either add `DISABLED_`
to the front of the name of each test, or alternatively add it to the front of
the test case name.

For example, the following tests won't be run by googletest, even though they
will still be compiled:
```c++
// Tests that Foo does Abc.
TEST(FooTest, DISABLED_DoesAbc) { ... }

class DISABLED_BarTest : public ::testing::Test { ... };

// Tests that Bar does Xyz.
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
```
NOTE: This feature should only be used for temporary pain-relief. You still have
to fix the disabled tests at a later date. As a reminder, googletest will print
a banner warning you if a test program contains any disabled tests.

TIP: You can easily count the number of disabled tests you have using `gsearch`
and/or `grep`. This number can be used as a metric for improving your test
quality.

**Availability**: Linux, Windows, Mac.
#### Temporarily Enabling Disabled Tests

To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag or set the
`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.
You can combine this with the `--gtest_filter` flag to further select which
disabled tests to run.

**Availability**: Linux, Windows, Mac.
### Repeating the Tests

Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
will fail only 1% of the time, making it rather hard to reproduce the bug under
a debugger. This can be a major source of frustration.

The `--gtest_repeat` flag allows you to repeat all (or selected) test methods in
a program many times. Hopefully, a flaky test will eventually fail and give you
a chance to debug. Here's how to use it:

```none
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.

$ foo_test --gtest_repeat=-1
A negative count means repeating forever.

$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure. This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.

$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.*
Repeat the tests whose name matches the filter 1000 times.
```
If your test program contains
[global set-up/tear-down](#global-set-up-and-tear-down) code, it will be
repeated in each iteration as well, as the flakiness may be in it. You can also
specify the repeat count by setting the `GTEST_REPEAT` environment variable.

**Availability**: Linux, Windows, Mac.
### Shuffling the Tests

You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
environment variable to `1`) to run the tests in a program in a random order.
This helps to reveal bad dependencies between tests.

By default, googletest uses a random seed calculated from the current time.
Therefore you'll get a different order every time. The console output includes
the random seed value, such that you can reproduce an order-related test failure
later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED`
flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is an
integer in the range [0, 99999]. The seed value 0 is special: it tells
googletest to do the default behavior of calculating the seed from the current
time.

If you combine this with `--gtest_repeat=N`, googletest will pick a different
random seed and re-shuffle the tests in each iteration.
**Availability**: Linux, Windows, Mac.

### Controlling Test Output

#### Colored Terminal Output

googletest can use colors in its terminal output to make it easier to spot the
important information:
```none
[----------] 1 test from FooTest
[ RUN      ] FooTest.DoesAbc
[       OK ] FooTest.DoesAbc
[----------] 2 tests from BarTest
[ RUN      ] BarTest.HasXyzProperty
[       OK ] BarTest.HasXyzProperty
[ RUN      ] BarTest.ReturnsTrueOnSuccess
... some error messages ...
[  FAILED  ] BarTest.ReturnsTrueOnSuccess
...
[==========] 30 tests from 14 test cases ran.
[  PASSED  ] 28 tests.
[  FAILED  ] 2 tests, listed below:
[  FAILED  ] BarTest.ReturnsTrueOnSuccess
[  FAILED  ] AnotherTest.DoesXyz
```
You can set the `GTEST_COLOR` environment variable or the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let googletest decide. When the value is `auto`, googletest
will use colors if and only if the output goes to a terminal and (on non-Windows
platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.

**Availability**: Linux, Windows, Mac.

#### Suppressing the Elapsed Time

By default, googletest prints the time it takes to run each test. To disable
that, run the test program with the `--gtest_print_time=0` command line flag, or
set the `GTEST_PRINT_TIME` environment variable to `0`.

**Availability**: Linux, Windows, Mac.
#### Suppressing UTF-8 Text Output

In case of assertion failures, googletest prints expected and actual values of
type `string` both as hex-encoded strings and as readable UTF-8 text if they
contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8
text because, for example, you don't have a UTF-8 compatible output medium, run
the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8`
environment variable to `0`.

**Availability**: Linux, Windows, Mac.
#### Generating an XML Report

googletest can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can help
you identify slow tests. It can also be consumed by dashboards that display
per-test-method error messages.

To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will
create the file at the given location. You can also just use the string `"xml"`,
in which case the output can be found in the `test_detail.xml` file in the
current directory.
If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), googletest will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
over from a previous run), googletest will pick a different name (e.g.
`foo_test_1.xml`) to avoid overwriting it.
The report is based on the `junitreport` Ant task. Since that format was
originally intended for Java, a little interpretation is required to make it
apply to googletest tests, as shown here:

```xml
<testsuites name="AllTests" ...>
  <testsuite name="test_case_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```

*   The root `<testsuites>` element corresponds to the entire test program.
*   `<testsuite>` elements correspond to googletest test cases.
*   `<testcase>` elements correspond to googletest test functions.
For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```
could generate this report:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015">
    <testcase name="Addition" status="run" time="0.007" classname="">
      <failure message="Value of: add(1, 1)&#x0A;  Actual: 3&#x0A;Expected: 2" type=""/>
      <failure message="Value of: add(1, -1)&#x0A;  Actual: 1&#x0A;Expected: 0" type=""/>
    </testcase>
    <testcase name="Subtraction" status="run" time="0.005" classname="" />
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005">
    <testcase name="NonContradiction" status="run" time="0.005" classname="" />
  </testsuite>
</testsuites>
```
*   The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how
    many test functions the googletest program or test case contains, while the
    `failures` attribute tells how many of them failed.

*   The `time` attribute expresses the duration of the test, test case, or
    entire test program in seconds.

*   The `timestamp` attribute records the local date and time of the test
    execution.

*   Each `<failure>` element corresponds to a single failed googletest
    assertion.

**Availability**: Linux, Windows, Mac.
#### Generating a JSON Report

googletest can also emit a JSON report as an alternative format to XML. To
generate the JSON report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"json:path_to_output_file"`, which will
create the file at the given location. You can also just use the string
`"json"`, in which case the output can be found in the `test_detail.json` file
in the current directory.
The report format conforms to the following JSON Schema:

```json
{
  "$schema": "http://json-schema.org/schema#",
  "type": "object",
  "definitions": {
    "TestCase": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "tests": { "type": "integer" },
        "failures": { "type": "integer" },
        "disabled": { "type": "integer" },
        "time": { "type": "string" },
        "testsuite": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/TestInfo"
          }
        }
      }
    },
    "TestInfo": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "status": {
          "type": "string",
          "enum": ["RUN", "NOTRUN"]
        },
        "time": { "type": "string" },
        "classname": { "type": "string" },
        "failures": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/Failure"
          }
        }
      }
    },
    "Failure": {
      "type": "object",
      "properties": {
        "failures": { "type": "string" },
        "type": { "type": "string" }
      }
    }
  },
  "properties": {
    "tests": { "type": "integer" },
    "failures": { "type": "integer" },
    "disabled": { "type": "integer" },
    "errors": { "type": "integer" },
    "timestamp": {
      "type": "string",
      "format": "date-time"
    },
    "time": { "type": "string" },
    "name": { "type": "string" },
    "testsuites": {
      "type": "array",
      "items": {
        "$ref": "#/definitions/TestCase"
      }
    }
  }
}
```
The report uses the format that conforms to the following Proto3 using the [JSON
encoding](https://developers.google.com/protocol-buffers/docs/proto3#json):

```proto
syntax = "proto3";

package googletest;

import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";

message UnitTest {
  int32 tests = 1;
  int32 failures = 2;
  int32 disabled = 3;
  int32 errors = 4;
  google.protobuf.Timestamp timestamp = 5;
  google.protobuf.Duration time = 6;
  string name = 7;
  repeated TestCase testsuites = 8;
}

message TestCase {
  string name = 1;
  int32 tests = 2;
  int32 failures = 3;
  int32 disabled = 4;
  int32 errors = 5;
  google.protobuf.Duration time = 6;
  repeated TestInfo testsuite = 7;
}

message TestInfo {
  string name = 1;
  enum Status {
    RUN = 0;
    NOTRUN = 1;
  }
  Status status = 2;
  google.protobuf.Duration time = 3;
  string classname = 4;
  message Failure {
    string failures = 1;
    string type = 2;
  }
  repeated Failure failures = 5;
}
```
For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```
could generate this report:

```json
{
  "tests": 3,
  "failures": 1,
  "errors": 0,
  "time": "0.035s",
  "timestamp": "2011-10-31T18:52:42Z",
  "name": "AllTests",
  "testsuites": [
    {
      "name": "MathTest",
      "tests": 2,
      "failures": 1,
      "errors": 0,
      "time": "0.015s",
      "testsuite": [
        {
          "name": "Addition",
          "status": "RUN",
          "time": "0.007s",
          "classname": "",
          "failures": [
            {
              "message": "Value of: add(1, 1)\x0A  Actual: 3\x0AExpected: 2",
              "type": ""
            },
            {
              "message": "Value of: add(1, -1)\x0A  Actual: 1\x0AExpected: 0",
              "type": ""
            }
          ]
        },
        {
          "name": "Subtraction",
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    },
    {
      "name": "LogicTest",
      "tests": 1,
      "failures": 0,
      "errors": 0,
      "time": "0.005s",
      "testsuite": [
        {
          "name": "NonContradiction",
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    }
  ]
}
```
IMPORTANT: The exact format of the JSON document is subject to change.

**Availability**: Linux, Windows, Mac.
### Controlling How Failures Are Reported

#### Turning Assertion Failures into Break-Points

When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. googletest's *break-on-failure* mode supports this behavior.

To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
command line flag.

**Availability**: Linux, Windows, Mac.
#### Disabling Catching Test-Thrown Exceptions

googletest can be used either with or without exceptions enabled. If a test
throws a C++ exception or (on Windows) a structured exception (SEH), by default
googletest catches it, reports it as a test failure, and continues with the next
test method. This maximizes the coverage of a test run. Also, on Windows an
uncaught exception will cause a pop-up window, so catching the exceptions allows
you to run the tests automatically.

When debugging test failures, however, you may instead want the exceptions to be
handled by the debugger, such that you can examine the call stack when an
exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when
running the tests.

**Availability**: Linux, Windows, Mac.