# Advanced googletest Topics

## Introduction

Now that you have read the [googletest Primer](primer.md) and learned how to
write tests using googletest, it's time to learn some new tricks. This document
will show you more assertions as well as how to construct complex failure
messages, propagate fatal failures, reuse and speed up your test fixtures, and
use various flags with your tests.

## More Assertions

This section covers some less frequently used, but still significant,
assertions.

### Explicit Success and Failure

See [Explicit Success and Failure](reference/assertions.md#success-failure) in
the Assertions Reference.

### Exception Assertions

See [Exception Assertions](reference/assertions.md#exceptions) in the Assertions
Reference.

### Predicate Assertions for Better Error Messages

Even though googletest has a rich set of assertions, they can never be complete,
as it's impossible (and not a good idea) to anticipate all the scenarios a user
might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check
a complex expression, for lack of a better macro. This has the problem of not
showing you the values of the parts of the expression, making it hard to
understand what went wrong. As a workaround, some users choose to construct the
failure message themselves, streaming it into `EXPECT_TRUE()`. However, this is
awkward, especially when the expression has side effects or is expensive to
evaluate.

googletest gives you three different options to solve this problem:

#### Using an Existing Boolean Function

If you already have a function or functor that returns `bool` (or a type that
can be implicitly converted to `bool`), you can use it in a *predicate
assertion* to get the function arguments printed for free. See
[`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) in the Assertions
Reference for details.
#### Using a Function That Returns an AssertionResult

While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not
satisfactory: you have to use different macros for different arities, and it
feels more like Lisp than C++. The `::testing::AssertionResult` class solves
this problem.

An `AssertionResult` object represents the result of an assertion (whether it's
a success or a failure, and an associated message). You can create an
`AssertionResult` using one of these factory functions:

```c++
namespace testing {

// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();

}
```

You can then use the `<<` operator to stream messages to the `AssertionResult`
object.

To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
write a predicate function that returns `AssertionResult` instead of `bool`. For
example, if you define `IsEven()` as:

```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess();
  else
    return testing::AssertionFailure() << n << " is odd";
}
```

instead of:

```c++
bool IsEven(int n) {
  return (n % 2) == 0;
}
```

the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:

```none
Value of: IsEven(Fib(4))
  Actual: false (3 is odd)
Expected: true
```

instead of a more opaque

```none
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```

If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well
(one third of Boolean assertions in the Google code base are negative ones), and
are fine with making the predicate slower in the success case, you can supply a
success message:

```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess() << n << " is even";
  else
    return testing::AssertionFailure() << n << " is odd";
}
```

Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print

```none
  Value of: IsEven(Fib(6))
     Actual: true (8 is even)
  Expected: false
```

#### Using a Predicate-Formatter

If you find the default message generated by
[`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) and
[`EXPECT_TRUE`](reference/assertions.md#EXPECT_TRUE) unsatisfactory, or some
arguments to your predicate do not support streaming to `ostream`, you can
instead use *predicate-formatter assertions* to *fully* customize how the
message is formatted. See
[`EXPECT_PRED_FORMAT*`](reference/assertions.md#EXPECT_PRED_FORMAT) in the
Assertions Reference for details.

### Floating-Point Comparison

See [Floating-Point Comparison](reference/assertions.md#floating-point) in the
Assertions Reference.
#### Floating-Point Predicate-Format Functions

Some floating-point operations are useful, but not that often used. In order to
avoid an explosion of new macros, we provide them as predicate-format functions
that can be used in the predicate assertion macro
[`EXPECT_PRED_FORMAT2`](reference/assertions.md#EXPECT_PRED_FORMAT), for
example:

```c++
EXPECT_PRED_FORMAT2(testing::FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(testing::DoubleLE, val1, val2);
```

The above code verifies that `val1` is less than, or approximately equal to,
`val2`.

### Asserting Using gMock Matchers

See [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) in the Assertions
Reference.

### More String Assertions

(Please read the [previous](#asserting-using-gmock-matchers) section first if
you haven't.)

You can use the gMock [string matchers](reference/matchers.md#string-matchers)
with [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) to do more string
comparison tricks (sub-string, prefix, suffix, regular expression, etc.). For
example,

```c++
using ::testing::HasSubstr;
using ::testing::MatchesRegex;
...
  ASSERT_THAT(foo_string, HasSubstr("needle"));
  EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
```

### Windows HRESULT assertions

See [Windows HRESULT Assertions](reference/assertions.md#HRESULT) in the
Assertions Reference.

### Type Assertions

You can call the function

```c++
::testing::StaticAssertTypeEq<T1, T2>();
```

to assert that types `T1` and `T2` are the same. The function does nothing if
the assertion is satisfied. If the types are different, the function call will
fail to compile, the compiler error message will say that `T1 and T2 are not the
same type` and most likely (depending on the compiler) show you the actual
values of `T1` and `T2`. This is mainly useful inside template code.
**Caveat**: When used inside a member function of a class template or a function
template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is
instantiated. For example, given:

```c++
template <typename T> class Foo {
 public:
  void Bar() { testing::StaticAssertTypeEq<int, T>(); }
};
```

the code:

```c++
void Test1() { Foo<bool> foo; }
```

will not generate a compiler error, as `Foo<bool>::Bar()` is never actually
instantiated. Instead, you need:

```c++
void Test2() { Foo<bool> foo; foo.Bar(); }
```

to cause a compiler error.

### Assertion Placement

You can use assertions in any C++ function. In particular, it doesn't have to be
a method of the test fixture class. The one constraint is that assertions that
generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in
void-returning functions. This is a consequence of Google's not using
exceptions. If you place one in a non-void function, you'll get a confusing
compile error like `"error: void value not ignored as it ought to be"`, `"cannot
initialize return object of type 'bool' with an rvalue of type 'void'"`, or
`"error: no viable conversion from 'void' to 'string'"`.

If you need to use fatal assertions in a function that returns non-void, one
option is to make the function return the value in an out parameter instead. For
example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
need to make sure that `*result` contains some sensible value even when the
function returns prematurely. As the function now returns `void`, you can use
any assertion inside of it.

If changing the function's type is not an option, you should just use assertions
that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.
{: .callout .note}
NOTE: Constructors and destructors are not considered void-returning functions,
according to the C++ language specification, and so you may not use fatal
assertions in them; you'll get a compilation error if you try. Instead, either
call `abort` and crash the entire test executable, or put the fatal assertion in
a `SetUp`/`TearDown` function; see
[constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp).

{: .callout .warning}
WARNING: A fatal assertion in a helper function (private void-returning method)
called from a constructor or destructor does not terminate the current test, as
your intuition might suggest: it merely returns from the constructor or
destructor early, possibly leaving your object in a partially-constructed or
partially-destructed state! You almost certainly want to `abort` or use
`SetUp`/`TearDown` instead.

## Skipping test execution

Related to the assertions `SUCCEED()` and `FAIL()`, you can prevent further test
execution at runtime with the `GTEST_SKIP()` macro. This is useful when you need
to check for preconditions of the system under test during runtime and skip
tests in a meaningful way.

`GTEST_SKIP()` can be used in individual test cases or in the `SetUp()` methods
of classes derived from either `::testing::Environment` or `::testing::Test`.
For example:

```c++
TEST(SkipTest, DoesSkip) {
  GTEST_SKIP() << "Skipping single test";
  EXPECT_EQ(0, 1);  // Won't fail; it won't be executed
}

class SkipFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    GTEST_SKIP() << "Skipping all tests for this fixture";
  }
};

// Tests for SkipFixture won't be executed.
TEST_F(SkipFixture, SkipsOneTest) {
  EXPECT_EQ(5, 7);  // Won't fail
}
```

As with assertion macros, you can stream a custom message into `GTEST_SKIP()`.
## Teaching googletest How to Print Your Values

When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument
values to help you debug. It does this using a user-extensible value printer.

This printer knows how to print built-in C++ types, native arrays, STL
containers, and any type that supports the `<<` operator. For other types, it
prints the raw bytes in the value and hopes that you the user can figure it out.

As mentioned earlier, the printer is *extensible*. That means you can teach it
to do a better job at printing your particular type than to dump the bytes. To
do that, define `<<` for your type:

```c++
#include <ostream>

namespace foo {

class Bar {  // We want googletest to be able to print instances of this.
...
  // Create a free inline friend function.
  friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
    return os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that the
// << operator is defined in the SAME namespace that defines Bar. C++'s look-up
// rules rely on that.
std::ostream& operator<<(std::ostream& os, const Bar& bar) {
  return os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

Sometimes, this might not be an option: your team may consider it bad style to
have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that
doesn't do what you want (and you cannot change it). If so, you can instead
define a `PrintTo()` function like this:

```c++
#include <ostream>

namespace foo {

class Bar {
  ...
  friend void PrintTo(const Bar& bar, std::ostream* os) {
    *os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that PrintTo()
// is defined in the SAME namespace that defines Bar. C++'s look-up rules rely
// on that.
void PrintTo(const Bar& bar, std::ostream* os) {
  *os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

If you have defined both `<<` and `PrintTo()`, the latter will be used when
googletest is concerned. This allows you to customize how the value appears in
googletest's output without affecting code that relies on the behavior of its
`<<` operator.

If you want to print a value `x` using googletest's value printer yourself, just
call `::testing::PrintToString(x)`, which returns an `std::string`:

```c++
vector<pair<Bar, int> > bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << testing::PrintToString(bar_ints);
```

## Death Tests

In many applications, there are assertions that can cause application failure if
a condition is not met. These consistency checks, which ensure that the program
is in a known good state, are there to fail at the earliest possible time after
some program state is corrupted. If the assertion checks the wrong condition,
then the program may proceed in an erroneous state, which could lead to memory
corruption, security holes, or worse. Hence it is vitally important to test that
such assertion statements work as expected.

Since these precondition checks cause the processes to die, we call such tests
_death tests_. More generally, any test that checks that a program terminates
(except by throwing an exception) in an expected fashion is also a death test.
Note that if a piece of code throws an exception, we don't consider it "death"
for the purpose of death tests, as the caller of the code could catch the
exception and avoid the crash. If you want to verify exceptions thrown by your
code, see [Exception Assertions](#ExceptionAssertions).

If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see
["Catching" Failures](#catching-failures).

### How to Write a Death Test

GoogleTest provides assertion macros to support death tests. See
[Death Assertions](reference/assertions.md#death) in the Assertions Reference
for details.

To write a death test, simply use one of the macros inside your test function.
For example,

```c++
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({
    int n = 5;
    Foo(&n);
  }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillProcess) {
  EXPECT_EXIT(KillProcess(), testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}
```

verifies that:

*   calling `Foo(5)` causes the process to die with the given error message,
*   calling `NormalExit()` causes the process to print `"Success"` to stderr and
    exit with exit code 0, and
*   calling `KillProcess()` kills the process with signal `SIGKILL`.

The test function body may contain other assertions and statements as well, if
necessary.

Note that a death test only cares about three things:

1.  does `statement` abort or exit the process?
2.  (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
    satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`)
    is the exit status non-zero? And
3.  does the stderr output match `matcher`?
In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it
will **not** cause the death test to fail, as googletest assertions don't abort
the process.

### Death Test Naming

{: .callout .important}
IMPORTANT: We strongly recommend you to follow the convention of naming your
**test suite** (not test) `*DeathTest` when it contains a death test, as
demonstrated in the above example. The
[Death Tests And Threads](#death-tests-and-threads) section below explains why.

If a test fixture class is shared by normal tests and death tests, you can use
`using` or `typedef` to introduce an alias for the fixture class and avoid
duplicating its code:

```c++
class FooTest : public testing::Test { ... };

using FooDeathTest = FooTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```

### Regular Expression Syntax

On POSIX systems (e.g. Linux, Cygwin, and Mac), googletest uses the
[POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04)
syntax. To learn about this syntax, you may want to read this
[Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).

On Windows, googletest uses its own simple regular expression implementation. It
lacks many features. For example, we don't support union (`"x|y"`), grouping
(`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among
others. Below is what we do support (`A` denotes a literal character, period
(`.`), or a single `\\` escape sequence; `x` and `y` denote regular
expressions.):

Expression | Meaning
---------- | --------------------------------------------------------------
`c`        | matches any literal character `c`
`\\d`      | matches any decimal digit
`\\D`      | matches any character that's not a decimal digit
`\\f`      | matches `\f`
`\\n`      | matches `\n`
`\\r`      | matches `\r`
`\\s`      | matches any ASCII whitespace, including `\n`
`\\S`      | matches any character that's not a whitespace
`\\t`      | matches `\t`
`\\v`      | matches `\v`
`\\w`      | matches any letter, `_`, or decimal digit
`\\W`      | matches any character that `\\w` doesn't match
`\\c`      | matches any literal character `c`, which must be a punctuation
`.`        | matches any single character except `\n`
`A?`       | matches 0 or 1 occurrences of `A`
`A*`       | matches 0 or many occurrences of `A`
`A+`       | matches 1 or many occurrences of `A`
`^`        | matches the beginning of a string (not that of each line)
`$`        | matches the end of a string (not that of each line)
`xy`       | matches `x` followed by `y`

To help you determine which capability is available on your system, googletest
defines macros to govern which regular expression it is using. The macros are:
`GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death
tests to work in all cases, you can either `#if` on these macros or use the more
limited syntax only.

### How It Works

See [Death Assertions](reference/assertions.md#death) in the Assertions
Reference.

### Death Tests And Threads

The reason for the two death test styles has to do with thread safety. Due to
well-known problems with forking in the presence of threads, death tests should
be run in a single-threaded context. Sometimes, however, it isn't feasible to
arrange that kind of environment.
For example, statically-initialized modules may start threads before `main()` is
ever reached. Once threads have been created, it may be difficult or impossible
to clean them up.

googletest has three features intended to raise awareness of threading issues.

1.  A warning is emitted if multiple threads are running when a death test is
    encountered.
2.  Test suites with a name ending in "DeathTest" are run before all other
    tests.
3.  It uses `clone()` instead of `fork()` to spawn the child process on Linux
    (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely
    to cause the child to hang when the parent process has multiple threads.

It's perfectly fine to create threads inside a death test statement; they are
executed in a separate process and cannot affect the parent.

### Death Test Styles

The "threadsafe" death test style was introduced in order to help mitigate the
risks of testing in a possibly multithreaded environment. It trades increased
test execution time (potentially dramatically so) for improved thread safety.

The automated testing framework does not set the style flag. You can choose a
particular style of death tests by setting the flag programmatically:

```c++
GTEST_FLAG_SET(death_test_style, "threadsafe");
```

You can do this in `main()` to set the style for all death tests in the binary,
or in individual tests. Recall that flags are saved before running each test and
restored afterwards, so you need not do that yourself.
For example:

```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  GTEST_FLAG_SET(death_test_style, "fast");
  return RUN_ALL_TESTS();
}

TEST(MyDeathTest, TestOne) {
  GTEST_FLAG_SET(death_test_style, "threadsafe");
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```

### Caveats

The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If
it leaves the current function via a `return` statement or by throwing an
exception, the death test is considered to have failed. Some googletest macros
may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid
them in `statement`.

Since `statement` runs in the child process, any in-memory side effect (e.g.
modifying a variable, releasing memory, etc.) it causes will *not* be observable
in the parent process. In particular, if you release memory in a death test,
your program will fail the heap check as the parent process will never see the
memory reclaimed. To solve this problem, you can

1.  try not to free memory in a death test;
2.  free the memory again in the parent process; or
3.  do not use the heap checker in your program.

Due to an implementation detail, you cannot place multiple death test assertions
on the same line; otherwise, compilation will fail with an unobvious error
message.

Despite the improved thread safety afforded by the "threadsafe" style of death
test, thread problems such as deadlock are still possible in the presence of
handlers registered with `pthread_atfork(3)`.
611 612## Using Assertions in Sub-routines 613 614{: .callout .note} 615Note: If you want to put a series of test assertions in a subroutine to check 616for a complex condition, consider using 617[a custom GMock matcher](gmock_cook_book.md#NewMatchers) instead. This lets you 618provide a more readable error message in case of failure and avoid all of the 619issues described below. 620 621### Adding Traces to Assertions 622 623If a test sub-routine is called from several places, when an assertion inside it 624fails, it can be hard to tell which invocation of the sub-routine the failure is 625from. You can alleviate this problem using extra logging or custom failure 626messages, but that usually clutters up your tests. A better solution is to use 627the `SCOPED_TRACE` macro or the `ScopedTrace` utility: 628 629```c++ 630SCOPED_TRACE(message); 631``` 632 633```c++ 634ScopedTrace trace("file_path", line_number, message); 635``` 636 637where `message` can be anything streamable to `std::ostream`. `SCOPED_TRACE` 638macro will cause the current file name, line number, and the given message to be 639added in every failure message. `ScopedTrace` accepts explicit file name and 640line number in arguments, which is useful for writing test helpers. The effect 641will be undone when the control leaves the current lexical scope. 642 643For example, 644 645```c++ 64610: void Sub1(int n) { 64711: EXPECT_EQ(Bar(n), 1); 64812: EXPECT_EQ(Bar(n + 1), 2); 64913: } 65014: 65115: TEST(FooTest, Bar) { 65216: { 65317: SCOPED_TRACE("A"); // This trace point will be included in 65418: // every failure in this scope. 65519: Sub1(1); 65620: } 65721: // Now it won't. 
65822: Sub1(9); 65923: } 660``` 661 662could result in messages like these: 663 664```none 665path/to/foo_test.cc:11: Failure 666Value of: Bar(n) 667Expected: 1 668 Actual: 2 669Google Test trace: 670path/to/foo_test.cc:17: A 671 672path/to/foo_test.cc:12: Failure 673Value of: Bar(n + 1) 674Expected: 2 675 Actual: 3 676``` 677 678Without the trace, it would've been difficult to know which invocation of 679`Sub1()` the two failures come from respectively. (You could add an extra 680message to each assertion in `Sub1()` to indicate the value of `n`, but that's 681tedious.) 682 683Some tips on using `SCOPED_TRACE`: 684 6851. With a suitable message, it's often enough to use `SCOPED_TRACE` at the 686 beginning of a sub-routine, instead of at each call site. 6872. When calling sub-routines inside a loop, make the loop iterator part of the 688 message in `SCOPED_TRACE` such that you can know which iteration the failure 689 is from. 6903. Sometimes the line number of the trace point is enough for identifying the 691 particular invocation of a sub-routine. In this case, you don't have to 692 choose a unique message for `SCOPED_TRACE`. You can simply use `""`. 6934. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer 694 scope. In this case, all active trace points will be included in the failure 695 messages, in reverse order they are encountered. 6965. The trace dump is clickable in Emacs - hit `return` on a line number and 697 you'll be taken to that line in the source file! 698 699### Propagating Fatal Failures 700 701A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that 702when they fail they only abort the _current function_, not the entire test. For 703example, the following test will segfault: 704 705```c++ 706void Subroutine() { 707 // Generates a fatal failure and aborts the current function. 708 ASSERT_EQ(1, 2); 709 710 // The following won't be executed. 711 ... 
712} 713 714TEST(FooTest, Bar) { 715 Subroutine(); // The intended behavior is for the fatal failure 716 // in Subroutine() to abort the entire test. 717 718 // The actual behavior: the function goes on after Subroutine() returns. 719 int* p = nullptr; 720 *p = 3; // Segfault! 721} 722``` 723 724To alleviate this, googletest provides three different solutions. You could use 725either exceptions, the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions or the 726`HasFatalFailure()` function. They are described in the following two 727subsections. 728 729#### Asserting on Subroutines with an exception 730 731The following code can turn ASSERT-failure into an exception: 732 733```c++ 734class ThrowListener : public testing::EmptyTestEventListener { 735 void OnTestPartResult(const testing::TestPartResult& result) override { 736 if (result.type() == testing::TestPartResult::kFatalFailure) { 737 throw testing::AssertionException(result); 738 } 739 } 740}; 741int main(int argc, char** argv) { 742 ... 743 testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener); 744 return RUN_ALL_TESTS(); 745} 746``` 747 748This listener should be added after other listeners if you have any, otherwise 749they won't see failed `OnTestPartResult`. 750 751#### Asserting on Subroutines 752 753As shown above, if your test calls a subroutine that has an `ASSERT_*` failure 754in it, the test will continue after the subroutine returns. This may not be what 755you want. 756 757Often people want fatal failures to propagate like exceptions. For that 758googletest offers the following macros: 759 760Fatal assertion | Nonfatal assertion | Verifies 761------------------------------------- | ------------------------------------- | -------- 762`ASSERT_NO_FATAL_FAILURE(statement);` | `EXPECT_NO_FATAL_FAILURE(statement);` | `statement` doesn't generate any new fatal failures in the current thread. 
Only failures in the thread that executes the assertion are checked to determine
the result of this type of assertions. If `statement` creates new threads,
failures in these threads are ignored.

Examples:

```c++
ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
```

Assertions from multiple threads are currently not supported on Windows.

#### Checking for Failures in the Current Test

`HasFatalFailure()` in the `::testing::Test` class returns `true` if an
assertion in the current test has suffered a fatal failure. This allows
functions to catch fatal failures in a sub-routine and return early.

```c++
class Test {
 public:
  ...
  static bool HasFatalFailure();
};
```

The typical usage, which basically simulates the behavior of a thrown exception,
is:

```c++
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure()) return;

  // The following won't be executed.
  ...
}
```

If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
fixture, you must add the `::testing::Test::` prefix, as in:

```c++
if (testing::Test::HasFatalFailure()) return;
```

Similarly, `HasNonfatalFailure()` returns `true` if the current test has at
least one non-fatal failure, and `HasFailure()` returns `true` if the current
test has at least one failure of either kind.

## Logging Additional Information

In your test code, you can call `RecordProperty("key", value)` to log additional
information, where `value` can be either a string or an `int`. The *last* value
recorded for a key will be emitted to the
[XML output](#generating-an-xml-report) if you specify one.
For example, the test

```c++
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
  RecordProperty("MaximumWidgets", ComputeMaxUsage());
  RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```

will output XML like this:

```xml
  ...
    <testcase name="MinAndMaxWidgets" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" />
  ...
```

{: .callout .note}
> NOTE:
>
> *   `RecordProperty()` is a static member of the `Test` class. Therefore it
>     needs to be prefixed with `::testing::Test::` if used outside of the
>     `TEST` body and the test fixture class.
> *   *`key`* must be a valid XML attribute name, and cannot conflict with the
>     ones already used by googletest (`name`, `status`, `time`, `classname`,
>     `type_param`, and `value_param`).
> *   Calling `RecordProperty()` outside of the lifespan of a test is allowed.
>     If it's called outside of a test but between a test suite's
>     `SetUpTestSuite()` and `TearDownTestSuite()` methods, it will be
>     attributed to the XML element for the test suite. If it's called outside
>     of all test suites (e.g. in a test environment), it will be attributed to
>     the top-level XML element.

## Sharing Resources Between Tests in the Same Test Suite

googletest creates a new test fixture object for each test in order to make
tests independent and easier to debug. However, sometimes tests use resources
that are expensive to set up, making the one-copy-per-test model prohibitively
expensive.

If the tests don't change the resource, there's no harm in their sharing a
single resource copy. So, in addition to per-test set-up/tear-down, googletest
also supports per-test-suite set-up/tear-down. To use it:

1.  In your test fixture class (say `FooTest`), declare as `static` some member
    variables to hold the shared resources.
2.  Outside your test fixture class (typically just below it), define those
    member variables, optionally giving them initial values.
3.  In the same test fixture class, define a `static void SetUpTestSuite()`
    function (remember not to spell it as **`SetupTestSuite`** with a small
    `u`!) to set up the shared resources and a `static void TearDownTestSuite()`
    function to tear them down.

That's it! googletest automatically calls `SetUpTestSuite()` before running the
*first test* in the `FooTest` test suite (i.e. before creating the first
`FooTest` object), and calls `TearDownTestSuite()` after running the *last test*
in it (i.e. after deleting the last `FooTest` object). In between, the tests can
use the shared resources.

Remember that the test order is undefined, so your code can't depend on a test
preceding or following another. Also, the tests must either not modify the state
of any shared resource, or, if they do modify the state, they must restore the
state to its original value before passing control to the next test.

Note that `SetUpTestSuite()` may be called multiple times for a test fixture
class that has derived classes, so you should not expect code in the function
body to be run only once. Also, derived classes still have access to shared
resources defined as static members, so careful consideration is needed when
managing shared resources to avoid memory leaks.

Here's an example of per-test-suite set-up and tear-down:

```c++
class FooTest : public testing::Test {
 protected:
  // Per-test-suite set-up.
  // Called before the first test in this test suite.
  // Can be omitted if not needed.
  static void SetUpTestSuite() {
    // Avoid reallocating static objects if called in subclasses of FooTest.
    if (shared_resource_ == nullptr) {
      shared_resource_ = new ...;
    }
  }

  // Per-test-suite tear-down.
  // Called after the last test in this test suite.
  // Can be omitted if not needed.
  static void TearDownTestSuite() {
    delete shared_resource_;
    shared_resource_ = nullptr;
  }

  // You can define per-test set-up logic as usual.
  void SetUp() override { ... }

  // You can define per-test tear-down logic as usual.
  void TearDown() override { ... }

  // Some expensive resource shared by all tests.
  static T* shared_resource_;
};

T* FooTest::shared_resource_ = nullptr;

TEST_F(FooTest, Test1) {
  ... you can refer to shared_resource_ here ...
}

TEST_F(FooTest, Test2) {
  ... you can refer to shared_resource_ here ...
}
```

{: .callout .note}
NOTE: Though the above code declares `SetUpTestSuite()` protected, it may
sometimes be necessary to declare it public, such as when using it with
`TEST_P`.

## Global Set-Up and Tear-Down

Just as you can do set-up and tear-down at the test level and the test suite
level, you can also do it at the test program level. Here's how.

First, you subclass the `::testing::Environment` class to define a test
environment, which knows how to set-up and tear-down:

```c++
class Environment : public ::testing::Environment {
 public:
  ~Environment() override {}

  // Override this to define how to set up the environment.
  void SetUp() override {}

  // Override this to define how to tear down the environment.
  void TearDown() override {}
};
```

Then, you register an instance of your environment class with googletest by
calling the `::testing::AddGlobalTestEnvironment()` function:

```c++
Environment* AddGlobalTestEnvironment(Environment* env);
```

Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of
each environment object, then runs the tests if none of the environments
reported fatal failures and `GTEST_SKIP()` was not called.
`RUN_ALL_TESTS()`
always calls `TearDown()` with each environment object, regardless of whether or
not the tests were run.

It's OK to register multiple environment objects. In this case, their `SetUp()`
will be called in the order they are registered, and their `TearDown()` will be
called in the reverse order.

Note that googletest takes ownership of the registered environment objects.
Therefore **do not delete them** by yourself.

You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called,
probably in `main()`. If you use `gtest_main`, you need to call this before
`main()` starts for it to take effect. One way to do this is to define a global
variable like this:

```c++
testing::Environment* const foo_env =
    testing::AddGlobalTestEnvironment(new FooEnvironment);
```

However, we strongly recommend that you write your own `main()` and call
`AddGlobalTestEnvironment()` there, as relying on the initialization of global
variables makes the code harder to read and may cause problems when you register
multiple environments from different translation units and the environments have
dependencies among them (remember that the compiler doesn't guarantee the order
in which global variables from different translation units are initialized).

## Value-Parameterized Tests

*Value-parameterized tests* allow you to test your code with different
parameters without writing multiple copies of the same test. This is useful in a
number of situations, for example:

* You have a piece of code whose behavior is affected by one or more
  command-line flags. You want to make sure your code performs correctly for
  various values of those flags.
* You want to test different implementations of an OO interface.
* You want to test your code over various inputs (a.k.a. data-driven testing).
1014 This feature is easy to abuse, so please exercise your good sense when doing 1015 it! 1016 1017### How to Write Value-Parameterized Tests 1018 1019To write value-parameterized tests, first you should define a fixture class. It 1020must be derived from both `testing::Test` and `testing::WithParamInterface<T>` 1021(the latter is a pure interface), where `T` is the type of your parameter 1022values. For convenience, you can just derive the fixture class from 1023`testing::TestWithParam<T>`, which itself is derived from both `testing::Test` 1024and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a 1025raw pointer, you are responsible for managing the lifespan of the pointed 1026values. 1027 1028{: .callout .note} 1029NOTE: If your test fixture defines `SetUpTestSuite()` or `TearDownTestSuite()` 1030they must be declared **public** rather than **protected** in order to use 1031`TEST_P`. 1032 1033```c++ 1034class FooTest : 1035 public testing::TestWithParam<const char*> { 1036 // You can implement all the usual fixture class members here. 1037 // To access the test parameter, call GetParam() from class 1038 // TestWithParam<T>. 1039}; 1040 1041// Or, when you want to add parameters to a pre-existing fixture class: 1042class BaseTest : public testing::Test { 1043 ... 1044}; 1045class BarTest : public BaseTest, 1046 public testing::WithParamInterface<const char*> { 1047 ... 1048}; 1049``` 1050 1051Then, use the `TEST_P` macro to define as many test patterns using this fixture 1052as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you 1053prefer to think. 1054 1055```c++ 1056TEST_P(FooTest, DoesBlah) { 1057 // Inside a test, access the test parameter with the GetParam() method 1058 // of the TestWithParam<T> class: 1059 EXPECT_TRUE(foo.Blah(GetParam())); 1060 ... 1061} 1062 1063TEST_P(FooTest, HasBlahBlah) { 1064 ... 
1065} 1066``` 1067 1068Finally, you can use the `INSTANTIATE_TEST_SUITE_P` macro to instantiate the 1069test suite with any set of parameters you want. GoogleTest defines a number of 1070functions for generating test parameters—see details at 1071[`INSTANTIATE_TEST_SUITE_P`](reference/testing.md#INSTANTIATE_TEST_SUITE_P) in 1072the Testing Reference. 1073 1074For example, the following statement will instantiate tests from the `FooTest` 1075test suite each with parameter values `"meeny"`, `"miny"`, and `"moe"` using the 1076[`Values`](reference/testing.md#param-generators) parameter generator: 1077 1078```c++ 1079INSTANTIATE_TEST_SUITE_P(MeenyMinyMoe, 1080 FooTest, 1081 testing::Values("meeny", "miny", "moe")); 1082``` 1083 1084{: .callout .note} 1085NOTE: The code above must be placed at global or namespace scope, not at 1086function scope. 1087 1088The first argument to `INSTANTIATE_TEST_SUITE_P` is a unique name for the 1089instantiation of the test suite. The next argument is the name of the test 1090pattern, and the last is the 1091[parameter generator](reference/testing.md#param-generators). 1092 1093You can instantiate a test pattern more than once, so to distinguish different 1094instances of the pattern, the instantiation name is added as a prefix to the 1095actual test suite name. Remember to pick unique prefixes for different 1096instantiations. The tests from the instantiation above will have these names: 1097 1098* `MeenyMinyMoe/FooTest.DoesBlah/0` for `"meeny"` 1099* `MeenyMinyMoe/FooTest.DoesBlah/1` for `"miny"` 1100* `MeenyMinyMoe/FooTest.DoesBlah/2` for `"moe"` 1101* `MeenyMinyMoe/FooTest.HasBlahBlah/0` for `"meeny"` 1102* `MeenyMinyMoe/FooTest.HasBlahBlah/1` for `"miny"` 1103* `MeenyMinyMoe/FooTest.HasBlahBlah/2` for `"moe"` 1104 1105You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests). 
1106 1107The following statement will instantiate all tests from `FooTest` again, each 1108with parameter values `"cat"` and `"dog"` using the 1109[`ValuesIn`](reference/testing.md#param-generators) parameter generator: 1110 1111```c++ 1112const char* pets[] = {"cat", "dog"}; 1113INSTANTIATE_TEST_SUITE_P(Pets, FooTest, testing::ValuesIn(pets)); 1114``` 1115 1116The tests from the instantiation above will have these names: 1117 1118* `Pets/FooTest.DoesBlah/0` for `"cat"` 1119* `Pets/FooTest.DoesBlah/1` for `"dog"` 1120* `Pets/FooTest.HasBlahBlah/0` for `"cat"` 1121* `Pets/FooTest.HasBlahBlah/1` for `"dog"` 1122 1123Please note that `INSTANTIATE_TEST_SUITE_P` will instantiate *all* tests in the 1124given test suite, whether their definitions come before or *after* the 1125`INSTANTIATE_TEST_SUITE_P` statement. 1126 1127Additionally, by default, every `TEST_P` without a corresponding 1128`INSTANTIATE_TEST_SUITE_P` causes a failing test in test suite 1129`GoogleTestVerification`. If you have a test suite where that omission is not an 1130error, for example it is in a library that may be linked in for other reasons or 1131where the list of test cases is dynamic and may be empty, then this check can be 1132suppressed by tagging the test suite: 1133 1134```c++ 1135GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(FooTest); 1136``` 1137 1138You can see [sample7_unittest.cc] and [sample8_unittest.cc] for more examples. 1139 1140[sample7_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample7_unittest.cc "Parameterized Test example" 1141[sample8_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample8_unittest.cc "Parameterized Test example with multiple parameters" 1142 1143### Creating Value-Parameterized Abstract Tests 1144 1145In the above, we define and instantiate `FooTest` in the *same* source file. 
1146Sometimes you may want to define value-parameterized tests in a library and let 1147other people instantiate them later. This pattern is known as *abstract tests*. 1148As an example of its application, when you are designing an interface you can 1149write a standard suite of abstract tests (perhaps using a factory function as 1150the test parameter) that all implementations of the interface are expected to 1151pass. When someone implements the interface, they can instantiate your suite to 1152get all the interface-conformance tests for free. 1153 1154To define abstract tests, you should organize your code like this: 1155 11561. Put the definition of the parameterized test fixture class (e.g. `FooTest`) 1157 in a header file, say `foo_param_test.h`. Think of this as *declaring* your 1158 abstract tests. 11592. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes 1160 `foo_param_test.h`. Think of this as *implementing* your abstract tests. 1161 1162Once they are defined, you can instantiate them by including `foo_param_test.h`, 1163invoking `INSTANTIATE_TEST_SUITE_P()`, and depending on the library target that 1164contains `foo_param_test.cc`. You can instantiate the same abstract test suite 1165multiple times, possibly in different source files. 1166 1167### Specifying Names for Value-Parameterized Test Parameters 1168 1169The optional last argument to `INSTANTIATE_TEST_SUITE_P()` allows the user to 1170specify a function or functor that generates custom test name suffixes based on 1171the test parameters. The function should accept one argument of type 1172`testing::TestParamInfo<class ParamType>`, and return `std::string`. 1173 1174`testing::PrintToStringParamName` is a builtin test suffix generator that 1175returns the value of `testing::PrintToString(GetParam())`. It does not work for 1176`std::string` or C strings. 
1177 1178{: .callout .note} 1179NOTE: test names must be non-empty, unique, and may only contain ASCII 1180alphanumeric characters. In particular, they 1181[should not contain underscores](faq.md#why-should-test-suite-names-and-test-names-not-contain-underscore) 1182 1183```c++ 1184class MyTestSuite : public testing::TestWithParam<int> {}; 1185 1186TEST_P(MyTestSuite, MyTest) 1187{ 1188 std::cout << "Example Test Param: " << GetParam() << std::endl; 1189} 1190 1191INSTANTIATE_TEST_SUITE_P(MyGroup, MyTestSuite, testing::Range(0, 10), 1192 testing::PrintToStringParamName()); 1193``` 1194 1195Providing a custom functor allows for more control over test parameter name 1196generation, especially for types where the automatic conversion does not 1197generate helpful parameter names (e.g. strings as demonstrated above). The 1198following example illustrates this for multiple parameters, an enumeration type 1199and a string, and also demonstrates how to combine generators. It uses a lambda 1200for conciseness: 1201 1202```c++ 1203enum class MyType { MY_FOO = 0, MY_BAR = 1 }; 1204 1205class MyTestSuite : public testing::TestWithParam<std::tuple<MyType, std::string>> { 1206}; 1207 1208INSTANTIATE_TEST_SUITE_P( 1209 MyGroup, MyTestSuite, 1210 testing::Combine( 1211 testing::Values(MyType::MY_FOO, MyType::MY_BAR), 1212 testing::Values("A", "B")), 1213 [](const testing::TestParamInfo<MyTestSuite::ParamType>& info) { 1214 std::string name = absl::StrCat( 1215 std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar", 1216 std::get<1>(info.param)); 1217 absl::c_replace_if(name, [](char c) { return !std::isalnum(c); }, '_'); 1218 return name; 1219 }); 1220``` 1221 1222## Typed Tests 1223 1224Suppose you have multiple implementations of the same interface and want to make 1225sure that all of them satisfy some common requirements. Or, you may have defined 1226several types that are supposed to conform to the same "concept" and you want to 1227verify it. 
In both cases, you want the same test logic repeated for different 1228types. 1229 1230While you can write one `TEST` or `TEST_F` for each type you want to test (and 1231you may even factor the test logic into a function template that you invoke from 1232the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n` 1233types, you'll end up writing `m*n` `TEST`s. 1234 1235*Typed tests* allow you to repeat the same test logic over a list of types. You 1236only need to write the test logic once, although you must know the type list 1237when writing typed tests. Here's how you do it: 1238 1239First, define a fixture class template. It should be parameterized by a type. 1240Remember to derive it from `::testing::Test`: 1241 1242```c++ 1243template <typename T> 1244class FooTest : public testing::Test { 1245 public: 1246 ... 1247 using List = std::list<T>; 1248 static T shared_; 1249 T value_; 1250}; 1251``` 1252 1253Next, associate a list of types with the test suite, which will be repeated for 1254each type in the list: 1255 1256```c++ 1257using MyTypes = ::testing::Types<char, int, unsigned int>; 1258TYPED_TEST_SUITE(FooTest, MyTypes); 1259``` 1260 1261The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_SUITE` 1262macro to parse correctly. Otherwise the compiler will think that each comma in 1263the type list introduces a new macro argument. 1264 1265Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this 1266test suite. You can repeat this as many times as you want: 1267 1268```c++ 1269TYPED_TEST(FooTest, DoesBlah) { 1270 // Inside a test, refer to the special name TypeParam to get the type 1271 // parameter. Since we are inside a derived class template, C++ requires 1272 // us to visit the members of FooTest via 'this'. 1273 TypeParam n = this->value_; 1274 1275 // To visit static members of the fixture, add the 'TestFixture::' 1276 // prefix. 
1277 n += TestFixture::shared_; 1278 1279 // To refer to typedefs in the fixture, add the 'typename TestFixture::' 1280 // prefix. The 'typename' is required to satisfy the compiler. 1281 typename TestFixture::List values; 1282 1283 values.push_back(n); 1284 ... 1285} 1286 1287TYPED_TEST(FooTest, HasPropertyA) { ... } 1288``` 1289 1290You can see [sample6_unittest.cc] for a complete example. 1291 1292[sample6_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample6_unittest.cc "Typed Test example" 1293 1294## Type-Parameterized Tests 1295 1296*Type-parameterized tests* are like typed tests, except that they don't require 1297you to know the list of types ahead of time. Instead, you can define the test 1298logic first and instantiate it with different type lists later. You can even 1299instantiate it more than once in the same program. 1300 1301If you are designing an interface or concept, you can define a suite of 1302type-parameterized tests to verify properties that any valid implementation of 1303the interface/concept should have. Then, the author of each implementation can 1304just instantiate the test suite with their type to verify that it conforms to 1305the requirements, without having to write similar tests repeatedly. Here's an 1306example: 1307 1308First, define a fixture class template, as we did with typed tests: 1309 1310```c++ 1311template <typename T> 1312class FooTest : public testing::Test { 1313 ... 1314}; 1315``` 1316 1317Next, declare that you will define a type-parameterized test suite: 1318 1319```c++ 1320TYPED_TEST_SUITE_P(FooTest); 1321``` 1322 1323Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat 1324this as many times as you want: 1325 1326```c++ 1327TYPED_TEST_P(FooTest, DoesBlah) { 1328 // Inside a test, refer to TypeParam to get the type parameter. 1329 TypeParam n = 0; 1330 ... 1331} 1332 1333TYPED_TEST_P(FooTest, HasPropertyA) { ... 
} 1334``` 1335 1336Now the tricky part: you need to register all test patterns using the 1337`REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first 1338argument of the macro is the test suite name; the rest are the names of the 1339tests in this test suite: 1340 1341```c++ 1342REGISTER_TYPED_TEST_SUITE_P(FooTest, 1343 DoesBlah, HasPropertyA); 1344``` 1345 1346Finally, you are free to instantiate the pattern with the types you want. If you 1347put the above code in a header file, you can `#include` it in multiple C++ 1348source files and instantiate it multiple times. 1349 1350```c++ 1351using MyTypes = ::testing::Types<char, int, unsigned int>; 1352INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes); 1353``` 1354 1355To distinguish different instances of the pattern, the first argument to the 1356`INSTANTIATE_TYPED_TEST_SUITE_P` macro is a prefix that will be added to the 1357actual test suite name. Remember to pick unique prefixes for different 1358instances. 1359 1360In the special case where the type list contains only one type, you can write 1361that type directly without `::testing::Types<...>`, like this: 1362 1363```c++ 1364INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int); 1365``` 1366 1367You can see [sample6_unittest.cc] for a complete example. 1368 1369## Testing Private Code 1370 1371If you change your software's internal implementation, your tests should not 1372break as long as the change is not observable by users. Therefore, **per the 1373black-box testing principle, most of the time you should test your code through 1374its public interfaces.** 1375 1376**If you still find yourself needing to test internal implementation code, 1377consider if there's a better design.** The desire to test internal 1378implementation is often a sign that the class is doing too much. Consider 1379extracting an implementation class, and testing it. Then use that implementation 1380class in the original class. 
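As a sketch of that refactoring (all class names here are invented for illustration, not taken from googletest): the hard-to-reach logic moves into a small implementation class that tests can exercise directly, while the original class simply delegates to it.

```c++
#include <string>

// Hypothetical extracted implementation class: small, public, and
// directly testable without touching Widget's private parts.
class WidgetNameValidator {
 public:
  // A name is valid if it is non-empty and contains no spaces.
  static bool IsValid(const std::string& name) {
    return !name.empty() && name.find(' ') == std::string::npos;
  }
};

// The original class keeps its public interface and delegates to the
// implementation class, so its observable behavior is unchanged.
class Widget {
 public:
  bool Rename(const std::string& name) {
    if (!WidgetNameValidator::IsValid(name)) return false;
    name_ = name;
    return true;
  }
  const std::string& name() const { return name_; }

 private:
  std::string name_;
};
```

A test can now target `WidgetNameValidator::IsValid()` directly instead of reaching into `Widget`'s private members.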
1381 1382If you absolutely have to test non-public interface code though, you can. There 1383are two cases to consider: 1384 1385* Static functions ( *not* the same as static member functions!) or unnamed 1386 namespaces, and 1387* Private or protected class members 1388 1389To test them, we use the following special techniques: 1390 1391* Both static functions and definitions/declarations in an unnamed namespace 1392 are only visible within the same translation unit. To test them, you can 1393 `#include` the entire `.cc` file being tested in your `*_test.cc` file. 1394 (#including `.cc` files is not a good way to reuse code - you should not do 1395 this in production code!) 1396 1397 However, a better approach is to move the private code into the 1398 `foo::internal` namespace, where `foo` is the namespace your project 1399 normally uses, and put the private declarations in a `*-internal.h` file. 1400 Your production `.cc` files and your tests are allowed to include this 1401 internal header, but your clients are not. This way, you can fully test your 1402 internal implementation without leaking it to your clients. 1403 1404* Private class members are only accessible from within the class or by 1405 friends. To access a class' private members, you can declare your test 1406 fixture as a friend to the class and define accessors in your fixture. Tests 1407 using the fixture can then access the private members of your production 1408 class via the accessors in the fixture. Note that even though your fixture 1409 is a friend to your production class, your tests are not automatically 1410 friends to it, as they are technically defined in sub-classes of the 1411 fixture. 1412 1413 Another way to test private members is to refactor them into an 1414 implementation class, which is then declared in a `*-internal.h` file. Your 1415 clients aren't allowed to include this header but your tests can. 
Such is 1416 called the 1417 [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/) 1418 (Private Implementation) idiom. 1419 1420 Or, you can declare an individual test as a friend of your class by adding 1421 this line in the class body: 1422 1423 ```c++ 1424 FRIEND_TEST(TestSuiteName, TestName); 1425 ``` 1426 1427 For example, 1428 1429 ```c++ 1430 // foo.h 1431 class Foo { 1432 ... 1433 private: 1434 FRIEND_TEST(FooTest, BarReturnsZeroOnNull); 1435 1436 int Bar(void* x); 1437 }; 1438 1439 // foo_test.cc 1440 ... 1441 TEST(FooTest, BarReturnsZeroOnNull) { 1442 Foo foo; 1443 EXPECT_EQ(foo.Bar(NULL), 0); // Uses Foo's private member Bar(). 1444 } 1445 ``` 1446 1447 Pay special attention when your class is defined in a namespace. If you want 1448 your test fixtures and tests to be friends of your class, then they must be 1449 defined in the exact same namespace (no anonymous or inline namespaces). 1450 1451 For example, if the code to be tested looks like: 1452 1453 ```c++ 1454 namespace my_namespace { 1455 1456 class Foo { 1457 friend class FooTest; 1458 FRIEND_TEST(FooTest, Bar); 1459 FRIEND_TEST(FooTest, Baz); 1460 ... definition of the class Foo ... 1461 }; 1462 1463 } // namespace my_namespace 1464 ``` 1465 1466 Your test code should be something like: 1467 1468 ```c++ 1469 namespace my_namespace { 1470 1471 class FooTest : public testing::Test { 1472 protected: 1473 ... 1474 }; 1475 1476 TEST_F(FooTest, Bar) { ... } 1477 TEST_F(FooTest, Baz) { ... } 1478 1479 } // namespace my_namespace 1480 ``` 1481 1482## "Catching" Failures 1483 1484If you are building a testing utility on top of googletest, you'll want to test 1485your utility. What framework would you use to test it? googletest, of course. 1486 1487The challenge is to verify that your testing utility reports failures correctly. 1488In frameworks that report a failure by throwing an exception, you could catch 1489the exception and assert on it. 
But googletest doesn't use exceptions, so how do
we test that a piece of code generates an expected failure?

`"gtest/gtest-spi.h"` contains some constructs to do this. After #including this
header, you can use

```c++
  EXPECT_FATAL_FAILURE(statement, substring);
```

to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
current thread whose message contains the given `substring`, or use

```c++
  EXPECT_NONFATAL_FAILURE(statement, substring);
```

if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.

Only failures in the current thread are checked to determine the result of this
type of expectation. If `statement` creates new threads, failures in these
threads are also ignored. If you want to catch failures in other threads as
well, use one of the following macros instead:

```c++
  EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
  EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```

{: .callout .note}
NOTE: Assertions from multiple threads are currently not supported on Windows.

For technical reasons, there are some caveats:

1. You cannot stream a failure message to either macro.

2. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
   local non-static variables or non-static members of `this` object.

3. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a
   value.

## Registering tests programmatically

The `TEST` macros handle the vast majority of all use cases, but there are a few
where runtime registration logic is required. For those cases, the framework
provides the `::testing::RegisterTest` function, which allows callers to
register arbitrary tests dynamically.

This is an advanced API only to be used when the `TEST` macros are insufficient.
The macros should be preferred when possible, as they avoid most of the
complexity of calling this function.

It provides the following signature:

```c++
template <typename Factory>
TestInfo* RegisterTest(const char* test_suite_name, const char* test_name,
                       const char* type_param, const char* value_param,
                       const char* file, int line, Factory factory);
```

The `factory` argument is a factory callable (move-constructible) object or
function pointer that creates a new instance of the Test object. It hands
ownership of the created object to the caller. The signature of the callable is
`Fixture*()`, where `Fixture` is the test fixture class for the test. All tests
registered with the same `test_suite_name` must return the same fixture type.
This is checked at runtime.

The framework will infer the fixture class from the factory and will call the
`SetUpTestSuite` and `TearDownTestSuite` methods for it.

`RegisterTest` must be called before `RUN_ALL_TESTS()` is invoked, otherwise the
behavior is undefined.

Use case example:

```c++
class MyFixture : public testing::Test {
 public:
  // All of these are optional, just like in regular macro usage.
  static void SetUpTestSuite() { ... }
  static void TearDownTestSuite() { ... }
  void SetUp() override { ... }
  void TearDown() override { ... }
};

class MyTest : public MyFixture {
 public:
  explicit MyTest(int data) : data_(data) {}
  void TestBody() override { ... }

 private:
  int data_;
};

void RegisterMyTests(const std::vector<int>& values) {
  for (int v : values) {
    testing::RegisterTest(
        "MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr,
        std::to_string(v).c_str(),
        __FILE__, __LINE__,
        // Important to use the fixture type as the return type here.
        [=]() -> MyFixture* { return new MyTest(v); });
  }
}
...
1596int main(int argc, char** argv) { 1597 std::vector<int> values_to_test = LoadValuesFromConfig(); 1598 RegisterMyTests(values_to_test); 1599 ... 1600 return RUN_ALL_TESTS(); 1601} 1602``` 1603 1604## Getting the Current Test's Name 1605 1606Sometimes a function may need to know the name of the currently running test. 1607For example, you may be using the `SetUp()` method of your test fixture to set 1608the golden file name based on which test is running. The 1609[`TestInfo`](reference/testing.md#TestInfo) class has this information. 1610 1611To obtain a `TestInfo` object for the currently running test, call 1612`current_test_info()` on the [`UnitTest`](reference/testing.md#UnitTest) 1613singleton object: 1614 1615```c++ 1616 // Gets information about the currently running test. 1617 // Do NOT delete the returned object - it's managed by the UnitTest class. 1618 const testing::TestInfo* const test_info = 1619 testing::UnitTest::GetInstance()->current_test_info(); 1620 1621 printf("We are in test %s of test suite %s.\n", 1622 test_info->name(), 1623 test_info->test_suite_name()); 1624``` 1625 1626`current_test_info()` returns a null pointer if no test is running. In 1627particular, you cannot find the test suite name in `SetUpTestSuite()`, 1628`TearDownTestSuite()` (where you know the test suite name implicitly), or 1629functions called from them. 1630 1631## Extending googletest by Handling Test Events 1632 1633googletest provides an **event listener API** to let you receive notifications 1634about the progress of a test program and test failures. The events you can 1635listen to include the start and end of the test program, a test suite, or a test 1636method, among others. You may use this API to augment or replace the standard 1637console output, replace the XML output, or provide a completely different form 1638of output, such as a GUI or a database. You can also use test events as 1639checkpoints to implement a resource leak checker, for example. 

### Defining Event Listeners

To define an event listener, you subclass either
[`testing::TestEventListener`](reference/testing.md#TestEventListener) or
[`testing::EmptyTestEventListener`](reference/testing.md#EmptyTestEventListener).
The former is an (abstract) interface, where *each pure virtual method can be
overridden to handle a test event* (for example, when a test starts, the
`OnTestStart()` method will be called). The latter provides an empty
implementation of all methods in the interface, such that a subclass only needs
to override the methods it cares about.

When an event is fired, its context is passed to the handler function as an
argument. The following argument types are used:

* `UnitTest` reflects the state of the entire test program,
* `TestSuite` has information about a test suite, which can contain one or more
  tests,
* `TestInfo` contains the state of a test, and
* `TestPartResult` represents the result of a test assertion.

An event handler function can examine the argument it receives to find out
interesting information about the event and the test program's state.

Here's an example:

```c++
  class MinimalistPrinter : public testing::EmptyTestEventListener {
    // Called before a test starts.
    void OnTestStart(const testing::TestInfo& test_info) override {
      printf("*** Test %s.%s starting.\n",
             test_info.test_suite_name(), test_info.name());
    }

    // Called after a failed assertion or a SUCCEED().
    void OnTestPartResult(const testing::TestPartResult& test_part_result) override {
      printf("%s in %s:%d\n%s\n",
             test_part_result.failed() ? "*** Failure" : "Success",
             test_part_result.file_name(),
             test_part_result.line_number(),
             test_part_result.summary());
    }

    // Called after a test ends.
    void OnTestEnd(const testing::TestInfo& test_info) override {
      printf("*** Test %s.%s ending.\n",
             test_info.test_suite_name(), test_info.name());
    }
  };
```

### Using Event Listeners

To use the event listener you have defined, add an instance of it to the
googletest event listener list (represented by the class
[`TestEventListeners`](reference/testing.md#TestEventListeners) - note the "s"
at the end of the name) in your `main()` function, before calling
`RUN_ALL_TESTS()`:

```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // Gets hold of the event listener list.
  testing::TestEventListeners& listeners =
      testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end. googletest takes ownership of it.
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
}
```

There's only one problem: the default test result printer is still in effect, so
its output will mingle with the output from your minimalist printer. To suppress
the default printer, just release it from the event listener list and delete it.
You can do so by adding one line:

```c++
  ...
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
```

Now, sit back and enjoy a completely different output from your tests. For more
details, see [sample9_unittest.cc].

[sample9_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample9_unittest.cc "Event listener example"

You may append more than one listener to the list.
When an `On*Start()` or
`OnTestPartResult()` event is fired, the listeners will receive it in the order
they appear in the list (since new listeners are added to the end of the list,
the default text printer and the default XML generator will receive the event
first). An `On*End()` event will be received by the listeners in the *reverse*
order. This allows output by listeners added later to be framed by output from
listeners added earlier.

### Generating Failures in Listeners

You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc.)
when processing an event. There are some restrictions:

1. You cannot generate any failure in `OnTestPartResult()` (otherwise it will
   cause `OnTestPartResult()` to be called recursively).
2. A listener that handles `OnTestPartResult()` is not allowed to generate any
   failure.

When you add listeners to the listener list, you should put listeners that
handle `OnTestPartResult()` *before* listeners that can generate failures. This
ensures that failures generated by the latter are attributed to the right test
by the former.

See [sample10_unittest.cc] for an example of a failure-raising listener.

[sample10_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample10_unittest.cc "Failure-raising listener example"

## Running Test Programs: Advanced Options

googletest test programs are ordinary executables. Once built, you can run them
directly and affect their behavior via the following environment variables
and/or command line flags. For the flags to work, your programs must call
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.

To see a list of supported flags and their usage, please run your test program
with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.

If an option is specified both by an environment variable and by a flag, the
latter takes precedence.

### Selecting Tests

#### Listing Test Names

Sometimes it is necessary to list the available tests in a program before
running them so that a filter may be applied if needed. Including the flag
`--gtest_list_tests` overrides all other flags and lists tests in the following
format:

```none
TestSuite1.
  TestName1
  TestName2
TestSuite2.
  TestName
```

None of the tests listed are actually run if the flag is provided. There is no
corresponding environment variable for this flag.

#### Running a Subset of the Tests

By default, a googletest program runs all tests the user has defined. Sometimes,
you want to run only a subset of the tests (e.g. for debugging or quickly
verifying a change). If you set the `GTEST_FILTER` environment variable or the
`--gtest_filter` flag to a filter string, googletest will only run the tests
whose full names (in the form of `TestSuiteName.TestName`) match the filter.

The format of a filter is a '`:`'-separated list of wildcard patterns (called
the *positive patterns*) optionally followed by a '`-`' and another
'`:`'-separated pattern list (called the *negative patterns*). A test matches
the filter if and only if it matches any of the positive patterns but does not
match any of the negative patterns.

A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
character). For convenience, the filter `'*-NegativePatterns'` can also be
written as `'-NegativePatterns'`.

For example:

* `./foo_test` Has no flag, and thus runs all its tests.
* `./foo_test --gtest_filter=*` Also runs everything, due to the single
  match-everything `*` value.
* `./foo_test --gtest_filter=FooTest.*` Runs everything in test suite
  `FooTest`.
* `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
  name contains either `"Null"` or `"Constructor"`.
* `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
* `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
  suite `FooTest` except `FooTest.Bar`.
* `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs
  everything in test suite `FooTest` except `FooTest.Bar` and everything in
  test suite `BarTest` except `BarTest.Foo`.

#### Stop test execution upon first failure

By default, a googletest program runs all tests the user has defined. In some
cases (e.g. iterative test development & execution) it may be desirable to stop
test execution upon the first failure (trading improved latency for
completeness). If the `GTEST_FAIL_FAST` environment variable or the
`--gtest_fail_fast` flag is set, the test runner will stop execution as soon as
the first test failure is found.

#### Temporarily Disabling Tests

If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).

If you need to disable all tests in a test suite, you can either add `DISABLED_`
to the front of the name of each test, or alternatively add it to the front of
the test suite name.

For example, the following tests won't be run by googletest, even though they
will still be compiled:

```c++
// Tests that Foo does Abc.
TEST(FooTest, DISABLED_DoesAbc) { ... }

class DISABLED_BarTest : public testing::Test { ... };

// Tests that Bar does Xyz.
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
```

{: .callout .note}
NOTE: This feature should only be used for temporary pain-relief.
You still have 1856to fix the disabled tests at a later date. As a reminder, googletest will print 1857a banner warning you if a test program contains any disabled tests. 1858 1859{: .callout .tip} 1860TIP: You can easily count the number of disabled tests you have using 1861`grep`. This number can be used as a metric for 1862improving your test quality. 1863 1864#### Temporarily Enabling Disabled Tests 1865 1866To include disabled tests in test execution, just invoke the test program with 1867the `--gtest_also_run_disabled_tests` flag or set the 1868`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`. 1869You can combine this with the `--gtest_filter` flag to further select which 1870disabled tests to run. 1871 1872### Repeating the Tests 1873 1874Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it 1875will fail only 1% of the time, making it rather hard to reproduce the bug under 1876a debugger. This can be a major source of frustration. 1877 1878The `--gtest_repeat` flag allows you to repeat all (or selected) test methods in 1879a program many times. Hopefully, a flaky test will eventually fail and give you 1880a chance to debug. Here's how to use it: 1881 1882```none 1883$ foo_test --gtest_repeat=1000 1884Repeat foo_test 1000 times and don't stop at failures. 1885 1886$ foo_test --gtest_repeat=-1 1887A negative count means repeating forever. 1888 1889$ foo_test --gtest_repeat=1000 --gtest_break_on_failure 1890Repeat foo_test 1000 times, stopping at the first failure. This 1891is especially useful when running under a debugger: when the test 1892fails, it will drop into the debugger and you can then inspect 1893variables and stacks. 1894 1895$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.* 1896Repeat the tests whose name matches the filter 1000 times. 
```

If your test program contains
[global set-up/tear-down](#global-set-up-and-tear-down) code, it will be
repeated in each iteration as well, as the flakiness may be in it. You can also
specify the repeat count by setting the `GTEST_REPEAT` environment variable.

### Shuffling the Tests

You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
environment variable to `1`) to run the tests in a program in a random order.
This helps to reveal bad dependencies between tests.

By default, googletest uses a random seed calculated from the current time.
Therefore you'll get a different order every time. The console output includes
the random seed value, such that you can reproduce an order-related test failure
later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED`
flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is an
integer in the range [0, 99999]. The seed value 0 is special: it tells
googletest to do the default behavior of calculating the seed from the current
time.

If you combine this with `--gtest_repeat=N`, googletest will pick a different
random seed and re-shuffle the tests in each iteration.

### Controlling Test Output

#### Colored Terminal Output

googletest can use colors in its terminal output to make it easier to spot the
important information:

<pre>...
<font color="green">[----------]</font> 1 test from FooTest
<font color="green">[ RUN      ]</font> FooTest.DoesAbc
<font color="green">[       OK ]</font> FooTest.DoesAbc
<font color="green">[----------]</font> 2 tests from BarTest
<font color="green">[ RUN      ]</font> BarTest.HasXyzProperty
<font color="green">[       OK ]</font> BarTest.HasXyzProperty
<font color="green">[ RUN      ]</font> BarTest.ReturnsTrueOnSuccess
... some error messages ...
<font color="red">[  FAILED  ]</font> BarTest.ReturnsTrueOnSuccess
...
<font color="green">[==========]</font> 30 tests from 14 test suites ran.
<font color="green">[  PASSED  ]</font> 28 tests.
<font color="red">[  FAILED  ]</font> 2 tests, listed below:
<font color="red">[  FAILED  ]</font> BarTest.ReturnsTrueOnSuccess
<font color="red">[  FAILED  ]</font> AnotherTest.DoesXyz

 2 FAILED TESTS
</pre>

You can set the `GTEST_COLOR` environment variable or the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let googletest decide. When the value is `auto`, googletest
will use colors if and only if the output goes to a terminal and (on non-Windows
platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.

#### Suppressing test passes

By default, googletest prints 1 line of output for each test, indicating if it
passed or failed. To show only test failures, run the test program with
`--gtest_brief=1`, or set the `GTEST_BRIEF` environment variable to `1`.

#### Suppressing the Elapsed Time

By default, googletest prints the time it takes to run each test. To disable
that, run the test program with the `--gtest_print_time=0` command line flag, or
set the `GTEST_PRINT_TIME` environment variable to `0`.

#### Suppressing UTF-8 Text Output

In case of assertion failures, googletest prints expected and actual values of
type `string` both as hex-encoded strings and as readable UTF-8 text, if they
contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8
text because, for example, you don't have a UTF-8-compatible output medium, run
the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8`
environment variable to `0`.

#### Generating an XML Report

googletest can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can help
you identify slow tests.

To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will
create the file at the given location. You can also just use the string `"xml"`,
in which case the output can be found in the `test_detail.xml` file in the
current directory.

If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), googletest will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
over from a previous run), googletest will pick a different name (e.g.
`foo_test_1.xml`) to avoid overwriting it.

The report is based on the `junitreport` Ant task. Since that format was
originally intended for Java, a little interpretation is required to make it
apply to googletest tests, as shown here:

```xml
<testsuites name="AllTests" ...>
  <testsuite name="test_suite_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```

* The root `<testsuites>` element corresponds to the entire test program.
* `<testsuite>` elements correspond to googletest test suites.
* `<testcase>` elements correspond to googletest test functions.

For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015">
    <testcase name="Addition" status="run" time="0.007" classname="">
      <failure message="Value of: add(1, 1)&#x0A;  Actual: 3&#x0A;Expected: 2" type="">...</failure>
      <failure message="Value of: add(1, -1)&#x0A;  Actual: 1&#x0A;Expected: 0" type="">...</failure>
    </testcase>
    <testcase name="Subtraction" status="run" time="0.005" classname="">
    </testcase>
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005">
    <testcase name="NonContradiction" status="run" time="0.005" classname="">
    </testcase>
  </testsuite>
</testsuites>
```

Things to note:

* The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how
  many test functions the googletest program or test suite contains, while the
  `failures` attribute tells how many of them failed.

* The `time` attribute expresses the duration of the test, test suite, or
  entire test program in seconds.

* The `timestamp` attribute records the local date and time of the test
  execution.

* Each `<failure>` element corresponds to a single failed googletest
  assertion.

#### Generating a JSON Report

googletest can also emit a JSON report as an alternative format to XML. To
generate the JSON report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"json:path_to_output_file"`, which will
create the file at the given location. You can also just use the string
`"json"`, in which case the output can be found in the `test_detail.json` file
in the current directory.
2066 2067The report format conforms to the following JSON Schema: 2068 2069```json 2070{ 2071 "$schema": "http://json-schema.org/schema#", 2072 "type": "object", 2073 "definitions": { 2074 "TestCase": { 2075 "type": "object", 2076 "properties": { 2077 "name": { "type": "string" }, 2078 "tests": { "type": "integer" }, 2079 "failures": { "type": "integer" }, 2080 "disabled": { "type": "integer" }, 2081 "time": { "type": "string" }, 2082 "testsuite": { 2083 "type": "array", 2084 "items": { 2085 "$ref": "#/definitions/TestInfo" 2086 } 2087 } 2088 } 2089 }, 2090 "TestInfo": { 2091 "type": "object", 2092 "properties": { 2093 "name": { "type": "string" }, 2094 "status": { 2095 "type": "string", 2096 "enum": ["RUN", "NOTRUN"] 2097 }, 2098 "time": { "type": "string" }, 2099 "classname": { "type": "string" }, 2100 "failures": { 2101 "type": "array", 2102 "items": { 2103 "$ref": "#/definitions/Failure" 2104 } 2105 } 2106 } 2107 }, 2108 "Failure": { 2109 "type": "object", 2110 "properties": { 2111 "failures": { "type": "string" }, 2112 "type": { "type": "string" } 2113 } 2114 } 2115 }, 2116 "properties": { 2117 "tests": { "type": "integer" }, 2118 "failures": { "type": "integer" }, 2119 "disabled": { "type": "integer" }, 2120 "errors": { "type": "integer" }, 2121 "timestamp": { 2122 "type": "string", 2123 "format": "date-time" 2124 }, 2125 "time": { "type": "string" }, 2126 "name": { "type": "string" }, 2127 "testsuites": { 2128 "type": "array", 2129 "items": { 2130 "$ref": "#/definitions/TestCase" 2131 } 2132 } 2133 } 2134} 2135``` 2136 2137The report uses the format that conforms to the following Proto3 using the 2138[JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json): 2139 2140```proto 2141syntax = "proto3"; 2142 2143package googletest; 2144 2145import "google/protobuf/timestamp.proto"; 2146import "google/protobuf/duration.proto"; 2147 2148message UnitTest { 2149 int32 tests = 1; 2150 int32 failures = 2; 2151 int32 disabled = 3; 2152 int32 
errors = 4; 2153 google.protobuf.Timestamp timestamp = 5; 2154 google.protobuf.Duration time = 6; 2155 string name = 7; 2156 repeated TestCase testsuites = 8; 2157} 2158 2159message TestCase { 2160 string name = 1; 2161 int32 tests = 2; 2162 int32 failures = 3; 2163 int32 disabled = 4; 2164 int32 errors = 5; 2165 google.protobuf.Duration time = 6; 2166 repeated TestInfo testsuite = 7; 2167} 2168 2169message TestInfo { 2170 string name = 1; 2171 enum Status { 2172 RUN = 0; 2173 NOTRUN = 1; 2174 } 2175 Status status = 2; 2176 google.protobuf.Duration time = 3; 2177 string classname = 4; 2178 message Failure { 2179 string failures = 1; 2180 string type = 2; 2181 } 2182 repeated Failure failures = 5; 2183} 2184``` 2185 2186For instance, the following program 2187 2188```c++ 2189TEST(MathTest, Addition) { ... } 2190TEST(MathTest, Subtraction) { ... } 2191TEST(LogicTest, NonContradiction) { ... } 2192``` 2193 2194could generate this report: 2195 2196```json 2197{ 2198 "tests": 3, 2199 "failures": 1, 2200 "errors": 0, 2201 "time": "0.035s", 2202 "timestamp": "2011-10-31T18:52:42Z", 2203 "name": "AllTests", 2204 "testsuites": [ 2205 { 2206 "name": "MathTest", 2207 "tests": 2, 2208 "failures": 1, 2209 "errors": 0, 2210 "time": "0.015s", 2211 "testsuite": [ 2212 { 2213 "name": "Addition", 2214 "status": "RUN", 2215 "time": "0.007s", 2216 "classname": "", 2217 "failures": [ 2218 { 2219 "message": "Value of: add(1, 1)\n Actual: 3\nExpected: 2", 2220 "type": "" 2221 }, 2222 { 2223 "message": "Value of: add(1, -1)\n Actual: 1\nExpected: 0", 2224 "type": "" 2225 } 2226 ] 2227 }, 2228 { 2229 "name": "Subtraction", 2230 "status": "RUN", 2231 "time": "0.005s", 2232 "classname": "" 2233 } 2234 ] 2235 }, 2236 { 2237 "name": "LogicTest", 2238 "tests": 1, 2239 "failures": 0, 2240 "errors": 0, 2241 "time": "0.005s", 2242 "testsuite": [ 2243 { 2244 "name": "NonContradiction", 2245 "status": "RUN", 2246 "time": "0.005s", 2247 "classname": "" 2248 } 2249 ] 2250 } 2251 ] 2252} 2253``` 2254 
{: .callout .important}
IMPORTANT: The exact format of the JSON document is subject to change.

### Controlling How Failures Are Reported

#### Detecting Test Premature Exit

Google Test implements the _premature-exit-file_ protocol so that test runners
can catch any kind of unexpected exit of a test program. Upon start, Google Test
creates the file, and it automatically deletes the file after all work has
finished. The test runner can then check whether the file still exists: if it
does, the inspected test program exited prematurely.

This feature is enabled only if the `TEST_PREMATURE_EXIT_FILE` environment
variable has been set.

#### Turning Assertion Failures into Break-Points

When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. googletest's *break-on-failure* mode supports this behavior.

To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
command line flag.

#### Disabling Catching Test-Thrown Exceptions

googletest can be used either with or without exceptions enabled. If a test
throws a C++ exception or (on Windows) a structured exception (SEH), by default
googletest catches it, reports it as a test failure, and continues with the next
test method. This maximizes the coverage of a test run. Also, on Windows an
uncaught exception will cause a pop-up window, so catching the exceptions allows
you to run the tests automatically.

When debugging the test failures, however, you may instead want the exceptions
to be handled by the debugger, such that you can examine the call stack when an
exception is thrown.
To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when
running the tests.

### Sanitizer Integration

The
[Undefined Behavior Sanitizer](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html),
[Address Sanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer),
and
[Thread Sanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)
all provide weak functions that you can override to trigger explicit failures
when they detect sanitizer errors, such as creating a reference from `nullptr`.
To override these functions, place definitions for them in a source file that
you compile as part of your main binary:

```c++
extern "C" {
void __ubsan_on_report() {
  FAIL() << "Encountered an undefined behavior sanitizer error";
}
void __asan_on_error() {
  FAIL() << "Encountered an address sanitizer error";
}
void __tsan_on_report() {
  FAIL() << "Encountered a thread sanitizer error";
}
}  // extern "C"
```

After compiling your project with one of the sanitizers enabled, if a particular
test triggers a sanitizer error, googletest will report that it failed.