# Benchmark

[![build-and-test](https://github.com/google/benchmark/workflows/build-and-test/badge.svg)](https://github.com/google/benchmark/actions?query=workflow%3Abuild-and-test)
[![bazel](https://github.com/google/benchmark/actions/workflows/bazel.yml/badge.svg)](https://github.com/google/benchmark/actions/workflows/bazel.yml)
[![pylint](https://github.com/google/benchmark/workflows/pylint/badge.svg)](https://github.com/google/benchmark/actions?query=workflow%3Apylint)
[![test-bindings](https://github.com/google/benchmark/workflows/test-bindings/badge.svg)](https://github.com/google/benchmark/actions?query=workflow%3Atest-bindings)

[![Build Status](https://travis-ci.org/google/benchmark.svg?branch=master)](https://travis-ci.org/google/benchmark)
[![Coverage Status](https://coveralls.io/repos/google/benchmark/badge.svg)](https://coveralls.io/r/google/benchmark)

A library to benchmark code snippets, similar to unit tests. Example:

```c++
#include <benchmark/benchmark.h>

static void BM_SomeFunction(benchmark::State& state) {
  // Perform setup here
  for (auto _ : state) {
    // This code gets timed
    SomeFunction();
  }
}
// Register the function as a benchmark
BENCHMARK(BM_SomeFunction);
// Run the benchmark
BENCHMARK_MAIN();
```

To get started, see [Requirements](#requirements) and
[Installation](#installation). See [Usage](#usage) for a full example and the
[User Guide](#user-guide) for a more comprehensive feature overview.

It may also help to read the [Google Test documentation](https://github.com/google/googletest/blob/master/docs/primer.md)
as some of the structural aspects of the APIs are similar.

### Resources

[Discussion group](https://groups.google.com/d/forum/benchmark-discuss)

IRC channels:
* [libera](https://libera.chat) #benchmark

[Additional Tooling Documentation](docs/tools.md)

[Assembly Testing Documentation](docs/AssemblyTests.md)

## Requirements

The library can be used with C++03. However, it requires C++11 to build,
including compiler and standard library support.

The following minimum versions are required to build the library:

* GCC 4.8
* Clang 3.4
* Visual Studio 14 2015
* Intel 2015 Update 1

See [Platform-Specific Build Instructions](#platform-specific-build-instructions).

## Installation

This describes the installation process using cmake. As prerequisites, you'll
need git and cmake installed.

_See [dependencies.md](dependencies.md) for more details regarding supported
versions of build tools._

```bash
# Check out the library.
$ git clone https://github.com/google/benchmark.git
# Benchmark requires Google Test as a dependency. Add the source tree as a subdirectory.
$ git clone https://github.com/google/googletest.git benchmark/googletest
# Go to the library root directory
$ cd benchmark
# Make a build directory to place the build output.
$ cmake -E make_directory "build"
# Generate build system files with cmake.
$ cmake -E chdir "build" cmake -DCMAKE_BUILD_TYPE=Release ../
# or, starting with CMake 3.13, use a simpler form:
# cmake -DCMAKE_BUILD_TYPE=Release -S . -B "build"
# Build the library.
$ cmake --build "build" --config Release
```

This builds the `benchmark` and `benchmark_main` libraries and tests.
On a unix system, the build directory should now look something like this:

```
/benchmark
  /build
    /src
      /libbenchmark.a
      /libbenchmark_main.a
    /test
      ...
```

Next, you can run the tests to check the build.

```bash
$ cmake -E chdir "build" ctest --build-config Release
```

If you want to install the library globally, also run:

```bash
sudo cmake --build "build" --config Release --target install
```

Note that Google Benchmark requires Google Test to build and run the tests. This
dependency can be provided two ways:

* Check out the Google Test sources into `benchmark/googletest` as above.
* Otherwise, if `-DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON` is specified during
  configuration, the library will automatically download and build any required
  dependencies.

If you do not wish to build and run the tests, add `-DBENCHMARK_ENABLE_GTEST_TESTS=OFF`
to `CMAKE_ARGS`.

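For example, either of the following configuration commands handles the Google
Test dependency without a manual checkout (a sketch; pick whichever fits your
setup):

```bash
# Let the build download and build Google Test itself:
$ cmake -DCMAKE_BUILD_TYPE=Release -DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON -S . -B "build"
# Or skip the tests entirely, so Google Test is not needed at all:
$ cmake -DCMAKE_BUILD_TYPE=Release -DBENCHMARK_ENABLE_GTEST_TESTS=OFF -S . -B "build"
```
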
### Debug vs Release

By default, benchmark builds as a debug library. You will see a warning in the
output when this is the case. To build it as a release library instead, add
`-DCMAKE_BUILD_TYPE=Release` when generating the build system files, as shown
above. The use of `--config Release` in build commands is needed to properly
support multi-configuration tools (like Visual Studio for example) and can be
skipped for other build systems (like Makefile).

To enable link-time optimisation, also add `-DBENCHMARK_ENABLE_LTO=true` when
generating the build system files.

If you are using gcc, you might need to set the `GCC_AR` and `GCC_RANLIB` cmake
cache variables if autodetection fails.

If you are using clang, you may need to set the `LLVMAR_EXECUTABLE`,
`LLVMNM_EXECUTABLE` and `LLVMRANLIB_EXECUTABLE` cmake cache variables.

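For instance, a release configuration with LTO under clang might look like the
following sketch; the `llvm-*` tool paths are assumptions about your toolchain
and may need adjusting:

```bash
$ cmake -DCMAKE_BUILD_TYPE=Release -DBENCHMARK_ENABLE_LTO=true \
    -DLLVMAR_EXECUTABLE=$(command -v llvm-ar) \
    -DLLVMNM_EXECUTABLE=$(command -v llvm-nm) \
    -DLLVMRANLIB_EXECUTABLE=$(command -v llvm-ranlib) \
    -S . -B "build"
```
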
### Stable and Experimental Library Versions

The main branch contains the latest stable version of the benchmarking library;
its API can be considered largely stable, with source-breaking changes
being made only upon the release of a new major version.

Newer, experimental features are implemented and tested on the
[`v2` branch](https://github.com/google/benchmark/tree/v2). Users who wish
to use, test, and provide feedback on the new features are encouraged to try
this branch. However, this branch provides no stability guarantees and reserves
the right to change and break the API at any time.

## Usage

### Basic usage

Define a function that executes the code to measure, register it as a benchmark
function using the `BENCHMARK` macro, and ensure an appropriate `main` function
is available:

```c++
#include <benchmark/benchmark.h>

static void BM_StringCreation(benchmark::State& state) {
  for (auto _ : state)
    std::string empty_string;
}
// Register the function as a benchmark
BENCHMARK(BM_StringCreation);

// Define another benchmark
static void BM_StringCopy(benchmark::State& state) {
  std::string x = "hello";
  for (auto _ : state)
    std::string copy(x);
}
BENCHMARK(BM_StringCopy);

BENCHMARK_MAIN();
```

To run the benchmark, compile and link against the `benchmark` library
(libbenchmark.a/.so). If you followed the build steps above, this library will
be under the build directory you created.

```bash
# Example on linux after running the build steps above. Assumes the
# `benchmark` and `build` directories are under the current directory.
$ g++ mybenchmark.cc -std=c++11 -isystem benchmark/include \
  -Lbenchmark/build/src -lbenchmark -lpthread -o mybenchmark
```

Alternatively, link against the `benchmark_main` library and remove
`BENCHMARK_MAIN();` above to get the same behavior.

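In that case the link line changes only slightly. A minimal sketch, assuming
the same directory layout as above (note that `-lbenchmark_main` should come
before `-lbenchmark` so the linker can resolve its references):

```bash
$ g++ mybenchmark.cc -std=c++11 -isystem benchmark/include \
  -Lbenchmark/build/src -lbenchmark_main -lbenchmark -lpthread -o mybenchmark
```
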
The compiled executable will run all benchmarks by default. Pass the `--help`
flag for option information or see the guide below.

### Usage with CMake

If using CMake, it is recommended to link against the project-provided
`benchmark::benchmark` and `benchmark::benchmark_main` targets using
`target_link_libraries`.
It is possible to use `find_package` to import an installed version of the
library.

```cmake
find_package(benchmark REQUIRED)
```

Alternatively, `add_subdirectory` will incorporate the library directly into
one's CMake project.

```cmake
add_subdirectory(benchmark)
```

Either way, link to the library as follows.

```cmake
target_link_libraries(MyTarget benchmark::benchmark)
```

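Putting those pieces together, a minimal `CMakeLists.txt` might look like the
following sketch, where `my_benchmarks`, `MyTarget` and `mybenchmark.cc` are
placeholder names:

```cmake
cmake_minimum_required(VERSION 3.10)
project(my_benchmarks CXX)

# Import an installed copy of the library.
find_package(benchmark REQUIRED)

add_executable(MyTarget mybenchmark.cc)
target_link_libraries(MyTarget benchmark::benchmark)
```
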
## Platform Specific Build Instructions

### Building with GCC

When the library is built using GCC it is necessary to link with the pthread
library due to how GCC implements `std::thread`. Failing to link to pthread will
lead to runtime exceptions (unless you're using libc++), not linker errors. See
[issue #67](https://github.com/google/benchmark/issues/67) for more details. You
can link to pthread by adding `-pthread` to your linker command. Note, you can
also use `-lpthread`, but there are potential issues with ordering of command
line parameters if you use that.

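For example, the compile command from [Basic usage](#basic-usage) with
`-pthread` in place of `-lpthread` (a sketch under the same directory-layout
assumptions):

```bash
$ g++ mybenchmark.cc -std=c++11 -pthread -isystem benchmark/include \
  -Lbenchmark/build/src -lbenchmark -o mybenchmark
```
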
### Building with Visual Studio 2015 or 2017

The `shlwapi` library (`-lshlwapi`) is required to support a call to `CPUInfo` which reads the registry. Either add `shlwapi.lib` under `[ Configuration Properties > Linker > Input ]`, or use the following:

```c++
// Alternatively, can add libraries using linker options.
#ifdef _WIN32
#pragma comment ( lib, "Shlwapi.lib" )
#ifdef _DEBUG
#pragma comment ( lib, "benchmarkd.lib" )
#else
#pragma comment ( lib, "benchmark.lib" )
#endif
#endif
```

You can also use the graphical version of CMake:
* Open `CMake GUI`.
* Under `Where to build the binaries`, use the same path as the source plus `build`.
* Under `CMAKE_INSTALL_PREFIX`, use the same path as the source plus `install`.
* Click `Configure`, `Generate`, `Open Project`.
* If the build fails, try deleting the entire directory and starting again, or untick options to build less.

### Building with Intel 2015 Update 1 or Intel System Studio Update 4

See the instructions for building with Visual Studio. Once built, right-click on the solution and change the build to Intel.

### Building on Solaris

If you're running benchmarks on Solaris, you'll want the kstat library linked in
too (`-lkstat`).

## User Guide

### Command Line

[Output Formats](#output-formats)

[Output Files](#output-files)

[Running Benchmarks](#running-benchmarks)

[Running a Subset of Benchmarks](#running-a-subset-of-benchmarks)

[Result Comparison](#result-comparison)

[Extra Context](#extra-context)

### Library

[Runtime and Reporting Considerations](#runtime-and-reporting-considerations)

[Passing Arguments](#passing-arguments)

[Custom Benchmark Name](#custom-benchmark-name)

[Calculating Asymptotic Complexity](#asymptotic-complexity)

[Templated Benchmarks](#templated-benchmarks)

[Fixtures](#fixtures)

[Custom Counters](#custom-counters)

[Multithreaded Benchmarks](#multithreaded-benchmarks)

[CPU Timers](#cpu-timers)

[Manual Timing](#manual-timing)

[Setting the Time Unit](#setting-the-time-unit)

[Random Interleaving](docs/random_interleaving.md)

[User-Requested Performance Counters](docs/perf_counters.md)

[Preventing Optimization](#preventing-optimization)

[Reporting Statistics](#reporting-statistics)

[Custom Statistics](#custom-statistics)

[Using RegisterBenchmark](#using-register-benchmark)

[Exiting with an Error](#exiting-with-an-error)

[A Faster KeepRunning Loop](#a-faster-keep-running-loop)

[Disabling CPU Frequency Scaling](#disabling-cpu-frequency-scaling)

<a name="output-formats" />

### Output Formats

The library supports multiple output formats. Use the
`--benchmark_format=<console|json|csv>` flag (or set the
`BENCHMARK_FORMAT=<console|json|csv>` environment variable) to set
the format type. `console` is the default format.

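For example, assuming a benchmark binary named `mybench`:

```bash
# Emit JSON on stdout:
$ ./mybench --benchmark_format=json
# The same, via the environment:
$ BENCHMARK_FORMAT=json ./mybench
```
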
The Console format is intended to be a human-readable format. By default
the format generates color output. Context is output on stderr and the
tabular data on stdout. Example tabular output looks like:

```
Benchmark                               Time(ns)    CPU(ns) Iterations
----------------------------------------------------------------------
BM_SetInsert/1024/1                        28928      29349      23853  133.097kB/s   33.2742k items/s
BM_SetInsert/1024/8                        32065      32913      21375  949.487kB/s   237.372k items/s
BM_SetInsert/1024/10                       33157      33648      21431  1.13369MB/s   290.225k items/s
```

The JSON format outputs human-readable JSON split into two top-level attributes.
The `context` attribute contains information about the run in general, including
information about the CPU and the date.
The `benchmarks` attribute contains a list of every benchmark run. Example JSON
output looks like:

```json
{
  "context": {
    "date": "2015/03/17-18:40:25",
    "num_cpus": 40,
    "mhz_per_cpu": 2801,
    "cpu_scaling_enabled": false,
    "build_type": "debug"
  },
  "benchmarks": [
    {
      "name": "BM_SetInsert/1024/1",
      "iterations": 94877,
      "real_time": 29275,
      "cpu_time": 29836,
      "bytes_per_second": 134066,
      "items_per_second": 33516
    },
    {
      "name": "BM_SetInsert/1024/8",
      "iterations": 21609,
      "real_time": 32317,
      "cpu_time": 32429,
      "bytes_per_second": 986770,
      "items_per_second": 246693
    },
    {
      "name": "BM_SetInsert/1024/10",
      "iterations": 21393,
      "real_time": 32724,
      "cpu_time": 33355,
      "bytes_per_second": 1199226,
      "items_per_second": 299807
    }
  ]
}
```

The CSV format outputs comma-separated values. The `context` is output on stderr
and the CSV itself on stdout. Example CSV output looks like:

```
name,iterations,real_time,cpu_time,bytes_per_second,items_per_second,label
"BM_SetInsert/1024/1",65465,17890.7,8407.45,475768,118942,
"BM_SetInsert/1024/8",116606,18810.1,9766.64,3.27646e+06,819115,
"BM_SetInsert/1024/10",106365,17238.4,8421.53,4.74973e+06,1.18743e+06,
```

<a name="output-files" />

### Output Files

Write benchmark results to a file with the `--benchmark_out=<filename>` option
(or set `BENCHMARK_OUT`). Specify the output format with
`--benchmark_out_format={json|console|csv}` (or set
`BENCHMARK_OUT_FORMAT={json|console|csv}`). Note that the 'csv' reporter is
deprecated and the saved `.csv` file
[is not parsable](https://github.com/google/benchmark/issues/794) by csv
parsers.

Specifying `--benchmark_out` does not suppress the console output.

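For example, to keep the console output and additionally save JSON results to a
file (`results.json` is a placeholder name):

```bash
$ ./mybench --benchmark_out=results.json --benchmark_out_format=json
```
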
<a name="running-benchmarks" />

### Running Benchmarks

Benchmarks are executed by running the produced binaries. Benchmark binaries,
by default, accept options that may be specified either through their command
line interface or by setting environment variables before execution. For every
`--option_flag=<value>` CLI switch, a corresponding environment variable
`OPTION_FLAG=<value>` exists and is used as the default if set (CLI switches
always prevail). A complete list of CLI options is available by running
benchmarks with the `--help` switch.

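For example, the following two invocations are equivalent, with the CLI switch
taking precedence if both are given:

```bash
$ ./mybench --benchmark_repetitions=3
$ BENCHMARK_REPETITIONS=3 ./mybench
```
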
<a name="running-a-subset-of-benchmarks" />

### Running a Subset of Benchmarks

The `--benchmark_filter=<regex>` option (or `BENCHMARK_FILTER=<regex>`
environment variable) can be used to only run the benchmarks that match
the specified `<regex>`. For example:

```bash
$ ./run_benchmarks.x --benchmark_filter=BM_memcpy/32
Run on (1 X 2300 MHz CPU )
2016-06-25 19:34:24
Benchmark              Time           CPU Iterations
----------------------------------------------------
BM_memcpy/32          11 ns         11 ns   79545455
BM_memcpy/32k       2181 ns       2185 ns     324074
BM_memcpy/32          12 ns         12 ns   54687500
BM_memcpy/32k       1834 ns       1837 ns     357143
```

<a name="result-comparison" />

### Result Comparison

It is possible to compare the benchmarking results.
See [Additional Tooling Documentation](docs/tools.md).

<a name="extra-context" />

### Extra Context

Sometimes it's useful to add extra context to the content printed before the
results. By default this section includes information about the CPU on which
the benchmarks are running. If you do want to add more context, you can use
the `--benchmark_context` command line flag:

```bash
$ ./run_benchmarks --benchmark_context=pwd=`pwd`
Run on (1 x 2300 MHz CPU)
pwd: /home/user/benchmark/
Benchmark              Time           CPU Iterations
----------------------------------------------------
BM_memcpy/32          11 ns         11 ns   79545455
BM_memcpy/32k       2181 ns       2185 ns     324074
```

You can get the same effect with the API:

```c++
  benchmark::AddCustomContext("foo", "bar");
```

Note that attempts to add a second value with the same key will fail with an
error message.

<a name="runtime-and-reporting-considerations" />

### Runtime and Reporting Considerations

When the benchmark binary is executed, each benchmark function is run serially.
The number of iterations to run is determined dynamically by running the
benchmark a few times, measuring the time taken, and ensuring that the
ultimate result will be statistically stable. As such, faster benchmark
functions will be run for more iterations than slower benchmark functions, and
the number of iterations is thus reported.

In all cases, the number of iterations for which the benchmark is run is
governed by the amount of time the benchmark takes. Concretely, the number of
iterations is at least one and at most 1e9, and grows until either the CPU time
exceeds the minimum time, or the wall-clock time exceeds five times the minimum
time. The minimum time is set per benchmark by calling `MinTime` on the
registered benchmark object.

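For example, to request at least two seconds of running time for a benchmark (a
minimal sketch reusing `BM_StringCopy` from above):

```c++
BENCHMARK(BM_StringCopy)->MinTime(2.0);
```
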
Average timings are then reported over the iterations run. If multiple
repetitions are requested using the `--benchmark_repetitions` command-line
option, or at registration time, the benchmark function will be run several
times and statistical results across these repetitions will also be reported.

As well as the per-benchmark entries, a preamble in the report will include
information about the machine on which the benchmarks are run.

<a name="passing-arguments" />

### Passing Arguments

Sometimes a family of benchmarks can be implemented with just one routine that
takes an extra argument to specify which one of the family of benchmarks to
run. For example, the following code defines a family of benchmarks for
measuring the speed of `memcpy()` calls of different lengths:

```c++
static void BM_memcpy(benchmark::State& state) {
  char* src = new char[state.range(0)];
  char* dst = new char[state.range(0)];
  memset(src, 'x', state.range(0));
  for (auto _ : state)
    memcpy(dst, src, state.range(0));
  state.SetBytesProcessed(int64_t(state.iterations()) *
                          int64_t(state.range(0)));
  delete[] src;
  delete[] dst;
}
BENCHMARK(BM_memcpy)->Arg(8)->Arg(64)->Arg(512)->Arg(1<<10)->Arg(8<<10);
```

The preceding code is quite repetitive, and can be replaced with the following
short-hand. The following invocation will pick a few appropriate arguments in
the specified range and will generate a benchmark for each such argument.

```c++
BENCHMARK(BM_memcpy)->Range(8, 8<<10);
```

By default the arguments in the range are generated in multiples of eight and
the command above selects [ 8, 64, 512, 4k, 8k ]. In the following code the
range multiplier is changed to multiples of two.

```c++
BENCHMARK(BM_memcpy)->RangeMultiplier(2)->Range(8, 8<<10);
```

Now the arguments generated are [ 8, 16, 32, 64, 128, 256, 512, 1024, 2k, 4k, 8k ].

The preceding code shows a method of defining a sparse range. The following
example shows a method of defining a dense range. It is then used to benchmark
the performance of `std::vector` initialization for uniformly increasing sizes.

```c++
static void BM_DenseRange(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v(state.range(0), state.range(0));
    benchmark::DoNotOptimize(v.data());
    benchmark::ClobberMemory();
  }
}
BENCHMARK(BM_DenseRange)->DenseRange(0, 1024, 128);
```

Now the arguments generated are [ 0, 128, 256, 384, 512, 640, 768, 896, 1024 ].

You might have a benchmark that depends on two or more inputs. For example, the
following code defines a family of benchmarks for measuring the speed of set
insertion.

```c++
static void BM_SetInsert(benchmark::State& state) {
  std::set<int> data;
  for (auto _ : state) {
    state.PauseTiming();
    data = ConstructRandomSet(state.range(0));
    state.ResumeTiming();
    for (int j = 0; j < state.range(1); ++j)
      data.insert(RandomNumber());
  }
}
BENCHMARK(BM_SetInsert)
    ->Args({1<<10, 128})
    ->Args({2<<10, 128})
    ->Args({4<<10, 128})
    ->Args({8<<10, 128})
    ->Args({1<<10, 512})
    ->Args({2<<10, 512})
    ->Args({4<<10, 512})
    ->Args({8<<10, 512});
```

The preceding code is quite repetitive, and can be replaced with the following
short-hand. The following macro will pick a few appropriate arguments in the
product of the two specified ranges and will generate a benchmark for each such
pair.

```c++
BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {128, 512}});
```

Some benchmarks may require specific argument values that cannot be expressed
with `Ranges`. In this case, `ArgsProduct` offers the ability to generate a
benchmark input for each combination in the product of the supplied vectors.

```c++
BENCHMARK(BM_SetInsert)
    ->ArgsProduct({{1<<10, 3<<10, 8<<10}, {20, 40, 60, 80}});
// would generate the same benchmark arguments as
BENCHMARK(BM_SetInsert)
    ->Args({1<<10, 20})
    ->Args({3<<10, 20})
    ->Args({8<<10, 20})
    ->Args({3<<10, 40})
    ->Args({8<<10, 40})
    ->Args({1<<10, 40})
    ->Args({1<<10, 60})
    ->Args({3<<10, 60})
    ->Args({8<<10, 60})
    ->Args({1<<10, 80})
    ->Args({3<<10, 80})
    ->Args({8<<10, 80});
```

For more complex patterns of inputs, passing a custom function to `Apply` allows
programmatic specification of an arbitrary set of arguments on which to run the
benchmark. The following example enumerates a dense range on one parameter,
and a sparse range on the second.

```c++
static void CustomArguments(benchmark::internal::Benchmark* b) {
  for (int i = 0; i <= 10; ++i)
    for (int j = 32; j <= 1024*1024; j *= 8)
      b->Args({i, j});
}
BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
```

#### Passing Arbitrary Arguments to a Benchmark

In C++11 it is possible to define a benchmark that takes an arbitrary number
of extra arguments. The `BENCHMARK_CAPTURE(func, test_case_name, ...args)`
macro creates a benchmark that invokes `func` with the `benchmark::State` as
the first argument followed by the specified `args...`.
The `test_case_name` is appended to the name of the benchmark and
should describe the values passed.

```c++
template <class ...ExtraArgs>
void BM_takes_args(benchmark::State& state, ExtraArgs&&... extra_args) {
  [...]
}
// Registers a benchmark named "BM_takes_args/int_string_test" that passes
// the specified values to `extra_args`.
BENCHMARK_CAPTURE(BM_takes_args, int_string_test, 42, std::string("abc"));
```

Note that elements of `...args` may refer to global variables. Users should
avoid modifying global state inside of a benchmark.

<a name="asymptotic-complexity" />

### Calculating Asymptotic Complexity (Big O)

Asymptotic complexity might be calculated for a family of benchmarks. The
following code will calculate the coefficient for the high-order term in the
running time and the normalized root-mean-square error of string comparison.

```c++
static void BM_StringCompare(benchmark::State& state) {
  std::string s1(state.range(0), '-');
  std::string s2(state.range(0), '-');
  for (auto _ : state) {
    benchmark::DoNotOptimize(s1.compare(s2));
  }
  state.SetComplexityN(state.range(0));
}
BENCHMARK(BM_StringCompare)
    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity(benchmark::oN);
```

As shown in the following invocation, asymptotic complexity might also be
calculated automatically.

```c++
BENCHMARK(BM_StringCompare)
    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity();
```

The following code will specify asymptotic complexity with a lambda function,
which might be used to customize the high-order term calculation.

```c++
BENCHMARK(BM_StringCompare)->RangeMultiplier(2)
    ->Range(1<<10, 1<<18)->Complexity([](benchmark::IterationCount n) -> double { return n; });
```

<a name="custom-benchmark-name" />

### Custom Benchmark Name

You can change the benchmark's name as follows:

```c++
BENCHMARK(BM_memcpy)->Name("memcpy")->RangeMultiplier(2)->Range(8, 8<<10);
```

The invocation will execute the benchmark as before using `BM_memcpy` but will
change the prefix in the report to `memcpy`.

<a name="templated-benchmarks" />

### Templated Benchmarks

This example produces and consumes messages of size `sizeof(v)` `range_x`
times. It also outputs throughput in the absence of multiprogramming.

```c++
template <class Q> void BM_Sequential(benchmark::State& state) {
  Q q;
  typename Q::value_type v;
  for (auto _ : state) {
    for (int i = state.range(0); i--; )
      q.push(v);
    for (int e = state.range(0); e--; )
      q.Wait(&v);
  }
  // actually messages, not bytes:
  state.SetBytesProcessed(
      static_cast<int64_t>(state.iterations())*state.range(0));
}
BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);
```

Three macros are provided for adding benchmark templates.

```c++
#ifdef BENCHMARK_HAS_CXX11
#define BENCHMARK_TEMPLATE(func, ...) // Takes any number of parameters.
#else // C++ < C++11
#define BENCHMARK_TEMPLATE(func, arg1)
#endif
#define BENCHMARK_TEMPLATE1(func, arg1)
#define BENCHMARK_TEMPLATE2(func, arg1, arg2)
```

<a name="fixtures" />

### Fixtures

Fixture tests are created by first defining a type that derives from
`::benchmark::Fixture` and then creating/registering the tests using the
following macros:

* `BENCHMARK_F(ClassName, Method)`
* `BENCHMARK_DEFINE_F(ClassName, Method)`
* `BENCHMARK_REGISTER_F(ClassName, Method)`

For example:

```c++
class MyFixture : public benchmark::Fixture {
public:
  void SetUp(const ::benchmark::State& state) {
  }

  void TearDown(const ::benchmark::State& state) {
  }
};

BENCHMARK_F(MyFixture, FooTest)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}

BENCHMARK_DEFINE_F(MyFixture, BarTest)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}
/* BarTest is NOT registered */
BENCHMARK_REGISTER_F(MyFixture, BarTest)->Threads(2);
/* BarTest is now registered */
```

#### Templated Fixtures

You can also create templated fixtures using the following macros:

* `BENCHMARK_TEMPLATE_F(ClassName, Method, ...)`
* `BENCHMARK_TEMPLATE_DEFINE_F(ClassName, Method, ...)`

For example:

```c++
template<typename T>
class MyFixture : public benchmark::Fixture {};

BENCHMARK_TEMPLATE_F(MyFixture, IntTest, int)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}

BENCHMARK_TEMPLATE_DEFINE_F(MyFixture, DoubleTest, double)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}

BENCHMARK_REGISTER_F(MyFixture, DoubleTest)->Threads(2);
```

<a name="custom-counters" />

### Custom Counters

You can add your own counters with user-defined names. The example below
will add columns "Foo", "Bar" and "Baz" in its output:

```c++
static void UserCountersExample1(benchmark::State& state) {
  double numFoos = 0, numBars = 0, numBazs = 0;
  for (auto _ : state) {
    // ... count Foo,Bar,Baz events
  }
  state.counters["Foo"] = numFoos;
  state.counters["Bar"] = numBars;
  state.counters["Baz"] = numBazs;
}
```

The `state.counters` object is a `std::map` with `std::string` keys
and `Counter` values. The latter is a `double`-like class, via an implicit
conversion to `double&`. Thus you can use all of the standard arithmetic
assignment operators (`=,+=,-=,*=,/=`) to change the value of each counter.

In multithreaded benchmarks, each counter is set on the calling thread only.
When the benchmark finishes, the counters from each thread will be summed;
the resulting sum is the value which will be shown for the benchmark.

The `Counter` constructor accepts three parameters: the value as a `double`;
a bit flag which allows you to show counters as rates, and/or as per-thread
iterations, and/or as per-thread averages, and/or as iteration invariants,
and/or finally to invert the result; and a flag specifying the 'unit' - i.e.
is 1k 1000 (the default, `benchmark::Counter::OneK::kIs1000`), or 1024
(`benchmark::Counter::OneK::kIs1024`)?

```c++
  // sets a simple counter
  state.counters["Foo"] = numFoos;

  // Set the counter as a rate. It will be presented divided
  // by the duration of the benchmark.
  // Meaning: per one second, how many 'foo's are processed?
  state.counters["FooRate"] = Counter(numFoos, benchmark::Counter::kIsRate);

  // Set the counter as a rate. It will be presented divided
  // by the duration of the benchmark, and the result inverted.
  // Meaning: how many seconds does it take to process one 'foo'?
  state.counters["FooInvRate"] = Counter(numFoos, benchmark::Counter::kIsRate | benchmark::Counter::kInvert);

  // Set the counter as a thread-average quantity. It will
  // be presented divided by the number of threads.
  state.counters["FooAvg"] = Counter(numFoos, benchmark::Counter::kAvgThreads);

  // There's also a combined flag:
  state.counters["FooAvgRate"] = Counter(numFoos, benchmark::Counter::kAvgThreadsRate);

  // This says that we process with the rate of state.range(0) bytes every iteration:
  state.counters["BytesProcessed"] = Counter(state.range(0), benchmark::Counter::kIsIterationInvariantRate, benchmark::Counter::OneK::kIs1024);
```

When you're compiling in C++11 mode or later you can use `insert()` with
`std::initializer_list`:

```c++
  // With C++11, this can be done:
  state.counters.insert({{"Foo", numFoos}, {"Bar", numBars}, {"Baz", numBazs}});
  // ... instead of:
  state.counters["Foo"] = numFoos;
  state.counters["Bar"] = numBars;
  state.counters["Baz"] = numBazs;
```

#### Counter Reporting

When using the console reporter, by default, user counters are printed at
the end after the table, the same way as ``bytes_processed`` and
``items_processed``. This is best for cases in which there are few counters,
or where there are only a couple of lines per benchmark. Here's an example of
the default output:

```
------------------------------------------------------------------------------
Benchmark                        Time           CPU Iterations UserCounters...
------------------------------------------------------------------------------
BM_UserCounter/threads:8      2248 ns      10277 ns      68808 Bar=16 Bat=40 Baz=24 Foo=8
BM_UserCounter/threads:1      9797 ns       9788 ns      71523 Bar=2 Bat=5 Baz=3 Foo=1024m
BM_UserCounter/threads:2      4924 ns       9842 ns      71036 Bar=4 Bat=10 Baz=6 Foo=2
BM_UserCounter/threads:4      2589 ns      10284 ns      68012 Bar=8 Bat=20 Baz=12 Foo=4
BM_UserCounter/threads:8      2212 ns      10287 ns      68040 Bar=16 Bat=40 Baz=24 Foo=8
BM_UserCounter/threads:16     1782 ns      10278 ns      68144 Bar=32 Bat=80 Baz=48 Foo=16
BM_UserCounter/threads:32     1291 ns      10296 ns      68256 Bar=64 Bat=160 Baz=96 Foo=32
BM_UserCounter/threads:4      2615 ns      10307 ns      68040 Bar=8 Bat=20 Baz=12 Foo=4
BM_Factorial                    26 ns         26 ns   26608979 40320
BM_Factorial/real_time          26 ns         26 ns   26587936 40320
BM_CalculatePiRange/1           16 ns         16 ns   45704255 0
BM_CalculatePiRange/8           73 ns         73 ns    9520927 3.28374
BM_CalculatePiRange/64         609 ns        609 ns    1140647 3.15746
BM_CalculatePiRange/512       4900 ns       4901 ns     142696 3.14355
```

If this doesn't suit you, you can print each counter as a table column by
passing the flag `--benchmark_counters_tabular=true` to the benchmark
application. This is best for cases in which there are a lot of counters, or
a lot of lines per individual benchmark. Note that this will trigger a
reprinting of the table header any time the counter set changes between
individual benchmarks. Here's an example of corresponding output when
`--benchmark_counters_tabular=true` is passed:

```
---------------------------------------------------------------------------------------
Benchmark                        Time           CPU Iterations    Bar   Bat   Baz   Foo
---------------------------------------------------------------------------------------
BM_UserCounter/threads:8      2198 ns       9953 ns      70688     16    40    24     8
BM_UserCounter/threads:1      9504 ns       9504 ns      73787      2     5     3     1
BM_UserCounter/threads:2      4775 ns       9550 ns      72606      4    10     6     2
BM_UserCounter/threads:4      2508 ns       9951 ns      70332      8    20    12     4
BM_UserCounter/threads:8      2055 ns       9933 ns      70344     16    40    24     8
BM_UserCounter/threads:16     1610 ns       9946 ns      70720     32    80    48    16
BM_UserCounter/threads:32     1192 ns       9948 ns      70496     64   160    96    32
BM_UserCounter/threads:4      2506 ns       9949 ns      70332      8    20    12     4
--------------------------------------------------------------
Benchmark                        Time           CPU Iterations
--------------------------------------------------------------
BM_Factorial                    26 ns         26 ns   26392245 40320
BM_Factorial/real_time          26 ns         26 ns   26494107 40320
BM_CalculatePiRange/1           15 ns         15 ns   45571597 0
BM_CalculatePiRange/8           74 ns         74 ns    9450212 3.28374
BM_CalculatePiRange/64         595 ns        595 ns    1173901 3.15746
BM_CalculatePiRange/512       4752 ns       4752 ns     147380 3.14355
BM_CalculatePiRange/4k       37970 ns      37972 ns      18453 3.14184
BM_CalculatePiRange/32k     303733 ns     303744 ns       2305 3.14162
BM_CalculatePiRange/256k   2434095 ns    2434186 ns        288 3.1416
BM_CalculatePiRange/1024k  9721140 ns    9721413 ns         71 3.14159
BM_CalculatePi/threads:8      2255 ns       9943 ns      70936
```

Note above the additional header printed when the benchmark changes from
``BM_UserCounter`` to ``BM_Factorial``. This is because ``BM_Factorial`` does
not have the same counter set as ``BM_UserCounter``.

<a name="multithreaded-benchmarks"/>

### Multithreaded Benchmarks

In a multithreaded test (benchmark invoked by multiple threads simultaneously),
it is guaranteed that none of the threads will start until all have reached
the start of the benchmark loop, and all will have finished before any thread
exits the benchmark loop. (This behavior is also provided by the `KeepRunning()`
API.) As such, any global setup or teardown can be wrapped in a check against the
thread index:

```c++
static void BM_MultiThreaded(benchmark::State& state) {
  if (state.thread_index == 0) {
    // Setup code here.
  }
  for (auto _ : state) {
    // Run the test as normal.
  }
  if (state.thread_index == 0) {
    // Teardown code here.
  }
}
BENCHMARK(BM_MultiThreaded)->Threads(2);
```

If the benchmarked code itself uses threads and you want to compare it to
single-threaded code, you may want to use real-time ("wallclock") measurements
for latency comparisons:

```c++
BENCHMARK(BM_test)->Range(8, 8<<10)->UseRealTime();
```

Without `UseRealTime`, CPU time is used by default.

<a name="cpu-timers" />

### CPU Timers

By default, the CPU timer only measures the time spent by the main thread.
If the benchmark itself uses threads internally, this measurement may not
be what you are looking for. Instead, there is a way to measure the total
CPU usage of the process, by all the threads.

```c++
void callee(int i);

static void MyMain(int size) {
#pragma omp parallel for
  for (int i = 0; i < size; i++)
    callee(i);
}

static void BM_OpenMP(benchmark::State& state) {
  for (auto _ : state)
    MyMain(state.range(0));
}

// Measure the time spent by the main thread, use it to decide for how long to
// run the benchmark loop. Depending on internal implementation details, this
// may measure anywhere from near-zero (the overhead spent before/after work
// handoff to worker thread[s]) to the whole single-thread time.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10);

// Measure the user-visible time, the wall clock (literally, the time that
// has passed on the clock on the wall), use it to decide for how long to
// run the benchmark loop. This will always be meaningful, and will match the
// time spent by the main thread in the single-threaded case, in general
// decreasing with the number of internal threads doing the work.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->UseRealTime();

// Measure the total CPU consumption, use it to decide for how long to
// run the benchmark loop. This will always measure no less than the
// time spent by the main thread in the single-threaded case.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->MeasureProcessCPUTime();

// A mixture of the last two. Measure the total CPU consumption, but use the
// wall clock to decide for how long to run the benchmark loop.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->MeasureProcessCPUTime()->UseRealTime();
```

#### Controlling Timers

Normally, the entire duration of the work loop (`for (auto _ : state) {}`)
is measured. But sometimes it is necessary to do some work inside the
loop, every iteration, without counting that time toward the benchmark time.
That is possible, although it is not recommended, since it has high overhead.

```c++
static void BM_SetInsert_With_Timer_Control(benchmark::State& state) {
  std::set<int> data;
  for (auto _ : state) {
    state.PauseTiming(); // Stop timers. They will not count until they are resumed.
    data = ConstructRandomSet(state.range(0)); // Do something that should not be measured
    state.ResumeTiming(); // And resume timers. They are now counting again.
    // The rest will be measured.
    for (int j = 0; j < state.range(1); ++j)
      data.insert(RandomNumber());
  }
}
BENCHMARK(BM_SetInsert_With_Timer_Control)->Ranges({{1<<10, 8<<10}, {128, 512}});
```

<a name="manual-timing" />

### Manual Timing

For benchmarking something for which neither CPU time nor real-time are
correct or accurate enough, completely manual timing is supported using
the `UseManualTime` function.

When `UseManualTime` is used, the benchmarked code must call
`SetIterationTime` once per iteration of the benchmark loop to
report the manually measured time.

An example use case for this is benchmarking GPU execution (e.g. OpenCL
or CUDA kernels, OpenGL or Vulkan or Direct3D draw calls), which cannot
be accurately measured using CPU time or real-time. Instead, they can be
measured accurately using a dedicated API, and these measurement results
can be reported back with `SetIterationTime`.

```c++
static void BM_ManualTiming(benchmark::State& state) {
  int microseconds = state.range(0);
  std::chrono::duration<double, std::micro> sleep_duration {
    static_cast<double>(microseconds)
  };

  for (auto _ : state) {
    auto start = std::chrono::high_resolution_clock::now();
    // Simulate some useful workload with a sleep
    std::this_thread::sleep_for(sleep_duration);
    auto end = std::chrono::high_resolution_clock::now();

    auto elapsed_seconds =
      std::chrono::duration_cast<std::chrono::duration<double>>(
        end - start);

    state.SetIterationTime(elapsed_seconds.count());
  }
}
BENCHMARK(BM_ManualTiming)->Range(1, 1<<17)->UseManualTime();
```

<a name="setting-the-time-unit" />

### Setting the Time Unit

If a benchmark runs for a few milliseconds, it may be hard to visually compare
the measured times, since the output data is given in nanoseconds by default.
To set the time unit manually, specify it as follows:

```c++
BENCHMARK(BM_test)->Unit(benchmark::kMillisecond);
```

<a name="preventing-optimization" />

### Preventing Optimization

To prevent a value or expression from being optimized away by the compiler,
the `benchmark::DoNotOptimize(...)` and `benchmark::ClobberMemory()`
functions can be used.

```c++
static void BM_test(benchmark::State& state) {
  for (auto _ : state) {
      int x = 0;
      for (int i = 0; i < 64; ++i) {
        benchmark::DoNotOptimize(x += i);
      }
  }
}
```

`DoNotOptimize(<expr>)` forces the *result* of `<expr>` to be stored in either
memory or a register. For GNU based compilers it acts as a read/write barrier
for global memory. More specifically it forces the compiler to flush pending
writes to memory and reload any other values as necessary.

Note that `DoNotOptimize(<expr>)` does not prevent optimizations on `<expr>`
in any way. `<expr>` may even be removed entirely when the result is already
known. For example:

```c++
  /* Example 1: `<expr>` is removed entirely. */
  int foo(int x) { return x + 42; }
  while (...) DoNotOptimize(foo(0)); // Optimized to DoNotOptimize(42);

  /* Example 2: Result of '<expr>' is only reused */
  int bar(int) __attribute__((const));
  while (...) DoNotOptimize(bar(0)); // Optimized to:
  // int __result__ = bar(0);
  // while (...) DoNotOptimize(__result__);
```

The second tool for preventing optimizations is `ClobberMemory()`. In essence
`ClobberMemory()` forces the compiler to perform all pending writes to global
memory. Memory managed by block scope objects must be "escaped" using
`DoNotOptimize(...)` before it can be clobbered. In the below example
`ClobberMemory()` prevents the call to `v.push_back(42)` from being optimized
away.

```c++
static void BM_vector_push_back(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v;
    v.reserve(1);
    benchmark::DoNotOptimize(v.data()); // Allow v.data() to be clobbered.
    v.push_back(42);
    benchmark::ClobberMemory(); // Force 42 to be written to memory.
  }
}
```

Note that `ClobberMemory()` is only available for GNU or MSVC based compilers.

<a name="reporting-statistics" />

### Statistics: Reporting the Mean, Median and Standard Deviation of Repeated Benchmarks

By default each benchmark is run once and that single result is reported.
However, benchmarks are often noisy and a single result may not be representative
of the overall behavior. For this reason it's possible to repeatedly rerun the
benchmark.

The number of runs of each benchmark is specified globally by the
`--benchmark_repetitions` flag or on a per-benchmark basis by calling
`Repetitions` on the registered benchmark object. When a benchmark is run more
than once, the mean, median and standard deviation of the runs will be reported.

Additionally the `--benchmark_report_aggregates_only={true|false}` and
`--benchmark_display_aggregates_only={true|false}` flags, or the
`ReportAggregatesOnly(bool)` and `DisplayAggregatesOnly(bool)` functions, can be
used to change how repeated tests are reported. By default the result of each
repeated run is reported. When the `report aggregates only` option is `true`,
only the aggregates (i.e. mean, median and standard deviation, and perhaps
complexity measurements if they were requested) of the runs are reported, to
both reporters: the standard output (console) and the file.
However, when only the `display aggregates only` option is `true`,
only the aggregates are displayed in the standard output, while the file
output still contains everything.
Calling `ReportAggregatesOnly(bool)` / `DisplayAggregatesOnly(bool)` on a
registered benchmark object overrides the value of the appropriate flag for that
benchmark.

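For example, a sketch combining both per-benchmark settings:

```c++
// Run 10 repetitions and report only the mean/median/stddev aggregates.
BENCHMARK(BM_StringCopy)->Repetitions(10)->ReportAggregatesOnly(true);
```
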
<a name="custom-statistics" />

### Custom Statistics

While having the mean, median and standard deviation is nice, this may not be
enough for everyone. For example, you may want to know what the largest
observation is, e.g. because you have some real-time constraints. This is easy.
The following code will specify a custom statistic to be calculated, defined
by a lambda function.

```c++
void BM_spin_empty(benchmark::State& state) {
  for (auto _ : state) {
    for (int x = 0; x < state.range(0); ++x) {
      benchmark::DoNotOptimize(x);
    }
  }
}

BENCHMARK(BM_spin_empty)
  ->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
    return *(std::max_element(std::begin(v), std::end(v)));
  })
  ->Arg(512);
```

<a name="using-register-benchmark" />

### Using RegisterBenchmark(name, fn, args...)

The `RegisterBenchmark(name, func, args...)` function provides an alternative
way to create and register benchmarks.
`RegisterBenchmark(name, func, args...)` creates, registers, and returns a
pointer to a new benchmark with the specified `name` that invokes
`func(st, args...)` where `st` is a `benchmark::State` object.

Unlike the `BENCHMARK` registration macros, which can only be used at the global
scope, `RegisterBenchmark` can be called anywhere. This allows for
benchmark tests to be registered programmatically.

Additionally `RegisterBenchmark` allows any callable object to be registered
as a benchmark, including capturing lambdas and function objects.

For example:

```c++
auto BM_test = [](benchmark::State& st, auto Inputs) { /* ... */ };

int main(int argc, char** argv) {
  for (auto& test_input : { /* ... */ })
      benchmark::RegisterBenchmark(test_input.name(), BM_test, test_input);
  benchmark::Initialize(&argc, argv);
  benchmark::RunSpecifiedBenchmarks();
  benchmark::Shutdown();
}
```

<a name="exiting-with-an-error" />

### Exiting with an Error

When errors caused by external influences, such as file I/O and network
communication, occur within a benchmark, the
`State::SkipWithError(const char* msg)` function can be used to skip that run
of the benchmark and report the error. Note that only future iterations of the
`KeepRunning()` loop are skipped. For the ranged-for version of the benchmark
loop, users must explicitly exit the loop, otherwise all iterations will be
performed. Users may explicitly return to exit the benchmark immediately.

The `SkipWithError(...)` function may be used at any point within the benchmark,
including before and after the benchmark loop. Moreover, if `SkipWithError(...)`
has been used, it is not required to reach the benchmark loop and one may return
from the benchmark function early.

For example:

```c++
static void BM_test(benchmark::State& state) {
  auto resource = GetResource();
  if (!resource.good()) {
    state.SkipWithError("Resource is not good!");
    // KeepRunning() loop will not be entered.
  }
  while (state.KeepRunning()) {
    auto data = resource.read_data();
    if (!resource.good()) {
      state.SkipWithError("Failed to read data!");
      break; // Needed to skip the rest of the iteration.
    }
    do_stuff(data);
  }
}

static void BM_test_ranged_for(benchmark::State& state) {
  auto resource = GetResource();
  if (!resource.good()) {
    state.SkipWithError("Resource is not good!");
    return; // Early return is allowed when SkipWithError() has been used.
  }
  for (auto _ : state) {
    auto data = resource.read_data();
    if (!resource.good()) {
      state.SkipWithError("Failed to read data!");
      break; // REQUIRED to prevent all further iterations.
    }
    do_stuff(data);
  }
}
```

<a name="a-faster-keep-running-loop" />

### A Faster KeepRunning Loop

In C++11 mode, a range-based for loop should be used in preference to
the `KeepRunning` loop for running the benchmarks. For example:

```c++
static void BM_Fast(benchmark::State &state) {
  for (auto _ : state) {
    FastOperation();
  }
}
BENCHMARK(BM_Fast);
```

The reason the ranged-for loop is faster than using `KeepRunning` is that
`KeepRunning` requires a memory load and store of the iteration count
every iteration, whereas the ranged-for variant is able to keep the iteration
count in a register.

For example, an empty inner loop using the range-based for method looks like:

```asm
# Loop Init
  mov rbx, qword ptr [r14 + 104]
  call benchmark::State::StartKeepRunning()
  test rbx, rbx
  je .LoopEnd
.LoopHeader: # =>This Inner Loop Header: Depth=1
  add rbx, -1
  jne .LoopHeader
.LoopEnd:
```

Compared to an empty `KeepRunning` loop, which looks like:

```asm
.LoopHeader: # in Loop: Header=BB0_3 Depth=1
  cmp byte ptr [rbx], 1
  jne .LoopInit
.LoopBody: # =>This Inner Loop Header: Depth=1
  mov rax, qword ptr [rbx + 8]
  lea rcx, [rax + 1]
  mov qword ptr [rbx + 8], rcx
  cmp rax, qword ptr [rbx + 104]
  jb .LoopHeader
  jmp .LoopEnd
.LoopInit:
  mov rdi, rbx
  call benchmark::State::StartKeepRunning()
  jmp .LoopBody
.LoopEnd:
```

Unless C++03 compatibility is required, the ranged-for variant of writing
the benchmark loop should be preferred.

<a name="disabling-cpu-frequency-scaling" />

### Disabling CPU Frequency Scaling

If you see this error:

```
***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
```

you might want to disable the CPU frequency scaling while running the benchmark:

```bash
sudo cpupower frequency-set --governor performance
./mybench
sudo cpupower frequency-set --governor powersave
```