1 # Benchmark
2
3 [![build-and-test](https://github.com/google/benchmark/workflows/build-and-test/badge.svg)](https://github.com/google/benchmark/actions?query=workflow%3Abuild-and-test)
4 [![bazel](https://github.com/google/benchmark/actions/workflows/bazel.yml/badge.svg)](https://github.com/google/benchmark/actions/workflows/bazel.yml)
5 [![pylint](https://github.com/google/benchmark/workflows/pylint/badge.svg)](https://github.com/google/benchmark/actions?query=workflow%3Apylint)
6 [![test-bindings](https://github.com/google/benchmark/workflows/test-bindings/badge.svg)](https://github.com/google/benchmark/actions?query=workflow%3Atest-bindings)
7
8 [![Build Status](https://travis-ci.org/google/benchmark.svg?branch=master)](https://travis-ci.org/google/benchmark)
9 [![Build status](https://ci.appveyor.com/api/projects/status/u0qsyp7t1tk7cpxs/branch/master?svg=true)](https://ci.appveyor.com/project/google/benchmark/branch/master)
10 [![Coverage Status](https://coveralls.io/repos/google/benchmark/badge.svg)](https://coveralls.io/r/google/benchmark)
11
12
13 A library to benchmark code snippets, similar to unit tests. Example:
14
15 ```c++
16 #include <benchmark/benchmark.h>
17
static void BM_SomeFunction(benchmark::State& state) {
  // Perform setup here
  for (auto _ : state) {
    // This code gets timed
    SomeFunction();
  }
}
25 // Register the function as a benchmark
26 BENCHMARK(BM_SomeFunction);
27 // Run the benchmark
28 BENCHMARK_MAIN();
29 ```
30
31 To get started, see [Requirements](#requirements) and
32 [Installation](#installation). See [Usage](#usage) for a full example and the
33 [User Guide](#user-guide) for a more comprehensive feature overview.
34
35 It may also help to read the [Google Test documentation](https://github.com/google/googletest/blob/master/docs/primer.md)
36 as some of the structural aspects of the APIs are similar.
37
38 ### Resources
39
40 [Discussion group](https://groups.google.com/d/forum/benchmark-discuss)
41
42 IRC channel: [freenode](https://freenode.net) #googlebenchmark
43
44 [Additional Tooling Documentation](docs/tools.md)
45
46 [Assembly Testing Documentation](docs/AssemblyTests.md)
47
48 ## Requirements
49
50 The library can be used with C++03. However, it requires C++11 to build,
51 including compiler and standard library support.
52
53 The following minimum versions are required to build the library:
54
55 * GCC 4.8
56 * Clang 3.4
57 * Visual Studio 14 2015
58 * Intel 2015 Update 1
59
60 See [Platform-Specific Build Instructions](#platform-specific-build-instructions).
61
62 ## Installation
63
This describes the installation process using CMake. As prerequisites, you'll
need git and CMake installed.
66
67 _See [dependencies.md](dependencies.md) for more details regarding supported
68 versions of build tools._
69
70 ```bash
71 # Check out the library.
72 $ git clone https://github.com/google/benchmark.git
73 # Benchmark requires Google Test as a dependency. Add the source tree as a subdirectory.
74 $ git clone https://github.com/google/googletest.git benchmark/googletest
75 # Go to the library root directory
76 $ cd benchmark
77 # Make a build directory to place the build output.
78 $ cmake -E make_directory "build"
79 # Generate build system files with cmake.
80 $ cmake -E chdir "build" cmake -DCMAKE_BUILD_TYPE=Release ../
81 # or, starting with CMake 3.13, use a simpler form:
82 # cmake -DCMAKE_BUILD_TYPE=Release -S . -B "build"
83 # Build the library.
84 $ cmake --build "build" --config Release
85 ```
This builds the `benchmark` and `benchmark_main` libraries and tests.
On a Unix system, the build directory should now look something like this:
88
89 ```
/benchmark
  /build
    /src
      /libbenchmark.a
      /libbenchmark_main.a
    /test
      ...
97 ```
98
99 Next, you can run the tests to check the build.
100
101 ```bash
102 $ cmake -E chdir "build" ctest --build-config Release
103 ```
104
105 If you want to install the library globally, also run:
106
107 ```
108 sudo cmake --build "build" --config Release --target install
109 ```
110
111 Note that Google Benchmark requires Google Test to build and run the tests. This
dependency can be provided in two ways:
113
* Check out the Google Test sources into `benchmark/googletest` as above.
115 * Otherwise, if `-DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON` is specified during
116 configuration, the library will automatically download and build any required
117 dependencies.
118
119 If you do not wish to build and run the tests, add `-DBENCHMARK_ENABLE_GTEST_TESTS=OFF`
120 to `CMAKE_ARGS`.
121
122 ### Debug vs Release
123
124 By default, benchmark builds as a debug library. You will see a warning in the
125 output when this is the case. To build it as a release library instead, add
126 `-DCMAKE_BUILD_TYPE=Release` when generating the build system files, as shown
above. The use of `--config Release` in build commands is needed to properly
support multi-configuration generators (such as Visual Studio) and can be
skipped for single-configuration build systems (such as Makefiles).
130
131 To enable link-time optimisation, also add `-DBENCHMARK_ENABLE_LTO=true` when
132 generating the build system files.
133
If you are using GCC, you might need to set the `GCC_AR` and `GCC_RANLIB`
CMake cache variables if autodetection fails.

If you are using Clang, you may need to set the `LLVMAR_EXECUTABLE`,
`LLVMNM_EXECUTABLE` and `LLVMRANLIB_EXECUTABLE` CMake cache variables.
139
140 ### Stable and Experimental Library Versions
141
The main branch contains the latest stable version of the benchmarking library;
its API can be considered largely stable, with source-breaking changes made
only upon the release of a new major version.
145
Newer, experimental features are implemented and tested on the
147 [`v2` branch](https://github.com/google/benchmark/tree/v2). Users who wish
148 to use, test, and provide feedback on the new features are encouraged to try
149 this branch. However, this branch provides no stability guarantees and reserves
150 the right to change and break the API at any time.
151
152 ## Usage
153
154 ### Basic usage
155
156 Define a function that executes the code to measure, register it as a benchmark
157 function using the `BENCHMARK` macro, and ensure an appropriate `main` function
158 is available:
159
160 ```c++
161 #include <benchmark/benchmark.h>
162
static void BM_StringCreation(benchmark::State& state) {
  for (auto _ : state)
    std::string empty_string;
}
// Register the function as a benchmark
BENCHMARK(BM_StringCreation);

// Define another benchmark
static void BM_StringCopy(benchmark::State& state) {
  std::string x = "hello";
  for (auto _ : state)
    std::string copy(x);
}
BENCHMARK(BM_StringCopy);

BENCHMARK_MAIN();
179 ```
180
181 To run the benchmark, compile and link against the `benchmark` library
182 (libbenchmark.a/.so). If you followed the build steps above, this library will
183 be under the build directory you created.
184
185 ```bash
186 # Example on linux after running the build steps above. Assumes the
187 # `benchmark` and `build` directories are under the current directory.
188 $ g++ mybenchmark.cc -std=c++11 -isystem benchmark/include \
189 -Lbenchmark/build/src -lbenchmark -lpthread -o mybenchmark
190 ```
191
192 Alternatively, link against the `benchmark_main` library and remove
193 `BENCHMARK_MAIN();` above to get the same behavior.
194
195 The compiled executable will run all benchmarks by default. Pass the `--help`
196 flag for option information or see the guide below.
197
198 ### Usage with CMake
199
200 If using CMake, it is recommended to link against the project-provided
201 `benchmark::benchmark` and `benchmark::benchmark_main` targets using
202 `target_link_libraries`.
It is possible to use `find_package` to import an installed version of the
library.
205 ```cmake
206 find_package(benchmark REQUIRED)
207 ```
Alternatively, `add_subdirectory` will incorporate the library directly into
your CMake project.
210 ```cmake
211 add_subdirectory(benchmark)
212 ```
213 Either way, link to the library as follows.
214 ```cmake
215 target_link_libraries(MyTarget benchmark::benchmark)
216 ```
217
218 ## Platform Specific Build Instructions
219
220 ### Building with GCC
221
222 When the library is built using GCC it is necessary to link with the pthread
223 library due to how GCC implements `std::thread`. Failing to link to pthread will
224 lead to runtime exceptions (unless you're using libc++), not linker errors. See
225 [issue #67](https://github.com/google/benchmark/issues/67) for more details. You
226 can link to pthread by adding `-pthread` to your linker command. Note, you can
227 also use `-lpthread`, but there are potential issues with ordering of command
228 line parameters if you use that.
229
230 ### Building with Visual Studio 2015 or 2017
231
232 The `shlwapi` library (`-lshlwapi`) is required to support a call to `CPUInfo` which reads the registry. Either add `shlwapi.lib` under `[ Configuration Properties > Linker > Input ]`, or use the following:
233
234 ```
235 // Alternatively, can add libraries using linker options.
236 #ifdef _WIN32
237 #pragma comment ( lib, "Shlwapi.lib" )
238 #ifdef _DEBUG
239 #pragma comment ( lib, "benchmarkd.lib" )
240 #else
241 #pragma comment ( lib, "benchmark.lib" )
242 #endif
243 #endif
244 ```
245
You can also use the graphical version of CMake:
247 * Open `CMake GUI`.
248 * Under `Where to build the binaries`, same path as source plus `build`.
249 * Under `CMAKE_INSTALL_PREFIX`, same path as source plus `install`.
250 * Click `Configure`, `Generate`, `Open Project`.
251 * If build fails, try deleting entire directory and starting again, or unticking options to build less.
252
253 ### Building with Intel 2015 Update 1 or Intel System Studio Update 4
254
255 See instructions for building with Visual Studio. Once built, right click on the solution and change the build to Intel.
256
257 ### Building on Solaris
258
If you're running benchmarks on Solaris, you'll want the kstat library linked in
too (`-lkstat`).
261
262 ## User Guide
263
264 ### Command Line
265
266 [Output Formats](#output-formats)
267
268 [Output Files](#output-files)
269
270 [Running Benchmarks](#running-benchmarks)
271
272 [Running a Subset of Benchmarks](#running-a-subset-of-benchmarks)
273
274 [Result Comparison](#result-comparison)
275
276 ### Library
277
278 [Runtime and Reporting Considerations](#runtime-and-reporting-considerations)
279
280 [Passing Arguments](#passing-arguments)
281
282 [Custom Benchmark Name](#custom-benchmark-name)
283
284 [Calculating Asymptotic Complexity](#asymptotic-complexity)
285
286 [Templated Benchmarks](#templated-benchmarks)
287
288 [Fixtures](#fixtures)
289
290 [Custom Counters](#custom-counters)
291
292 [Multithreaded Benchmarks](#multithreaded-benchmarks)
293
294 [CPU Timers](#cpu-timers)
295
296 [Manual Timing](#manual-timing)
297
298 [Setting the Time Unit](#setting-the-time-unit)
299
300 [Preventing Optimization](#preventing-optimization)
301
302 [Reporting Statistics](#reporting-statistics)
303
304 [Custom Statistics](#custom-statistics)
305
306 [Using RegisterBenchmark](#using-register-benchmark)
307
308 [Exiting with an Error](#exiting-with-an-error)
309
310 [A Faster KeepRunning Loop](#a-faster-keep-running-loop)
311
312 [Disabling CPU Frequency Scaling](#disabling-cpu-frequency-scaling)
313
314
315 <a name="output-formats" />
316
317 ### Output Formats
318
319 The library supports multiple output formats. Use the
320 `--benchmark_format=<console|json|csv>` flag (or set the
321 `BENCHMARK_FORMAT=<console|json|csv>` environment variable) to set
322 the format type. `console` is the default format.
323
324 The Console format is intended to be a human readable format. By default
325 the format generates color output. Context is output on stderr and the
326 tabular data on stdout. Example tabular output looks like:
327
328 ```
Benchmark                               Time(ns)    CPU(ns) Iterations
----------------------------------------------------------------------
BM_SetInsert/1024/1                        28928      29349      23853  133.097kB/s   33.2742k items/s
BM_SetInsert/1024/8                        32065      32913      21375  949.487kB/s   237.372k items/s
BM_SetInsert/1024/10                       33157      33648      21431  1.13369MB/s   290.225k items/s
334 ```
335
336 The JSON format outputs human readable json split into two top level attributes.
337 The `context` attribute contains information about the run in general, including
338 information about the CPU and the date.
339 The `benchmarks` attribute contains a list of every benchmark run. Example json
340 output looks like:
341
342 ```json
{
  "context": {
    "date": "2015/03/17-18:40:25",
    "num_cpus": 40,
    "mhz_per_cpu": 2801,
    "cpu_scaling_enabled": false,
    "build_type": "debug"
  },
  "benchmarks": [
    {
      "name": "BM_SetInsert/1024/1",
      "iterations": 94877,
      "real_time": 29275,
      "cpu_time": 29836,
      "bytes_per_second": 134066,
      "items_per_second": 33516
    },
    {
      "name": "BM_SetInsert/1024/8",
      "iterations": 21609,
      "real_time": 32317,
      "cpu_time": 32429,
      "bytes_per_second": 986770,
      "items_per_second": 246693
    },
    {
      "name": "BM_SetInsert/1024/10",
      "iterations": 21393,
      "real_time": 32724,
      "cpu_time": 33355,
      "bytes_per_second": 1199226,
      "items_per_second": 299807
    }
  ]
}
378 ```
379
380 The CSV format outputs comma-separated values. The `context` is output on stderr
381 and the CSV itself on stdout. Example CSV output looks like:
382
383 ```
384 name,iterations,real_time,cpu_time,bytes_per_second,items_per_second,label
385 "BM_SetInsert/1024/1",65465,17890.7,8407.45,475768,118942,
386 "BM_SetInsert/1024/8",116606,18810.1,9766.64,3.27646e+06,819115,
387 "BM_SetInsert/1024/10",106365,17238.4,8421.53,4.74973e+06,1.18743e+06,
388 ```
389
390 <a name="output-files" />
391
392 ### Output Files
393
394 Write benchmark results to a file with the `--benchmark_out=<filename>` option
395 (or set `BENCHMARK_OUT`). Specify the output format with
396 `--benchmark_out_format={json|console|csv}` (or set
397 `BENCHMARK_OUT_FORMAT={json|console|csv}`). Note that the 'csv' reporter is
398 deprecated and the saved `.csv` file
399 [is not parsable](https://github.com/google/benchmark/issues/794) by csv
400 parsers.
401
402 Specifying `--benchmark_out` does not suppress the console output.
403
404 <a name="running-benchmarks" />
405
406 ### Running Benchmarks
407
Benchmarks are executed by running the produced binaries. Benchmark binaries,
by default, accept options that may be specified either through their command
line interface or by setting environment variables before execution. For every
`--option_flag=<value>` CLI switch, a corresponding environment variable
`OPTION_FLAG=<value>` exists and is used as the default if set (CLI switches
always prevail). A complete list of CLI options is available by running
benchmarks with the `--help` switch.
415
416 <a name="running-a-subset-of-benchmarks" />
417
418 ### Running a Subset of Benchmarks
419
420 The `--benchmark_filter=<regex>` option (or `BENCHMARK_FILTER=<regex>`
421 environment variable) can be used to only run the benchmarks that match
422 the specified `<regex>`. For example:
423
424 ```bash
425 $ ./run_benchmarks.x --benchmark_filter=BM_memcpy/32
426 Run on (1 X 2300 MHz CPU )
427 2016-06-25 19:34:24
Benchmark              Time           CPU Iterations
----------------------------------------------------
BM_memcpy/32          11 ns         11 ns   79545455
BM_memcpy/32k       2181 ns       2185 ns     324074
BM_memcpy/32          12 ns         12 ns   54687500
BM_memcpy/32k       1834 ns       1837 ns     357143
434 ```
435
436 <a name="result-comparison" />
437
### Result Comparison

It is possible to compare the benchmarking results.
See [Additional Tooling Documentation](docs/tools.md).
442
443 <a name="runtime-and-reporting-considerations" />
444
445 ### Runtime and Reporting Considerations
446
When the benchmark binary is executed, each benchmark function is run serially.
The number of iterations to run is determined dynamically by running the
benchmark a few times, measuring the time taken, and ensuring that the
ultimate result is statistically stable. As such, faster benchmark
functions will be run for more iterations than slower benchmark functions, and
the number of iterations is thus reported.
453
In all cases, the number of iterations for which the benchmark is run is
governed by the amount of time the benchmark takes. Concretely, the number of
iterations is at least one and not more than 1e9, and iterations continue until
either the CPU time exceeds the minimum time or the wallclock time reaches 5x
the minimum time. The minimum time is set per benchmark by calling `MinTime` on
the registered benchmark object.
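
As a small sketch of that registration-time setting, the benchmark below keeps
iterating until at least two seconds have accumulated; the `BM_Example` name
and the 2.0-second value are placeholders chosen for illustration only.

```c++
#include <benchmark/benchmark.h>

static void BM_Example(benchmark::State& state) {
  for (auto _ : state) {
    // ... code under test ...
  }
}
// Iterate until the benchmark has run for at least 2 seconds
// (overriding the default minimum time).
BENCHMARK(BM_Example)->MinTime(2.0);
```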
459
460 Average timings are then reported over the iterations run. If multiple
461 repetitions are requested using the `--benchmark_repetitions` command-line
462 option, or at registration time, the benchmark function will be run several
463 times and statistical results across these repetitions will also be reported.
464
465 As well as the per-benchmark entries, a preamble in the report will include
466 information about the machine on which the benchmarks are run.
467
468 <a name="passing-arguments" />
469
470 ### Passing Arguments
471
472 Sometimes a family of benchmarks can be implemented with just one routine that
473 takes an extra argument to specify which one of the family of benchmarks to
474 run. For example, the following code defines a family of benchmarks for
475 measuring the speed of `memcpy()` calls of different lengths:
476
477 ```c++
static void BM_memcpy(benchmark::State& state) {
  char* src = new char[state.range(0)];
  char* dst = new char[state.range(0)];
  memset(src, 'x', state.range(0));
  for (auto _ : state)
    memcpy(dst, src, state.range(0));
  state.SetBytesProcessed(int64_t(state.iterations()) *
                          int64_t(state.range(0)));
  delete[] src;
  delete[] dst;
}
BENCHMARK(BM_memcpy)->Arg(8)->Arg(64)->Arg(512)->Arg(1<<10)->Arg(8<<10);
490 ```
491
492 The preceding code is quite repetitive, and can be replaced with the following
493 short-hand. The following invocation will pick a few appropriate arguments in
494 the specified range and will generate a benchmark for each such argument.
495
496 ```c++
497 BENCHMARK(BM_memcpy)->Range(8, 8<<10);
498 ```
499
500 By default the arguments in the range are generated in multiples of eight and
501 the command above selects [ 8, 64, 512, 4k, 8k ]. In the following code the
502 range multiplier is changed to multiples of two.
503
504 ```c++
505 BENCHMARK(BM_memcpy)->RangeMultiplier(2)->Range(8, 8<<10);
506 ```
507
508 Now arguments generated are [ 8, 16, 32, 64, 128, 256, 512, 1024, 2k, 4k, 8k ].
509
510 The preceding code shows a method of defining a sparse range. The following
511 example shows a method of defining a dense range. It is then used to benchmark
512 the performance of `std::vector` initialization for uniformly increasing sizes.
513
514 ```c++
static void BM_DenseRange(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v(state.range(0), state.range(0));
    benchmark::DoNotOptimize(v.data());
    benchmark::ClobberMemory();
  }
}
BENCHMARK(BM_DenseRange)->DenseRange(0, 1024, 128);
523 ```
524
525 Now arguments generated are [ 0, 128, 256, 384, 512, 640, 768, 896, 1024 ].
526
527 You might have a benchmark that depends on two or more inputs. For example, the
528 following code defines a family of benchmarks for measuring the speed of set
529 insertion.
530
531 ```c++
static void BM_SetInsert(benchmark::State& state) {
  std::set<int> data;
  for (auto _ : state) {
    state.PauseTiming();
    data = ConstructRandomSet(state.range(0));
    state.ResumeTiming();
    for (int j = 0; j < state.range(1); ++j)
      data.insert(RandomNumber());
  }
}
BENCHMARK(BM_SetInsert)
    ->Args({1<<10, 128})
    ->Args({2<<10, 128})
    ->Args({4<<10, 128})
    ->Args({8<<10, 128})
    ->Args({1<<10, 512})
    ->Args({2<<10, 512})
    ->Args({4<<10, 512})
    ->Args({8<<10, 512});
551 ```
552
553 The preceding code is quite repetitive, and can be replaced with the following
554 short-hand. The following macro will pick a few appropriate arguments in the
555 product of the two specified ranges and will generate a benchmark for each such
556 pair.
557
558 ```c++
559 BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {128, 512}});
560 ```
561
562 Some benchmarks may require specific argument values that cannot be expressed
563 with `Ranges`. In this case, `ArgsProduct` offers the ability to generate a
564 benchmark input for each combination in the product of the supplied vectors.
565
566 ```c++
BENCHMARK(BM_SetInsert)
    ->ArgsProduct({{1<<10, 3<<10, 8<<10}, {20, 40, 60, 80}})
// would generate the same benchmark arguments as
BENCHMARK(BM_SetInsert)
    ->Args({1<<10, 20})
    ->Args({3<<10, 20})
    ->Args({8<<10, 20})
    ->Args({3<<10, 40})
    ->Args({8<<10, 40})
    ->Args({1<<10, 40})
    ->Args({1<<10, 60})
    ->Args({3<<10, 60})
    ->Args({8<<10, 60})
    ->Args({1<<10, 80})
    ->Args({3<<10, 80})
    ->Args({8<<10, 80});
583 ```
584
585 For more complex patterns of inputs, passing a custom function to `Apply` allows
586 programmatic specification of an arbitrary set of arguments on which to run the
587 benchmark. The following example enumerates a dense range on one parameter,
588 and a sparse range on the second.
589
590 ```c++
static void CustomArguments(benchmark::internal::Benchmark* b) {
  for (int i = 0; i <= 10; ++i)
    for (int j = 32; j <= 1024*1024; j *= 8)
      b->Args({i, j});
}
BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
597 ```
598
599 #### Passing Arbitrary Arguments to a Benchmark
600
601 In C++11 it is possible to define a benchmark that takes an arbitrary number
602 of extra arguments. The `BENCHMARK_CAPTURE(func, test_case_name, ...args)`
603 macro creates a benchmark that invokes `func` with the `benchmark::State` as
604 the first argument followed by the specified `args...`.
605 The `test_case_name` is appended to the name of the benchmark and
606 should describe the values passed.
607
608 ```c++
609 template <class ...ExtraArgs>
610 void BM_takes_args(benchmark::State& state, ExtraArgs&&... extra_args) {
  [...]
612 }
613 // Registers a benchmark named "BM_takes_args/int_string_test" that passes
614 // the specified values to `extra_args`.
615 BENCHMARK_CAPTURE(BM_takes_args, int_string_test, 42, std::string("abc"));
616 ```
617
618 Note that elements of `...args` may refer to global variables. Users should
619 avoid modifying global state inside of a benchmark.
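
For illustration, here is a hedged sketch of a captured benchmark whose body
actually uses the extra arguments; the `BM_MakeString` function, the length
values and the fill characters are invented for this example.

```c++
#include <string>
#include <benchmark/benchmark.h>

// Builds a string of `length` copies of `fill` on every iteration.
static void BM_MakeString(benchmark::State& state, size_t length, char fill) {
  for (auto _ : state) {
    std::string s(length, fill);
    benchmark::DoNotOptimize(s.data());
  }
}
// Registered as "BM_MakeString/short_x" and "BM_MakeString/long_y".
BENCHMARK_CAPTURE(BM_MakeString, short_x, 16, 'x');
BENCHMARK_CAPTURE(BM_MakeString, long_y, 4096, 'y');
```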
620
621 <a name="asymptotic-complexity" />
622
623 ### Calculating Asymptotic Complexity (Big O)
624
625 Asymptotic complexity might be calculated for a family of benchmarks. The
626 following code will calculate the coefficient for the high-order term in the
627 running time and the normalized root-mean square error of string comparison.
628
629 ```c++
static void BM_StringCompare(benchmark::State& state) {
  std::string s1(state.range(0), '-');
  std::string s2(state.range(0), '-');
  for (auto _ : state) {
    benchmark::DoNotOptimize(s1.compare(s2));
  }
  state.SetComplexityN(state.range(0));
}
BENCHMARK(BM_StringCompare)
    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity(benchmark::oN);
640 ```
641
642 As shown in the following invocation, asymptotic complexity might also be
643 calculated automatically.
644
645 ```c++
BENCHMARK(BM_StringCompare)
    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity();
648 ```
649
The following code will specify asymptotic complexity with a lambda function
that might be used to customize high-order term calculation.
652
653 ```c++
BENCHMARK(BM_StringCompare)->RangeMultiplier(2)
    ->Range(1<<10, 1<<18)->Complexity([](benchmark::IterationCount n)->double{return n; });
656 ```
657
658 <a name="custom-benchmark-name" />
659
660 ### Custom Benchmark Name
661
662 You can change the benchmark's name as follows:
663
664 ```c++
665 BENCHMARK(BM_memcpy)->Name("memcpy")->RangeMultiplier(2)->Range(8, 8<<10);
666 ```
667
668 The invocation will execute the benchmark as before using `BM_memcpy` but changes
669 the prefix in the report to `memcpy`.
670
671 <a name="templated-benchmarks" />
672
673 ### Templated Benchmarks
674
This example produces and consumes messages of size `sizeof(v)` `state.range(0)`
times. It also outputs throughput in the absence of multiprogramming.
677
678 ```c++
template <class Q> void BM_Sequential(benchmark::State& state) {
  Q q;
  typename Q::value_type v;
  for (auto _ : state) {
    for (int i = state.range(0); i--; )
      q.push(v);
    for (int e = state.range(0); e--; )
      q.Wait(&v);
  }
  // actually messages, not bytes:
  state.SetBytesProcessed(
      static_cast<int64_t>(state.iterations())*state.range(0));
}
692 BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);
693 ```
694
695 Three macros are provided for adding benchmark templates.
696
697 ```c++
698 #ifdef BENCHMARK_HAS_CXX11
699 #define BENCHMARK_TEMPLATE(func, ...) // Takes any number of parameters.
700 #else // C++ < C++11
701 #define BENCHMARK_TEMPLATE(func, arg1)
702 #endif
703 #define BENCHMARK_TEMPLATE1(func, arg1)
704 #define BENCHMARK_TEMPLATE2(func, arg1, arg2)
705 ```
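
As a hedged sketch of the two-argument form, the benchmark below is templated
on both a container type and its element type; the `BM_Fill` name, the
container choices and the element count of 64 are invented purely for
illustration.

```c++
#include <vector>
#include <benchmark/benchmark.h>

// A benchmark templated on two parameters: the container and its element type.
template <class Container, class T>
static void BM_Fill(benchmark::State& state) {
  for (auto _ : state) {
    Container c(64, T());               // build a small container each iteration
    benchmark::DoNotOptimize(c.data()); // keep the container from being elided
  }
}

// In C++11 mode BENCHMARK_TEMPLATE accepts any number of arguments...
BENCHMARK_TEMPLATE(BM_Fill, std::vector<int>, int);
// ...while BENCHMARK_TEMPLATE2 spells out the two-argument form explicitly.
BENCHMARK_TEMPLATE2(BM_Fill, std::vector<double>, double);
```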
706
707 <a name="fixtures" />
708
709 ### Fixtures
710
711 Fixture tests are created by first defining a type that derives from
712 `::benchmark::Fixture` and then creating/registering the tests using the
713 following macros:
714
715 * `BENCHMARK_F(ClassName, Method)`
716 * `BENCHMARK_DEFINE_F(ClassName, Method)`
717 * `BENCHMARK_REGISTER_F(ClassName, Method)`
718
719 For Example:
720
721 ```c++
class MyFixture : public benchmark::Fixture {
 public:
  void SetUp(const ::benchmark::State& state) {
  }

  void TearDown(const ::benchmark::State& state) {
  }
};

BENCHMARK_F(MyFixture, FooTest)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}

BENCHMARK_DEFINE_F(MyFixture, BarTest)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}
/* BarTest is NOT registered */
BENCHMARK_REGISTER_F(MyFixture, BarTest)->Threads(2);
/* BarTest is now registered */
745 ```
746
747 #### Templated Fixtures
748
You can also create templated fixtures by using the following macros:
750
751 * `BENCHMARK_TEMPLATE_F(ClassName, Method, ...)`
752 * `BENCHMARK_TEMPLATE_DEFINE_F(ClassName, Method, ...)`
753
754 For example:
755
756 ```c++
757 template<typename T>
758 class MyFixture : public benchmark::Fixture {};
759
BENCHMARK_TEMPLATE_F(MyFixture, IntTest, int)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}

BENCHMARK_TEMPLATE_DEFINE_F(MyFixture, DoubleTest, double)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}

BENCHMARK_REGISTER_F(MyFixture, DoubleTest)->Threads(2);
773 ```
774
775 <a name="custom-counters" />
776
777 ### Custom Counters
778
779 You can add your own counters with user-defined names. The example below
780 will add columns "Foo", "Bar" and "Baz" in its output:
781
782 ```c++
static void UserCountersExample1(benchmark::State& state) {
  double numFoos = 0, numBars = 0, numBazs = 0;
  for (auto _ : state) {
    // ... count Foo,Bar,Baz events
  }
  state.counters["Foo"] = numFoos;
  state.counters["Bar"] = numBars;
  state.counters["Baz"] = numBazs;
}
792 ```
793
794 The `state.counters` object is a `std::map` with `std::string` keys
795 and `Counter` values. The latter is a `double`-like class, via an implicit
796 conversion to `double&`. Thus you can use all of the standard arithmetic
797 assignment operators (`=,+=,-=,*=,/=`) to change the value of each counter.
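
For instance, because a counter behaves like a `double`, events can be
accumulated with `+=` inside the benchmark loop. In the sketch below the
`ProcessBatch` helper and its return value are hypothetical, invented only to
illustrate the operator usage.

```c++
static void UserCountersExample2(benchmark::State& state) {
  state.counters["Items"] = 0;  // start from a known value
  for (auto _ : state) {
    // ProcessBatch() is a hypothetical function returning how many
    // items were handled during this iteration.
    state.counters["Items"] += ProcessBatch();
  }
}
```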
798
799 In multithreaded benchmarks, each counter is set on the calling thread only.
800 When the benchmark finishes, the counters from each thread will be summed;
801 the resulting sum is the value which will be shown for the benchmark.
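
A minimal sketch of that summing behavior (the benchmark name and thread count
are arbitrary choices for illustration):

```c++
static void BM_ThreadCounters(benchmark::State& state) {
  int64_t items = 0;
  for (auto _ : state) {
    // ... each thread does one unit of work per iteration ...
    ++items;
  }
  // Each thread sets its own counter; the value reported for the benchmark
  // is the sum over all threads.
  state.counters["Items"] = items;
}
BENCHMARK(BM_ThreadCounters)->Threads(4);
```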
802
The `Counter` constructor accepts three parameters: the value as a `double`;
a bit flag which allows you to show counters as rates, and/or as per-thread
iteration, and/or as per-thread averages, and/or iteration invariants,
and/or finally inverting the result; and a flag specifying the 'unit' - i.e.
is 1k a 1000 (default, `benchmark::Counter::OneK::kIs1000`), or 1024
(`benchmark::Counter::OneK::kIs1024`)?
809
810 ```c++
811 // sets a simple counter
812 state.counters["Foo"] = numFoos;
813
814 // Set the counter as a rate. It will be presented divided
815 // by the duration of the benchmark.
816 // Meaning: per one second, how many 'foo's are processed?
817 state.counters["FooRate"] = Counter(numFoos, benchmark::Counter::kIsRate);
818
819 // Set the counter as a rate. It will be presented divided
820 // by the duration of the benchmark, and the result inverted.
821 // Meaning: how many seconds it takes to process one 'foo'?
822 state.counters["FooInvRate"] = Counter(numFoos, benchmark::Counter::kIsRate | benchmark::Counter::kInvert);
823
824 // Set the counter as a thread-average quantity. It will
825 // be presented divided by the number of threads.
826 state.counters["FooAvg"] = Counter(numFoos, benchmark::Counter::kAvgThreads);
827
828 // There's also a combined flag:
829 state.counters["FooAvgRate"] = Counter(numFoos,benchmark::Counter::kAvgThreadsRate);
830
831 // This says that we process with the rate of state.range(0) bytes every iteration:
832 state.counters["BytesProcessed"] = Counter(state.range(0), benchmark::Counter::kIsIterationInvariantRate, benchmark::Counter::OneK::kIs1024);
833 ```
834
835 When you're compiling in C++11 mode or later you can use `insert()` with
836 `std::initializer_list`:
837
838 ```c++
839 // With C++11, this can be done:
840 state.counters.insert({{"Foo", numFoos}, {"Bar", numBars}, {"Baz", numBazs}});
841 // ... instead of:
842 state.counters["Foo"] = numFoos;
843 state.counters["Bar"] = numBars;
844 state.counters["Baz"] = numBazs;
845 ```
846
847 #### Counter Reporting
848
849 When using the console reporter, by default, user counters are printed at
850 the end after the table, the same way as ``bytes_processed`` and
851 ``items_processed``. This is best for cases in which there are few counters,
852 or where there are only a couple of lines per benchmark. Here's an example of
853 the default output:
854
855 ```
------------------------------------------------------------------------------
Benchmark                        Time           CPU Iterations UserCounters...
------------------------------------------------------------------------------
BM_UserCounter/threads:8      2248 ns      10277 ns      68808 Bar=16 Bat=40 Baz=24 Foo=8
BM_UserCounter/threads:1      9797 ns       9788 ns      71523 Bar=2 Bat=5 Baz=3 Foo=1024m
BM_UserCounter/threads:2      4924 ns       9842 ns      71036 Bar=4 Bat=10 Baz=6 Foo=2
BM_UserCounter/threads:4      2589 ns      10284 ns      68012 Bar=8 Bat=20 Baz=12 Foo=4
BM_UserCounter/threads:8      2212 ns      10287 ns      68040 Bar=16 Bat=40 Baz=24 Foo=8
BM_UserCounter/threads:16     1782 ns      10278 ns      68144 Bar=32 Bat=80 Baz=48 Foo=16
BM_UserCounter/threads:32     1291 ns      10296 ns      68256 Bar=64 Bat=160 Baz=96 Foo=32
BM_UserCounter/threads:4      2615 ns      10307 ns      68040 Bar=8 Bat=20 Baz=12 Foo=4
BM_Factorial                    26 ns         26 ns   26608979 40320
BM_Factorial/real_time          26 ns         26 ns   26587936 40320
BM_CalculatePiRange/1           16 ns         16 ns   45704255 0
BM_CalculatePiRange/8           73 ns         73 ns    9520927 3.28374
BM_CalculatePiRange/64         609 ns        609 ns    1140647 3.15746
BM_CalculatePiRange/512       4900 ns       4901 ns     142696 3.14355
873 ```
874
875 If this doesn't suit you, you can print each counter as a table column by
876 passing the flag `--benchmark_counters_tabular=true` to the benchmark
877 application. This is best for cases in which there are a lot of counters, or
878 a lot of lines per individual benchmark. Note that this will trigger a
879 reprinting of the table header any time the counter set changes between
880 individual benchmarks. Here's an example of corresponding output when
881 `--benchmark_counters_tabular=true` is passed:
882
883 ```
---------------------------------------------------------------------------------------
Benchmark                        Time           CPU Iterations    Bar   Bat   Baz   Foo
---------------------------------------------------------------------------------------
BM_UserCounter/threads:8      2198 ns       9953 ns      70688     16    40    24     8
BM_UserCounter/threads:1      9504 ns       9504 ns      73787      2     5     3     1
BM_UserCounter/threads:2      4775 ns       9550 ns      72606      4    10     6     2
BM_UserCounter/threads:4      2508 ns       9951 ns      70332      8    20    12     4
BM_UserCounter/threads:8      2055 ns       9933 ns      70344     16    40    24     8
BM_UserCounter/threads:16     1610 ns       9946 ns      70720     32    80    48    16
BM_UserCounter/threads:32     1192 ns       9948 ns      70496     64   160    96    32
BM_UserCounter/threads:4      2506 ns       9949 ns      70332      8    20    12     4
--------------------------------------------------------------
Benchmark                        Time           CPU Iterations
--------------------------------------------------------------
BM_Factorial                    26 ns         26 ns   26392245 40320
BM_Factorial/real_time          26 ns         26 ns   26494107 40320
BM_CalculatePiRange/1           15 ns         15 ns   45571597 0
BM_CalculatePiRange/8           74 ns         74 ns    9450212 3.28374
BM_CalculatePiRange/64         595 ns        595 ns    1173901 3.15746
BM_CalculatePiRange/512       4752 ns       4752 ns     147380 3.14355
BM_CalculatePiRange/4k       37970 ns      37972 ns      18453 3.14184
BM_CalculatePiRange/32k     303733 ns     303744 ns       2305 3.14162
BM_CalculatePiRange/256k   2434095 ns    2434186 ns        288 3.1416
BM_CalculatePiRange/1024k  9721140 ns    9721413 ns         71 3.14159
BM_CalculatePi/threads:8      2255 ns       9943 ns      70936
909 ```
910
911 Note above the additional header printed when the benchmark changes from
912 ``BM_UserCounter`` to ``BM_Factorial``. This is because ``BM_Factorial`` does
913 not have the same counter set as ``BM_UserCounter``.
914
915 <a name="multithreaded-benchmarks"/>
916
917 ### Multithreaded Benchmarks
918
919 In a multithreaded test (benchmark invoked by multiple threads simultaneously),
920 it is guaranteed that none of the threads will start until all have reached
921 the start of the benchmark loop, and all will have finished before any thread
exits the benchmark loop. (This behavior is also provided by the `KeepRunning()`
API.) As such, any global setup or teardown can be wrapped in a check against
the thread index:
925
926 ```c++
static void BM_MultiThreaded(benchmark::State& state) {
  if (state.thread_index == 0) {
    // Setup code here.
  }
  for (auto _ : state) {
    // Run the test as normal.
  }
  if (state.thread_index == 0) {
    // Teardown code here.
  }
}
938 BENCHMARK(BM_MultiThreaded)->Threads(2);
939 ```
940
941 If the benchmarked code itself uses threads and you want to compare it to
942 single-threaded code, you may want to use real-time ("wallclock") measurements
943 for latency comparisons:
944
945 ```c++
946 BENCHMARK(BM_test)->Range(8, 8<<10)->UseRealTime();
947 ```
948
949 Without `UseRealTime`, CPU time is used by default.
950
951 <a name="cpu-timers" />
952
953 ### CPU Timers
954
955 By default, the CPU timer only measures the time spent by the main thread.
956 If the benchmark itself uses threads internally, this measurement may not
957 be what you are looking for. Instead, there is a way to measure the total
958 CPU usage of the process, by all the threads.
959
960 ```c++
void callee(int i);

static void MyMain(int size) {
#pragma omp parallel for
  for (int i = 0; i < size; i++)
    callee(i);
}

static void BM_OpenMP(benchmark::State& state) {
  for (auto _ : state)
    MyMain(state.range(0));
}
973
// Measure the time spent by the main thread and use it to decide for how long
// to run the benchmark loop. Depending on the internal implementation details,
// this may measure anywhere from near-zero (the overhead spent before/after
// work handoff to worker thread[s]) to the whole single-thread time.
978 BENCHMARK(BM_OpenMP)->Range(8, 8<<10);
979
// Measure the user-visible time, the wall clock (literally, the time that
// has passed on the clock on the wall), and use it to decide for how long to
// run the benchmark loop. This will always be meaningful, and will match the
// time spent by the main thread in the single-threaded case, in general
// decreasing with the number of internal threads doing the work.
985 BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->UseRealTime();
986
987 // Measure the total CPU consumption, use it to decide for how long to
988 // run the benchmark loop. This will always measure to no less than the
989 // time spent by the main thread in single-threaded case.
990 BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->MeasureProcessCPUTime();
991
992 // A mixture of the last two. Measure the total CPU consumption, but use the
993 // wall clock to decide for how long to run the benchmark loop.
994 BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->MeasureProcessCPUTime()->UseRealTime();
995 ```
996
997 #### Controlling Timers
998
999 Normally, the entire duration of the work loop (`for (auto _ : state) {}`)
1000 is measured. But sometimes, it is necessary to do some work inside of
1001 that loop, every iteration, but without counting that time to the benchmark time.
1002 That is possible, although it is not recommended, since it has high overhead.
1003
1004 ```c++
static void BM_SetInsert_With_Timer_Control(benchmark::State& state) {
  std::set<int> data;
  for (auto _ : state) {
    state.PauseTiming(); // Stop timers. They will not count until they are resumed.
    data = ConstructRandomSet(state.range(0)); // Do something that should not be measured
    state.ResumeTiming(); // And resume timers. They are now counting again.
    // The rest will be measured.
    for (int j = 0; j < state.range(1); ++j)
      data.insert(RandomNumber());
  }
}
BENCHMARK(BM_SetInsert_With_Timer_Control)->Ranges({{1<<10, 8<<10}, {128, 512}});
1017 ```
1018
1019 <a name="manual-timing" />
1020
1021 ### Manual Timing
1022
For benchmarking something for which neither CPU time nor real-time is
correct or accurate enough, completely manual timing is supported using
the `UseManualTime` function.
1026
1027 When `UseManualTime` is used, the benchmarked code must call
1028 `SetIterationTime` once per iteration of the benchmark loop to
1029 report the manually measured time.
1030
1031 An example use case for this is benchmarking GPU execution (e.g. OpenCL
1032 or CUDA kernels, OpenGL or Vulkan or Direct3D draw calls), which cannot
1033 be accurately measured using CPU time or real-time. Instead, they can be
1034 measured accurately using a dedicated API, and these measurement results
1035 can be reported back with `SetIterationTime`.
1036
1037 ```c++
static void BM_ManualTiming(benchmark::State& state) {
  int microseconds = state.range(0);
  std::chrono::duration<double, std::micro> sleep_duration {
    static_cast<double>(microseconds)
  };

  for (auto _ : state) {
    auto start = std::chrono::high_resolution_clock::now();
    // Simulate some useful workload with a sleep
    std::this_thread::sleep_for(sleep_duration);
    auto end = std::chrono::high_resolution_clock::now();

    auto elapsed_seconds =
      std::chrono::duration_cast<std::chrono::duration<double>>(
        end - start);

    state.SetIterationTime(elapsed_seconds.count());
  }
}
1057 BENCHMARK(BM_ManualTiming)->Range(1, 1<<17)->UseManualTime();
1058 ```
1059
1060 <a name="setting-the-time-unit" />
1061
1062 ### Setting the Time Unit
1063
If a benchmark runs for a few milliseconds it may be hard to visually compare
the measured times, since the output data is given in nanoseconds by default.
To set the time unit manually, specify it when registering the benchmark:
1067
1068 ```c++
1069 BENCHMARK(BM_test)->Unit(benchmark::kMillisecond);
1070 ```
1071
1072 <a name="preventing-optimization" />
1073
1074 ### Preventing Optimization
1075
1076 To prevent a value or expression from being optimized away by the compiler
1077 the `benchmark::DoNotOptimize(...)` and `benchmark::ClobberMemory()`
1078 functions can be used.
1079
1080 ```c++
static void BM_test(benchmark::State& state) {
  for (auto _ : state) {
    int x = 0;
    for (int i = 0; i < 64; ++i) {
      benchmark::DoNotOptimize(x += i);
    }
  }
}
1089 ```
1090
`DoNotOptimize(<expr>)` forces the *result* of `<expr>` to be stored in either
memory or a register. For GNU-based compilers it acts as a read/write barrier
for global memory. More specifically it forces the compiler to flush pending
writes to memory and reload any other values as necessary.
1095
1096 Note that `DoNotOptimize(<expr>)` does not prevent optimizations on `<expr>`
1097 in any way. `<expr>` may even be removed entirely when the result is already
1098 known. For example:
1099
1100 ```c++
1101 /* Example 1: `<expr>` is removed entirely. */
1102 int foo(int x) { return x + 42; }
1103 while (...) DoNotOptimize(foo(0)); // Optimized to DoNotOptimize(42);
1104
1105 /* Example 2: Result of '<expr>' is only reused */
1106 int bar(int) __attribute__((const));
while (...) DoNotOptimize(bar(0)); // Optimized to:
                                   // int __result__ = bar(0);
                                   // while (...) DoNotOptimize(__result__);
1110 ```
1111
1112 The second tool for preventing optimizations is `ClobberMemory()`. In essence
1113 `ClobberMemory()` forces the compiler to perform all pending writes to global
1114 memory. Memory managed by block scope objects must be "escaped" using
1115 `DoNotOptimize(...)` before it can be clobbered. In the below example
1116 `ClobberMemory()` prevents the call to `v.push_back(42)` from being optimized
1117 away.
1118
1119 ```c++
static void BM_vector_push_back(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v;
    v.reserve(1);
    benchmark::DoNotOptimize(v.data()); // Allow v.data() to be clobbered.
    v.push_back(42);
    benchmark::ClobberMemory(); // Force 42 to be written to memory.
  }
}
1129 ```
1130
1131 Note that `ClobberMemory()` is only available for GNU or MSVC based compilers.
1132
1133 <a name="reporting-statistics" />
1134
1135 ### Statistics: Reporting the Mean, Median and Standard Deviation of Repeated Benchmarks
1136
1137 By default each benchmark is run once and that single result is reported.
1138 However benchmarks are often noisy and a single result may not be representative
1139 of the overall behavior. For this reason it's possible to repeatedly rerun the
1140 benchmark.
1141
1142 The number of runs of each benchmark is specified globally by the
1143 `--benchmark_repetitions` flag or on a per benchmark basis by calling
1144 `Repetitions` on the registered benchmark object. When a benchmark is run more
1145 than once the mean, median and standard deviation of the runs will be reported.
1146
1147 Additionally the `--benchmark_report_aggregates_only={true|false}`,
1148 `--benchmark_display_aggregates_only={true|false}` flags or
1149 `ReportAggregatesOnly(bool)`, `DisplayAggregatesOnly(bool)` functions can be
1150 used to change how repeated tests are reported. By default the result of each
repeated run is reported. When the `report aggregates only` option is `true`,
only the aggregates (i.e. mean, median and standard deviation, plus complexity
measurements if they were requested) of the runs are reported, to both
reporters - the standard output (console) and the file.
However, when only the `display aggregates only` option is `true`,
only the aggregates are displayed in the standard output, while the file
output still contains everything.
1158 Calling `ReportAggregatesOnly(bool)` / `DisplayAggregatesOnly(bool)` on a
1159 registered benchmark object overrides the value of the appropriate flag for that
1160 benchmark.
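
A minimal sketch of such per-benchmark, registration-time configuration follows;
the `BM_Noisy` name and the repetition count of 10 are arbitrary choices made
for illustration.

```c++
static void BM_Noisy(benchmark::State& state) {
  for (auto _ : state) {
    // ... code whose timing varies from run to run ...
  }
}
// Run the whole benchmark 10 times and only report the aggregates
// (mean, median, standard deviation) instead of every repetition.
BENCHMARK(BM_Noisy)->Repetitions(10)->ReportAggregatesOnly(true);
```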
1161
1162 <a name="custom-statistics" />
1163
1164 ### Custom Statistics
1165
1166 While having mean, median and standard deviation is nice, this may not be
1167 enough for everyone. For example you may want to know what the largest
1168 observation is, e.g. because you have some real-time constraints. This is easy.
1169 The following code will specify a custom statistic to be calculated, defined
1170 by a lambda function.
1171
1172 ```c++
void BM_spin_empty(benchmark::State& state) {
  for (auto _ : state) {
    for (int x = 0; x < state.range(0); ++x) {
      benchmark::DoNotOptimize(x);
    }
  }
}

BENCHMARK(BM_spin_empty)
  ->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
    return *(std::max_element(std::begin(v), std::end(v)));
  })
  ->Arg(512);
1186 ```
1187
1188 <a name="using-register-benchmark" />
1189
1190 ### Using RegisterBenchmark(name, fn, args...)
1191
1192 The `RegisterBenchmark(name, func, args...)` function provides an alternative
1193 way to create and register benchmarks.
1194 `RegisterBenchmark(name, func, args...)` creates, registers, and returns a
1195 pointer to a new benchmark with the specified `name` that invokes
1196 `func(st, args...)` where `st` is a `benchmark::State` object.
1197
Unlike the `BENCHMARK` registration macros, which can only be used at global
scope, `RegisterBenchmark` can be called anywhere. This allows for benchmark
tests to be registered programmatically.

Additionally, `RegisterBenchmark` allows any callable object to be registered
as a benchmark, including capturing lambdas and function objects.
1204
1205 For Example:
1206 ```c++
1207 auto BM_test = [](benchmark::State& st, auto Inputs) { /* ... */ };
1208
int main(int argc, char** argv) {
  for (auto& test_input : { /* ... */ })
    benchmark::RegisterBenchmark(test_input.name(), BM_test, test_input);
  benchmark::Initialize(&argc, argv);
  benchmark::RunSpecifiedBenchmarks();
}
1215 ```
1216
1217 <a name="exiting-with-an-error" />
1218
1219 ### Exiting with an Error
1220
When errors caused by external influences, such as file I/O and network
communication, occur within a benchmark, the
`State::SkipWithError(const char* msg)` function can be used to skip that run
of the benchmark and report the error. Note that only future iterations of the
`KeepRunning()` loop are skipped. For the ranged-for version of the benchmark
loop, users must explicitly exit the loop, otherwise all iterations will be
performed. Users may explicitly return to exit the benchmark immediately.
1228
1229 The `SkipWithError(...)` function may be used at any point within the benchmark,
1230 including before and after the benchmark loop. Moreover, if `SkipWithError(...)`
1231 has been used, it is not required to reach the benchmark loop and one may return
1232 from the benchmark function early.
1233
1234 For example:
1235
1236 ```c++
static void BM_test(benchmark::State& state) {
  auto resource = GetResource();
  if (!resource.good()) {
    state.SkipWithError("Resource is not good!");
    // KeepRunning() loop will not be entered.
  }
  while (state.KeepRunning()) {
    auto data = resource.read_data();
    if (!resource.good()) {
      state.SkipWithError("Failed to read data!");
      break; // Needed to skip the rest of the iteration.
    }
    do_stuff(data);
  }
}

static void BM_test_ranged_fo(benchmark::State & state) {
  auto resource = GetResource();
  if (!resource.good()) {
    state.SkipWithError("Resource is not good!");
    return; // Early return is allowed when SkipWithError() has been used.
  }
  for (auto _ : state) {
    auto data = resource.read_data();
    if (!resource.good()) {
      state.SkipWithError("Failed to read data!");
      break; // REQUIRED to prevent all further iterations.
    }
    do_stuff(data);
  }
}
1268 ```
1269 <a name="a-faster-keep-running-loop" />
1270
1271 ### A Faster KeepRunning Loop
1272
In C++11 mode, a range-based for loop should be used in preference to
the `KeepRunning` loop for running the benchmarks. For example:
1275
1276 ```c++
static void BM_Fast(benchmark::State &state) {
  for (auto _ : state) {
    FastOperation();
  }
}
BENCHMARK(BM_Fast);
1283 ```
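
For comparison, a sketch of the same benchmark written against the older
`KeepRunning` interface (the `BM_Slow` name is ours, chosen only to make the
discussion below concrete) would look like this:

```c++
static void BM_Slow(benchmark::State &state) {
  while (state.KeepRunning()) {
    FastOperation();
  }
}
BENCHMARK(BM_Slow);
```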
1284
The reason the ranged-for loop is faster than using `KeepRunning` is
because `KeepRunning` requires a memory load and store of the iteration count
every iteration, whereas the ranged-for variant is able to keep the iteration
count in a register.
1289
For example, an empty inner loop using the range-based for method looks like:
1291
1292 ```asm
# Loop Init
  mov rbx, qword ptr [r14 + 104]
  call benchmark::State::StartKeepRunning()
  test rbx, rbx
  je .LoopEnd
.LoopHeader: # =>This Inner Loop Header: Depth=1
  add rbx, -1
  jne .LoopHeader
.LoopEnd:
1302 ```
1303
1304 Compared to an empty `KeepRunning` loop, which looks like:
1305
1306 ```asm
.LoopHeader: # in Loop: Header=BB0_3 Depth=1
  cmp byte ptr [rbx], 1
  jne .LoopInit
.LoopBody: # =>This Inner Loop Header: Depth=1
  mov rax, qword ptr [rbx + 8]
  lea rcx, [rax + 1]
  mov qword ptr [rbx + 8], rcx
  cmp rax, qword ptr [rbx + 104]
  jb .LoopHeader
  jmp .LoopEnd
.LoopInit:
  mov rdi, rbx
  call benchmark::State::StartKeepRunning()
  jmp .LoopBody
.LoopEnd:
1322 ```
1323
1324 Unless C++03 compatibility is required, the ranged-for variant of writing
1325 the benchmark loop should be preferred.
1326
1327 <a name="disabling-cpu-frequency-scaling" />
1328
1329 ### Disabling CPU Frequency Scaling
1330
1331 If you see this error:
1332
1333 ```
1334 ***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
1335 ```
1336
1337 you might want to disable the CPU frequency scaling while running the benchmark:
1338
1339 ```bash
1340 sudo cpupower frequency-set --governor performance
1341 ./mybench
1342 sudo cpupower frequency-set --governor powersave
1343 ```