# perf-tests

`perf-tests` is a simple microbenchmarking framework. Its main purpose is to allow monitoring the impact that code changes have on performance.

## Theory of operation

The framework performs each test in several runs. During a run, the microbenchmark code is executed in a loop and the average time of an iteration is computed. The results shown are the median, the median absolute deviation (mad), and the minimum and maximum values over all runs.

```
single run iterations:  0
single run duration:    1.000s
number of runs:         5

test                       iterations      median         mad         min         max
combined.one_row               745336   691.218ns     0.175ns   689.073ns   696.476ns
combined.single_active           7871    85.271us    76.185ns    85.145us   108.316us
```

`perf-tests` allows limiting either the number of iterations or the duration of each run. In the latter case, an additional dry run is used to estimate how many iterations can be executed in the specified time, and the measured runs are then limited to that number of iterations. This means that there is no overhead caused by timers and that each run consists of the same number of iterations.

### Flags

* `-i <n>` or `--iterations <n>` – limits the number of iterations in each run to no more than `n` (0 for unlimited)
* `-d <t>` or `--duration <t>` – limits the duration of each run to no more than `t` seconds (0 for unlimited)
* `-r <n>` or `--runs <n>` – the number of runs of each test to execute
* `-t <regexs>` or `--tests <regexs>` – executes only tests whose names match any regular expression in the comma-separated list `regexs`
* `--list` – lists all available tests
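
For example, assuming the test binary is named `perf_tests` (the actual name depends on the benchmark being built), each run could be limited to one second, repeated five times, and restricted to the `example` group:

```
$ ./perf_tests --duration 1 --runs 5 --tests 'example\..*'
```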

## Example usage

### Simple test

Performance tests are defined in a manner similar to unit tests. The macro `PERF_TEST(test_group, test_case)` specifies the name of the test and the group it belongs to. A microbenchmark can either return nothing or a future.

The compiler may attempt to optimise away too much of the test logic. A way of preventing this is to pass the final result of all computations to the function `perf_tests::do_not_optimize()`. That function introduces little to no overhead, but forces the compiler to actually compute the value.

```c++
PERF_TEST(example, simple1)
{
    auto v = compute_value();
    perf_tests::do_not_optimize(v);
}

PERF_TEST(example, simple2)
{
    return compute_different_value().then([] (auto v) {
        perf_tests::do_not_optimize(v);
    });
}
```

### Fixtures

As with unit tests, performance tests may benefit from a fixture that sets up a proper environment. Such tests should use the macro `PERF_TEST_F(test_group, test_case)`. The test itself will be a member function of a class derived from the `test_group` class.

The constructor and destructor of a fixture are executed in the context of a Seastar thread, but the actual test logic is not. The same instance of a fixture will be used for multiple iterations of the test.

```c++
class example {
protected:
    data_set _ds1;
    data_set _ds2;
private:
    static data_set prepare_data_set();
public:
    example()
        : _ds1(prepare_data_set())
        , _ds2(prepare_data_set())
    { }
};

PERF_TEST_F(example, fixture1)
{
    auto r = do_something_with(_ds1);
    perf_tests::do_not_optimize(r);
}

PERF_TEST_F(example, fixture2)
{
    auto r = do_something_with(_ds1, _ds2);
    perf_tests::do_not_optimize(r);
}
```

### Custom time measurement

Even with fixtures, it may be necessary to perform some costly initialisation during each iteration. Its impact can be reduced by marking the exact part of the test that should be measured, using the functions `perf_tests::start_measuring_time()` and `perf_tests::stop_measuring_time()`.

```c++
PERF_TEST(example, custom_time_measurement1)
{
    auto data = prepare_data();
    perf_tests::start_measuring_time();
    do_something(std::move(data));
    perf_tests::stop_measuring_time();
}

PERF_TEST(example, custom_time_measurement2)
{
    auto data = prepare_data();
    perf_tests::start_measuring_time();
    return do_something_else(std::move(data)).finally([] {
        perf_tests::stop_measuring_time();
    });
}
```
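
Both mechanisms can be combined: a fixture holds data shared across iterations, while per-iteration setup stays outside the measured region. A minimal sketch, reusing the `example` fixture and `do_something_with()` from above and assuming `data_set` is copyable (the copy stands in for arbitrary per-iteration setup):

```c++
PERF_TEST_F(example, fixture_and_custom_measurement)
{
    // Per-iteration setup; its cost is excluded from the measurement.
    auto ds = _ds1;
    perf_tests::start_measuring_time();
    auto r = do_something_with(ds);
    perf_tests::do_not_optimize(r);
    perf_tests::stop_measuring_time();
}
```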