The perf histograms build on the perf counters infrastructure. Histograms are built for a number of counters and simplify gathering data on which groups of counter values occur most often over time.
Perf histograms are currently unsigned 64-bit integer counters, so they're mostly useful for time and sizes. Data dumped by perf histogram can then be fed into other analysis tools/scripts.
The perf histogram data are accessed via the admin socket. For example::

    ceph daemon osd.0 perf histogram schema
    ceph daemon osd.0 perf histogram dump
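
Besides the ``ceph daemon`` CLI, the admin socket can be queried directly. The sketch below assumes the usual admin socket framing (a NUL-terminated JSON command, answered by a 4-byte big-endian length followed by the JSON payload) and a default socket path; verify both against your Ceph version before relying on them.

```python
import json
import socket
import struct

def admin_socket_command(path, command):
    """Send one command to a Ceph admin socket and parse the JSON reply.

    Framing is an assumption: the client writes a JSON command terminated
    by a NUL byte; the daemon answers with a 4-byte big-endian payload
    length followed by the payload itself.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(path)
        sock.sendall(json.dumps({"prefix": command}).encode() + b"\0")
        (length,) = struct.unpack(">I", sock.recv(4))
        payload = b""
        while len(payload) < length:
            chunk = sock.recv(length - len(payload))
            if not chunk:
                raise ConnectionError("daemon closed the connection early")
            payload += chunk
        return json.loads(payload)

# Example (requires a running daemon; path is the conventional default):
# dump = admin_socket_command("/var/run/ceph/ceph-osd.0.asok",
#                             "perf histogram dump")
```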
The histograms are grouped into named collections, normally representing a subsystem or an instance of a subsystem. For example, the OSD reports latency and request size statistics for its operations, and each histogram is named something like::

    op_r_latency_out_bytes_histogram
    op_rw_latency_in_bytes_histogram
    op_rw_latency_out_bytes_histogram
The ``perf histogram schema`` command dumps a JSON description of which values are available and what their types are. Each named value has a ``type`` bitfield; for histograms the 5th bit (16) is always set, and the remaining bits are defined as follows.
+------+-------------------------------------+
| bit  | meaning                             |
+======+=====================================+
| 1    | floating point value                |
+------+-------------------------------------+
| 2    | unsigned 64-bit integer value       |
+------+-------------------------------------+
| 4    | average (sum + count pair)          |
+------+-------------------------------------+
| 8    | counter (vs gauge)                  |
+------+-------------------------------------+
In other words, a histogram of type "18" is a histogram of unsigned 64-bit integer values (16 + 2).
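
The bitfield rules above can be sanity-checked with a small helper. The helper name and the "histogram" label for bit 16 are illustrative, following from the type-18 example:

```python
# Bit meanings from the table above; bit 16 marks a histogram value.
TYPE_BITS = {
    1: "floating point value",
    2: "unsigned 64-bit integer value",
    4: "average (sum + count pair)",
    8: "counter (vs gauge)",
    16: "histogram",
}

def decode_type(type_field):
    """List the meanings of the bits set in a perf counter ``type``."""
    return [name for bit, name in TYPE_BITS.items() if type_field & bit]

# decode_type(18) -> ["unsigned 64-bit integer value", "histogram"]
```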
Here is an abbreviated example of the schema output::

  {
      "AsyncMessenger::Worker-0": {},
      "AsyncMessenger::Worker-1": {},
      "AsyncMessenger::Worker-2": {},
      "mutex-WBThrottle::lock": {},
      "osd": {
          "op_r_latency_out_bytes_histogram": {
              "type": 18,
              "description": "Histogram of operation latency (including queue time) + data read"
          },
          "op_w_latency_in_bytes_histogram": {
              "type": 18,
              "description": "Histogram of operation latency (including queue time) + data written"
          },
          "op_rw_latency_in_bytes_histogram": {
              "type": 18,
              "description": "Histogram of rw operation latency (including queue time) + data written"
          },
          "op_rw_latency_out_bytes_histogram": {
              "type": 18,
              "description": "Histogram of rw operation latency (including queue time) + data read"
          }
      }
  }
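
Since the schema is plain JSON, it is easy to post-process. A minimal sketch; the sample input is an abbreviated stand-in for real schema output, and the ``osd`` collection name is assumed:

```python
import json

def histogram_names(schema_text):
    """Group histogram names by collection, given the JSON text printed
    by ``perf histogram schema``; empty collections are skipped."""
    schema = json.loads(schema_text)
    return {collection: sorted(entries)
            for collection, entries in schema.items() if entries}

# Abbreviated, hypothetical stand-in for real schema output:
sample = """{
    "AsyncMessenger::Worker-0": {},
    "osd": {
        "op_r_latency_out_bytes_histogram": {"type": 18},
        "op_w_latency_in_bytes_histogram": {"type": 18}
    }
}"""
```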
The actual dump is similar to the schema, except that it contains the actual value groups. Here is an abbreviated example::

  {
      "osd": {
          "op_r_latency_out_bytes_histogram": {
              "axes": [
                  {
                      "name": "Latency (usec)",
                      "min": 0,
                      "quant_size": 100000,
                      "buckets": 32,
                      "scale_type": "log2",
                      "ranges": [
                          ...
                          {
                              "min": 1677721600000,
                              "max": 3355443199999
                          },
                          {
                              "min": 3355443200000,
                              "max": 6710886399999
                          },
                          {
                              "min": 6710886400000,
                              "max": 13421772799999
                          },
                          {
                              "min": 13421772800000,
                              "max": 26843545599999
                          },
                          {
                              "min": 26843545600000,
                              "max": 53687091199999
                          },
                          {
                              "min": 53687091200000
                          }
                      ]
                  },
                  {
                      "name": "Request size (bytes)",
                      ...
                      "scale_type": "log2",
                      ...
                  }
              ],
              ...
          }
      }
  }
This represents a 2D histogram, consisting of 9 history entries and 32 value groups per history entry.
The ``ranges`` element denotes the value bounds for each value group, ``buckets`` denotes the number of
value groups, ``min`` is the minimum accepted value, ``quant_size`` is the quantization unit, and
``scale_type`` is either ``log2`` (logarithmic scale) or ``linear`` (linear scale).
You can use the ``histogram_dump.py`` tool (see ``src/tools/histogram_dump.py``) for quick visualisation of an existing histogram.
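
The ``log2`` bucket bounds shown in the dump above follow a doubling pattern: each lower bound is ``quant_size`` times a power of two. The sketch below reproduces them under that assumption; it is an inference from the example values, not Ceph's actual implementation:

```python
def log2_ranges(minimum, quant_size, buckets):
    """Reconstruct the value bounds of a log2-scaled histogram axis.

    Bucket 0 catches values below ``minimum``, bucket 1 spans one
    quantization unit, and each later bucket doubles the upper bound.
    The last bucket is open-ended (no "max").
    """
    ranges = [{"max": minimum - 1}]            # below-minimum bucket
    for i in range(1, buckets):
        lo = minimum if i == 1 else minimum + quant_size * 2 ** (i - 2)
        if i == buckets - 1:
            ranges.append({"min": lo})         # open-ended last bucket
        else:
            hi = minimum + quant_size * 2 ** (i - 1) - 1
            ranges.append({"min": lo, "max": hi})
    return ranges

# With min=0, quant_size=100000, buckets=32 this yields the latency-axis
# bounds seen in the dump, e.g. a final bucket of {"min": 53687091200000}.
```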