:orphan:

======================================================
 osdmaptool -- ceph osd cluster map manipulation tool
======================================================

.. program:: osdmaptool

Synopsis
========

| **osdmaptool** *mapfilename* [--print] [--createsimple *numosd*
  [--pgbits *bitsperosd* ] ] [--clobber]


Description
===========

**osdmaptool** is a utility that lets you create, view, and manipulate
OSD cluster maps from the Ceph distributed storage system. Notably, it
lets you extract the embedded CRUSH map or import a new CRUSH map.


Options
=======

.. option:: --print

   will simply make the tool print a plaintext dump of the map, after
   any modifications are made.

.. option:: --clobber

   will allow osdmaptool to overwrite mapfilename if changes are made.

.. option:: --import-crush mapfile

   will load the CRUSH map from mapfile and embed it in the OSD map.

.. option:: --export-crush mapfile

   will extract the CRUSH map from the OSD map and write it to
   mapfile.

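   For example, one way to edit the embedded CRUSH map is to extract it,
   decompile and edit it with :doc:`crushtool <crushtool>`\(8), and embed
   the recompiled result again (assuming an OSD map file named ``osdmap``;
   the names ``crush.bin`` and ``crush.txt`` are placeholders)::

      osdmaptool osdmap --export-crush crush.bin
      crushtool -d crush.bin -o crush.txt
      # edit crush.txt, then recompile and re-embed it
      crushtool -c crush.txt -o crush.bin
      osdmaptool osdmap --import-crush crush.bin
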
.. option:: --createsimple numosd [--pgbits bitsperosd]

   will create a relatively generic OSD map with numosd devices. If
   --pgbits is specified, the initial placement group counts will be
   set with bitsperosd bits per OSD. That is, the pg_num map attribute
   will be set to numosd shifted left by bitsperosd.

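   For example, creating a map for 16 OSDs with 6 bits per OSD gives
   ``pg_num = 16 << 6 = 1024`` (the output file name ``osdmap`` is just an
   example)::

      osdmaptool --createsimple 16 --pgbits 6 osdmap --clobber
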
.. option:: --test-map-pgs [--pool poolid]

   will print out the mappings from placement groups to OSDs.

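   For example, to limit the report to pool 0 (again assuming an OSD map
   file named ``osdmap``)::

      osdmaptool osdmap --test-map-pgs --pool 0
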
.. option:: --test-map-pgs-dump [--pool poolid]

   will print out the summary of all placement groups and the mappings
   from them to the mapped OSDs.


Example
=======

To create a simple map with 16 devices::

   osdmaptool --createsimple 16 osdmap --clobber

To view the result::

   osdmaptool --print osdmap

To view the mappings of placement groups for pool 0::

   osdmaptool --test-map-pgs-dump rbd --pool 0

   pool 0 pg_num 8
   0.0     [0,2,1] 0
   0.1     [2,0,1] 2
   0.2     [0,1,2] 0
   0.3     [2,0,1] 2
   0.4     [0,2,1] 0
   0.5     [0,2,1] 0
   0.6     [0,1,2] 0
   0.7     [1,0,2] 1
   #osd    count   first   primary c wt    wt
   osd.0   8       5       5       1       1
   osd.1   8       1       1       1       1
   osd.2   8       2       2       1       1
   in 3
   avg 8 stddev 0 (0x) (expected 2.3094 0.288675x))
   min osd.0 8
   max osd.0 8
   size 0  0
   size 1  0
   size 2  0
   size 3  8

In this output:

#. pool 0 has 8 placement groups, and two tables follow.
#. A table for placement groups. Each row represents a placement group, with
   columns of:

   * placement group id,
   * acting set, and
   * primary OSD.

#. A table for all OSDs. Each row represents an OSD, with columns of:

   * count of placement groups mapped to this OSD,
   * count of placement groups where this OSD is the first in their acting sets,
   * count of placement groups where this OSD is the primary,
   * the CRUSH weight of this OSD, and
   * the weight of this OSD.

#. Statistics over the number of placement groups held by the 3 OSDs:

   * average, stddev, stddev/average, expected stddev, and expected stddev/average
   * min and max

#. The number of placement groups mapped to n OSDs. In this case, all 8
   placement groups are mapped to 3 different OSDs.

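The expected stddev appears to be sqrt(average * (1 - 1/N)), where N is the
number of OSDs that are in: sqrt(8 * 2/3) is roughly 2.3094 above, and
sqrt(32 * 5/6) is roughly 5.16398 in the less-balanced example below. The
figures in parentheses are the actual and expected stddev divided by the
average.
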
In a less-balanced cluster, we could have the following output for the
statistics of placement group distribution, whose standard deviation is
1.41421::

   #osd    count   first   primary c wt       wt
   osd.0   33      9       9       0.0145874  1
   osd.1   34      14      14      0.0145874  1
   osd.2   31      7       7       0.0145874  1
   osd.3   31      13      13      0.0145874  1
   osd.4   30      14      14      0.0145874  1
   osd.5   33      7       7       0.0145874  1
   in 6
   avg 32 stddev 1.41421 (0.0441942x) (expected 5.16398 0.161374x))
   min osd.4 30
   max osd.1 34
   size 0  0
   size 1  0
   size 2  0
   size 3  64


Availability
============

**osdmaptool** is part of Ceph, a massively scalable, open-source, distributed
storage system. Please refer to the Ceph documentation at http://ceph.com/docs
for more information.


See also
========

:doc:`ceph <ceph>`\(8),
:doc:`crushtool <crushtool>`\(8),