:orphan:

.. _osdmaptool:

======================================================
 osdmaptool -- ceph osd cluster map manipulation tool
======================================================

.. program:: osdmaptool

Synopsis
========

| **osdmaptool** *mapfilename* [--print] [--createsimple *numosd*
  [--pg-bits *bitsperosd*]] [--clobber]


Description
===========

**osdmaptool** is a utility that lets you create, view, and manipulate
OSD cluster maps from the Ceph distributed storage system. Notably, it
lets you extract the embedded CRUSH map or import a new CRUSH map.


Options
=======

.. option:: --print

   will simply make the tool print a plaintext dump of the map, after
   any modifications are made.

.. option:: --dump <format>

   displays the map in plain text when <format> is 'plain'; 'json' is
   used if the specified format is not supported. This is an
   alternative to the --print option.

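   For example, to dump the map as JSON (``osdmap`` here is the file
   created in the Example section below)::

      osdmaptool osdmap --dump json
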
.. option:: --clobber

   will allow osdmaptool to overwrite mapfilename if changes are made.

.. option:: --import-crush mapfile

   will load the CRUSH map from mapfile and embed it in the OSD map.

.. option:: --export-crush mapfile

   will extract the CRUSH map from the OSD map and write it to
   mapfile.

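   Together, these two options allow a round trip through
   :doc:`crushtool <crushtool>`\(8). A minimal sketch (the file names
   ``crush.bin`` and ``cm.txt`` are illustrative)::

      osdmaptool osdmap --export-crush crush.bin
      crushtool -d crush.bin -o cm.txt   # decompile to editable text
      crushtool -c cm.txt -o crush.bin   # recompile after editing
      osdmaptool osdmap --import-crush crush.bin --clobber
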
.. option:: --createsimple numosd [--pg-bits bitsperosd] [--pgp-bits bits]

   will create a relatively generic OSD map with the numosd devices.
   If --pg-bits is specified, the initial placement group counts will
   be set with bitsperosd bits per OSD. That is, the pg_num map
   attribute will be set to numosd shifted left by bitsperosd.
   If --pgp-bits is specified, then the pgp_num map attribute will be
   set to numosd shifted left by bits.

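   For example, given the shift semantics above, 16 OSDs with 6 bits
   per OSD should yield pg_num = 16 << 6 = 1024 (a worked sketch, not
   output copied from the tool)::

      osdmaptool --createsimple 16 --pg-bits 6 --clobber osdmap
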
.. option:: --create-from-conf

   creates an OSD map with default configurations.

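   A minimal sketch, assuming the standard Ceph ``-c`` option is used
   to point the tool at a configuration file::

      osdmaptool --create-from-conf --clobber osdmap -c /etc/ceph/ceph.conf
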
.. option:: --test-map-pgs [--pool poolid] [--range-first <first> --range-last <last>]

   will print out the mappings from placement groups to OSDs.
   If a range is specified, the tool iterates over the files named
   first through last in the directory given as the argument to
   osdmaptool. E.g. **osdmaptool --test-map-pgs --range-first 0
   --range-last 2 osdmap_dir** iterates through the files named 0, 1,
   and 2 in osdmap_dir.

.. option:: --test-map-pgs-dump [--pool poolid] [--range-first <first> --range-last <last>]

   will print out a summary of all placement groups and their mappings
   to the mapped OSDs.
   If a range is specified, the tool iterates over the files named
   first through last in the directory given as the argument to
   osdmaptool. E.g. **osdmaptool --test-map-pgs-dump --range-first 0
   --range-last 2 osdmap_dir** iterates through the files named 0, 1,
   and 2 in osdmap_dir.

.. option:: --test-map-pgs-dump-all [--pool poolid] [--range-first <first> --range-last <last>]

   will print out a summary of all placement groups and their mappings
   to all of the OSDs.
   If a range is specified, the tool iterates over the files named
   first through last in the directory given as the argument to
   osdmaptool. E.g. **osdmaptool --test-map-pgs-dump-all --range-first 0
   --range-last 2 osdmap_dir** iterates through the files named 0, 1,
   and 2 in osdmap_dir.

.. option:: --test-random

   does a random mapping of placement groups to the OSDs.

.. option:: --test-map-pg <pgid>

   maps a particular placement group (specified by pgid) to the OSDs.

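   For example, to see where placement group 0.7 from the sample
   output in the Example section maps::

      osdmaptool osdmap --test-map-pg 0.7
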
.. option:: --test-map-object <objectname> [--pool <poolid>]

   maps a particular object (specified by objectname) to its placement
   group and OSDs.

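   For example (the object name ``foo`` is only a placeholder)::

      osdmaptool osdmap --test-map-object foo --pool 0
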
.. option:: --test-crush [--range-first <first> --range-last <last>]

   maps placement groups to acting OSDs.
   If a range is specified, the tool iterates over the files named
   first through last in the directory given as the argument to
   osdmaptool. E.g. **osdmaptool --test-crush --range-first 0
   --range-last 2 osdmap_dir** iterates through the files named 0, 1,
   and 2 in osdmap_dir.

.. option:: --mark-up-in

   marks OSDs up and in (but does not persist the change).

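   A common pairing, sketched under the assumption that a freshly
   created map has no OSDs up or in yet, so mapping tests would
   otherwise return empty sets::

      osdmaptool osdmap --mark-up-in --test-map-pgs --pool 0
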
.. option:: --tree

   displays a hierarchical tree of the map.

.. option:: --clear-temp

   clears pg_temp and primary_temp variables.

Example
=======

To create a simple map with 16 devices::

  osdmaptool --createsimple 16 osdmap --clobber

To view the result::

  osdmaptool --print osdmap

To view the mappings of placement groups for pool 0::

  osdmaptool --test-map-pgs-dump osdmap --pool 0

  pool 0 pg_num 8
  0.0    [0,2,1]  0
  0.1    [2,0,1]  2
  0.2    [0,1,2]  0
  0.3    [2,0,1]  2
  0.4    [0,2,1]  0
  0.5    [0,2,1]  0
  0.6    [0,1,2]  0
  0.7    [1,0,2]  1
  #osd   count  first  primary  c wt  wt
  osd.0  8      5      5        1     1
  osd.1  8      1      1        1     1
  osd.2  8      2      2        1     1
  in 3
  avg 8 stddev 0 (0x) (expected 2.3094 0.288675x))
  min osd.0 8
  max osd.0 8
  size 0  0
  size 1  0
  size 2  0
  size 3  8

In this output:

#. pool 0 has 8 placement groups, and two tables follow.
#. The first table lists the placement groups. Each row represents a
   placement group, with columns of:

   * placement group id,
   * acting set, and
   * primary OSD.
#. The second table lists all OSDs. Each row represents an OSD, with
   columns of:

   * count of placement groups mapped to this OSD,
   * count of placement groups where this OSD is the first in their acting sets,
   * count of placement groups where this OSD is the primary,
   * the CRUSH weight of this OSD, and
   * the weight of this OSD.
#. Statistics over the number of placement groups held by the 3 OSDs
   (see the sketch after this list):

   * average, stddev, stddev/average, expected stddev, expected stddev/average
   * min and max
#. The number of placement groups mapped to n OSDs. In this case, all 8
   placement groups are mapped to 3 different OSDs.

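The expected figures are consistent with a simple binomial placement
model (an assumption for illustration, not taken from the tool's
source): with N = pg_num * pool size = 8 * 3 = 24 placements and each
of the 3 OSDs equally likely (p = 1/3)::

  expected stddev           = sqrt(N * p * (1 - p))
                            = sqrt(24 * 1/3 * 2/3)
                            = 2.3094 (approximately)
  expected stddev / average = 2.3094 / 8 = 0.288675
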
In a less-balanced cluster, we could have the following output for the
statistics of placement group distribution, whose standard deviation is
1.41421::

  #osd   count  first  primary  c wt       wt
  osd.0  33     9      9        0.0145874  1
  osd.1  34     14     14       0.0145874  1
  osd.2  31     7      7        0.0145874  1
  osd.3  31     13     13       0.0145874  1
  osd.4  30     14     14       0.0145874  1
  osd.5  33     7      7        0.0145874  1
  in 6
  avg 32 stddev 1.41421 (0.0441942x) (expected 5.16398 0.161374x))
  min osd.4 30
  max osd.1 34
  size 0  0
  size 1  0
  size 2  0
  size 3  64


Availability
============

**osdmaptool** is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the Ceph documentation at
http://ceph.com/docs for more information.


See also
========

:doc:`ceph <ceph>`\(8),
:doc:`crushtool <crushtool>`\(8)