:orphan:

.. _osdmaptool:

======================================================
 osdmaptool -- ceph osd cluster map manipulation tool
======================================================

.. program:: osdmaptool

Synopsis
========

| **osdmaptool** *mapfilename* [--print] [--createsimple *numosd*
  [--pg-bits *bitsperosd*]] [--clobber]
| **osdmaptool** *mapfilename* [--import-crush *crushmap*]
| **osdmaptool** *mapfilename* [--export-crush *crushmap*]
| **osdmaptool** *mapfilename* [--upmap *file*] [--upmap-max *max-optimizations*]
  [--upmap-deviation *max-deviation*] [--upmap-pool *poolname*]
  [--upmap-save] [--upmap-active]
| **osdmaptool** *mapfilename* [--upmap-cleanup] [--upmap-save]


Description
===========

**osdmaptool** is a utility that lets you create, view, and manipulate
OSD cluster maps from the Ceph distributed storage system. Notably, it
lets you extract the embedded CRUSH map or import a new CRUSH map.
It can also simulate the upmap balancer mode, so you can get a sense of
what is needed to balance your PGs.
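
In a typical workflow, the OSD map is first fetched from a live cluster and then
inspected or modified offline. A minimal sketch, assuming a running cluster and
using ``osdmap`` as a placeholder file name::

    ceph osd getmap -o osdmap      # fetch the current OSD map to a file
    osdmaptool osdmap --print      # inspect it offline
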


Options
=======

.. option:: --print

   will simply make the tool print a plaintext dump of the map, after
   any modifications are made.

.. option:: --dump <format>

   displays the map in plain text when <format> is 'plain'; any other
   (or unsupported) format is dumped as JSON. This is an alternative
   to the --print option.

.. option:: --clobber

   will allow osdmaptool to overwrite mapfilename if changes are made.

.. option:: --import-crush mapfile

   will load the CRUSH map from mapfile and embed it in the OSD map.

.. option:: --export-crush mapfile

   will extract the CRUSH map from the OSD map and write it to
   mapfile.

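The two CRUSH options are commonly combined with :doc:`crushtool <crushtool>`\(8)
to edit a CRUSH map offline. A sketch of the round trip, where ``osdmap``, ``cm``,
and ``cm.txt`` are placeholder file names::

    osdmaptool osdmap --export-crush cm    # extract the binary CRUSH map
    crushtool -d cm -o cm.txt              # decompile to editable text
    # ... edit cm.txt with any text editor ...
    crushtool -c cm.txt -o cm              # recompile
    osdmaptool osdmap --import-crush cm    # embed the edited map again
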
.. option:: --createsimple numosd [--pg-bits bitsperosd] [--pgp-bits bits]

   will create a relatively generic OSD map with the numosd devices.
   If --pg-bits is specified, the initial placement group counts will
   be set with bitsperosd bits per OSD. That is, the pg_num map
   attribute will be set to numosd shifted by bitsperosd.
   If --pgp-bits is specified, then the pgp_num map attribute will
   be set to numosd shifted by bits.

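The shift arithmetic can be checked directly; for example, with 16 OSDs and
``--pg-bits 6`` the initial pg_num would be 16 shifted left by 6, i.e. 1024
(the numbers here are illustrative)::

    # pg_num for --createsimple numosd with --pg-bits bitsperosd:
    #   pg_num = numosd << bitsperosd
    numosd=16
    bitsperosd=6
    pg_num=$((numosd << bitsperosd))
    echo "pg_num=$pg_num"    # pg_num=1024
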
.. option:: --create-from-conf

   creates an osd map with default configurations.

.. option:: --test-map-pgs [--pool poolid] [--range-first <first> --range-last <last>]

   will print out the mappings from placement groups to OSDs.
   If a range is specified, it iterates over the files named first through last
   in the directory given as the argument to osdmaptool.
   For example: **osdmaptool --test-map-pgs --range-first 0 --range-last 2 osdmap_dir**
   iterates through the files named 0, 1, and 2 in osdmap_dir.

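The range form expects a directory of numerically named OSD map files. A minimal
sketch of preparing such a directory; the files created here are empty
placeholders to show only the naming convention, whereas real map files would
come from the cluster (e.g. ``ceph osd getmap <epoch> -o osdmap_dir/<epoch>``)::

    # Create files named 0, 1, 2 as expected by --range-first/--range-last.
    mkdir -p osdmap_dir
    for epoch in 0 1 2; do
        : > "osdmap_dir/$epoch"    # placeholder; real files hold binary maps
    done
    ls osdmap_dir    # 0  1  2
    # osdmaptool --test-map-pgs --range-first 0 --range-last 2 osdmap_dir
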
.. option:: --test-map-pgs-dump [--pool poolid] [--range-first <first> --range-last <last>]

   will print out a summary of all placement groups and the mappings from them
   to their mapped OSDs.
   If a range is specified, it iterates over the files named first through last
   in the directory given as the argument to osdmaptool.
   For example: **osdmaptool --test-map-pgs-dump --range-first 0 --range-last 2 osdmap_dir**
   iterates through the files named 0, 1, and 2 in osdmap_dir.

.. option:: --test-map-pgs-dump-all [--pool poolid] [--range-first <first> --range-last <last>]

   will print out a summary of all placement groups and the mappings
   from them to all the OSDs.
   If a range is specified, it iterates over the files named first through last
   in the directory given as the argument to osdmaptool.
   For example: **osdmaptool --test-map-pgs-dump-all --range-first 0 --range-last 2 osdmap_dir**
   iterates through the files named 0, 1, and 2 in osdmap_dir.

.. option:: --test-random

   does a random mapping of placement groups to the OSDs.

.. option:: --test-map-pg <pgid>

   maps a particular placement group (specified by pgid) to the OSDs.

.. option:: --test-map-object <objectname> [--pool <poolid>]

   maps a particular object (specified by objectname) to its placement
   group and the OSDs.

.. option:: --test-crush [--range-first <first> --range-last <last>]

   maps placement groups to the acting OSDs.
   If a range is specified, it iterates over the files named first through last
   in the directory given as the argument to osdmaptool.
   For example: **osdmaptool --test-crush --range-first 0 --range-last 2 osdmap_dir**
   iterates through the files named 0, 1, and 2 in osdmap_dir.

.. option:: --mark-up-in

   marks OSDs up and in (but does not persist the change).

.. option:: --mark-out

   marks an OSD as out (but does not persist the change).

.. option:: --tree

   displays a hierarchical tree of the map.

.. option:: --clear-temp

   clears pg_temp and primary_temp variables.

.. option:: --health

   dumps health checks.

.. option:: --with-default-pool

   includes the default pool when creating the map.

.. option:: --upmap-cleanup <file>

   cleans up pg_upmap[_items] entries, writing commands to <file> [default: - for stdout].

.. option:: --upmap <file>

   calculates pg upmap entries to balance the pg layout, writing commands to
   <file> [default: - for stdout].

.. option:: --upmap-max <max-optimizations>

   sets the maximum number of upmap entries to calculate [default: 10].

.. option:: --upmap-deviation <max-deviation>

   maximum deviation from the target [default: 5].

.. option:: --upmap-pool <poolname>

   restricts upmap balancing to one pool; the option can be repeated for
   multiple pools.

.. option:: --upmap-save

   writes the modified OSDMap with the upmap changes.

.. option:: --upmap-active

   acts like an active balancer, applying changes until the cluster is balanced.


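The file written by --upmap contains ordinary ceph CLI commands that can be
reviewed before being applied to a cluster. A sketch of such a review; the
``upmaps.out`` contents below are fabricated for illustration (a real file is
produced by ``osdmaptool osdmap --upmap upmaps.out``)::

    # Fabricated stand-in for an --upmap output file.
    cat > upmaps.out <<'EOF'
    ceph osd pg-upmap-items 1.7 0 3
    ceph osd pg-upmap-items 1.2 2 5
    EOF
    grep -c pg-upmap-items upmaps.out    # number of proposed optimizations
    # After review, apply against a live cluster with: sh upmaps.out
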
Example
=======

To create a simple map with 16 devices::

    osdmaptool --createsimple 16 osdmap --clobber

To view the result::

    osdmaptool --print osdmap

To view the mappings of placement groups for pool 1::

    osdmaptool osdmap --test-map-pgs-dump --pool 1

    pool 1 pg_num 8
    1.0 [0,2,1] 0
    1.1 [2,0,1] 2
    1.2 [0,1,2] 0
    1.3 [2,0,1] 2
    1.4 [0,2,1] 0
    1.5 [0,2,1] 0
    1.6 [0,1,2] 0
    1.7 [1,0,2] 1
    #osd count first primary c wt wt
    osd.0 8 5 5 1 1
    osd.1 8 1 1 1 1
    osd.2 8 2 2 1 1
    in 3
    avg 8 stddev 0 (0x) (expected 2.3094 0.288675x))
    min osd.0 8
    max osd.0 8
    size 0 0
    size 1 0
    size 2 0
    size 3 8

In which,

#. pool 1 has 8 placement groups, and two tables follow:
#. a table for placement groups, where each row presents a placement group, with columns of:

   * placement group id,
   * acting set, and
   * primary OSD.
#. a table for all OSDs, where each row presents an OSD, with columns of:

   * count of placement groups mapped to this OSD,
   * count of placement groups where this OSD is the first in their acting sets,
   * count of placement groups where this OSD is the primary,
   * the CRUSH weight of this OSD, and
   * the weight of this OSD.
#. statistics on the number of placement groups held by the 3 OSDs:

   * average, stddev, stddev/average, expected stddev, expected stddev/average, and
   * min and max.
#. the number of placement groups mapping to n OSDs. In this case, all 8 placement
   groups map to 3 different OSDs.

In a less-balanced cluster, we could have the following output for the statistics of
placement group distribution, whose standard deviation is 1.41421::

    #osd count first primary c wt wt
    osd.0 33 9 9 0.0145874 1
    osd.1 34 14 14 0.0145874 1
    osd.2 31 7 7 0.0145874 1
    osd.3 31 13 13 0.0145874 1
    osd.4 30 14 14 0.0145874 1
    osd.5 33 7 7 0.0145874 1
    in 6
    avg 32 stddev 1.41421 (0.0441942x) (expected 5.16398 0.161374x))
    min osd.4 30
    max osd.1 34
    size 0 0
    size 1 0
    size 2 0
    size 3 64

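The avg/stddev line can be reproduced from the count column. For the six OSD
counts above (33, 34, 31, 31, 30, 33), the average is 32 and the population
standard deviation is about 1.41421, matching the tool's report::

    # Recompute avg and stddev of the PG counts shown above.
    printf '%s\n' 33 34 31 31 30 33 |
    awk '{ s += $1; ss += $1 * $1; n++ }
         END {
             avg = s / n
             stddev = sqrt(ss / n - avg * avg)
             printf "avg %g stddev %g\n", avg, stddev
         }'
    # avg 32 stddev 1.41421
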
To simulate the active balancer in upmap mode::

    osdmaptool --upmap upmaps.out --upmap-active --upmap-deviation 6 --upmap-max 11 osdmap

    osdmaptool: osdmap file 'osdmap'
    writing upmap command output to: upmaps.out
    checking for upmap cleanups
    upmap, max-count 11, max deviation 6
    pools movies photos metadata data
    prepared 11/11 changes
    Time elapsed 0.00310404 secs
    pools movies photos metadata data
    prepared 11/11 changes
    Time elapsed 0.00283402 secs
    pools data metadata movies photos
    prepared 11/11 changes
    Time elapsed 0.003122 secs
    pools photos metadata data movies
    prepared 11/11 changes
    Time elapsed 0.00324372 secs
    pools movies metadata data photos
    prepared 1/11 changes
    Time elapsed 0.00222609 secs
    pools data movies photos metadata
    prepared 0/11 changes
    Time elapsed 0.00209916 secs
    Unable to find further optimization, or distribution is already perfect
    osd.0 pgs 41
    osd.1 pgs 42
    osd.2 pgs 42
    osd.3 pgs 41
    osd.4 pgs 46
    osd.5 pgs 39
    osd.6 pgs 39
    osd.7 pgs 43
    osd.8 pgs 41
    osd.9 pgs 46
    osd.10 pgs 46
    osd.11 pgs 46
    osd.12 pgs 46
    osd.13 pgs 41
    osd.14 pgs 40
    osd.15 pgs 40
    osd.16 pgs 39
    osd.17 pgs 46
    osd.18 pgs 46
    osd.19 pgs 39
    osd.20 pgs 42
    Total time elapsed 0.0167765 secs, 5 rounds


Availability
============

**osdmaptool** is part of Ceph, a massively scalable, open-source, distributed storage system. Please
refer to the Ceph documentation at http://ceph.com/docs for more
information.


See also
========

:doc:`ceph <ceph>`\(8),
:doc:`crushtool <crushtool>`\(8)