:orphan:

.. _osdmaptool:

======================================================
 osdmaptool -- ceph osd cluster map manipulation tool
======================================================

.. program:: osdmaptool

Synopsis
========

| **osdmaptool** *mapfilename* [--print] [--createsimple *numosd*
  [--pg-bits *bitsperosd* ] ] [--clobber]
| **osdmaptool** *mapfilename* [--import-crush *crushmap*]
| **osdmaptool** *mapfilename* [--export-crush *crushmap*]
| **osdmaptool** *mapfilename* [--upmap *file*] [--upmap-max *max-optimizations*]
  [--upmap-deviation *max-deviation*] [--upmap-pool *poolname*]
  [--save] [--upmap-active]
| **osdmaptool** *mapfilename* [--upmap-cleanup] [--upmap *file*]


Description
===========

**osdmaptool** is a utility that lets you create, view, and manipulate
OSD cluster maps from the Ceph distributed storage system. Notably, it
lets you extract the embedded CRUSH map or import a new CRUSH map.
It can also simulate the upmap balancer mode so you can get a sense of
what is needed to balance your PGs.


Options
=======

.. option:: --print

   will simply make the tool print a plaintext dump of the map, after
   any modifications are made.

.. option:: --dump <format>

   displays the map in the specified format: plain text when <format> is
   'plain', and 'json' if the specified format is not supported. This is an
   alternative to the print option.

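   For example, to inspect a map as JSON without modifying it (an
   illustrative invocation; ``osdmap`` is the map file name used in the
   Example section below)::

      osdmaptool osdmap --dump json
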
.. option:: --clobber

   will allow osdmaptool to overwrite mapfilename if changes are made.

.. option:: --import-crush mapfile

   will load the CRUSH map from mapfile and embed it in the OSD map.

.. option:: --export-crush mapfile

   will extract the CRUSH map from the OSD map and write it to
   mapfile.

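   A common round trip is to export the CRUSH map, edit it, and import it
   again. A minimal sketch, assuming hypothetical file names and using the
   companion :doc:`crushtool <crushtool>`\(8) utility::

      osdmaptool osdmap --export-crush crush.bin
      crushtool -d crush.bin -o crush.txt     # decompile to editable text
      # ... edit crush.txt ...
      crushtool -c crush.txt -o crush.new     # compile the edited map
      osdmaptool osdmap --import-crush crush.new --clobber
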
.. option:: --createsimple numosd [--pg-bits bitsperosd] [--pgp-bits bits]

   will create a relatively generic OSD map with numosd devices.
   If --pg-bits is specified, the initial placement group counts will
   be set with bitsperosd bits per OSD. That is, the pg_num map
   attribute will be set to numosd shifted by bitsperosd.
   If --pgp-bits is specified, then the pgp_num map attribute will
   be set to numosd shifted by bits.

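   For example, a map created with 16 OSDs and 6 bits per OSD gets
   ``pg_num = 16 << 6 = 1024``. An illustrative invocation (``osdmap`` is an
   arbitrary output file name)::

      osdmaptool osdmap --createsimple 16 --pg-bits 6 --pgp-bits 6 --clobber
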
.. option:: --create-from-conf

   creates an osd map with default configurations.

.. option:: --test-map-pgs [--pool poolid] [--range-first <first> --range-last <last>]

   will print out the mappings from placement groups to OSDs.
   If a range is specified, then it iterates from first to last in the directory
   specified by the argument to osdmaptool.
   E.g.: **osdmaptool --test-map-pgs --range-first 0 --range-last 2 osdmap_dir**.
   This will iterate through the files named 0,1,2 in osdmap_dir.

.. option:: --test-map-pgs-dump [--pool poolid] [--range-first <first> --range-last <last>]

   will print out the summary of all placement groups and the mappings from them to the mapped OSDs.
   If a range is specified, then it iterates from first to last in the directory
   specified by the argument to osdmaptool.
   E.g.: **osdmaptool --test-map-pgs-dump --range-first 0 --range-last 2 osdmap_dir**.
   This will iterate through the files named 0,1,2 in osdmap_dir.

.. option:: --test-map-pgs-dump-all [--pool poolid] [--range-first <first> --range-last <last>]

   will print out the summary of all placement groups and the mappings
   from them to all the OSDs.
   If a range is specified, then it iterates from first to last in the directory
   specified by the argument to osdmaptool.
   E.g.: **osdmaptool --test-map-pgs-dump-all --range-first 0 --range-last 2 osdmap_dir**.
   This will iterate through the files named 0,1,2 in osdmap_dir.

.. option:: --test-random

   does a random mapping of placement groups to the OSDs.

.. option:: --test-map-pg <pgid>

   map a particular placement group (specified by pgid) to the OSDs.

.. option:: --test-map-object <objectname> [--pool <poolid>]

   map a particular object (specified by objectname) to its placement
   group and the OSDs.

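   For example (illustrative invocations; the pgid ``1.0`` and the object
   name ``myobject`` are placeholders)::

      osdmaptool osdmap --test-map-pg 1.0
      osdmaptool osdmap --test-map-object myobject --pool 1
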
.. option:: --test-crush [--range-first <first> --range-last <last>]

   map placement groups to acting OSDs.
   If a range is specified, then it iterates from first to last in the directory
   specified by the argument to osdmaptool.
   E.g.: **osdmaptool --test-crush --range-first 0 --range-last 2 osdmap_dir**.
   This will iterate through the files named 0,1,2 in osdmap_dir.

.. option:: --mark-up-in

   mark osds up and in (but do not persist).

.. option:: --mark-out

   mark an osd as out (but do not persist)

.. option:: --mark-up <osdid>

   mark an osd as up (but do not persist)

.. option:: --mark-in <osdid>

   mark an osd as in (but do not persist)

.. option:: --tree

   Displays a hierarchical tree of the map.

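   For example, to preview the map's hierarchy with every OSD marked up and
   in, without persisting any change (an illustrative invocation)::

      osdmaptool osdmap --mark-up-in --tree
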
.. option:: --clear-temp

   clears pg_temp and primary_temp variables.

.. option:: --clean-temps

   clean pg_temps.

.. option:: --health

   dump health checks

.. option:: --with-default-pool

   include default pool when creating map

.. option:: --upmap-cleanup <file>

   clean up pg_upmap[_items] entries, writing commands to <file> [default: - for stdout]

.. option:: --upmap <file>

   calculate pg upmap entries to balance pg layout, writing commands to
   <file> [default: - for stdout]

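   The generated file contains ``ceph osd pg-upmap-items`` commands, so one
   possible workflow (illustrative; ``upmaps.out`` is an arbitrary file name)
   is to review the file and then apply it to a live cluster::

      osdmaptool osdmap --upmap upmaps.out
      source upmaps.out
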
.. option:: --upmap-max <max-optimizations>

   set max upmap entries to calculate [default: 10]

.. option:: --upmap-deviation <max-deviation>

   max deviation from target [default: 5]

.. option:: --upmap-pool <poolname>

   restrict upmap balancing to a single pool; repeat the option to balance
   multiple pools

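   For example, to balance only two pools (an illustrative invocation; the
   pool names are placeholders)::

      osdmaptool osdmap --upmap upmaps.out --upmap-pool data --upmap-pool metadata
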
.. option:: --upmap-active

   Act like an active balancer, keep applying changes until balanced

.. option:: --adjust-crush-weight <osdid:weight>[,<osdid:weight>,<...>]

   Change CRUSH weight of <osdid>

.. option:: --save

   write modified osdmap with upmap or crush-adjust changes

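   For example, to set the CRUSH weight of osd.0 and osd.1 to 2.0 and write
   the modified map back to the file (an illustrative invocation; the
   weights are placeholders)::

      osdmaptool osdmap --adjust-crush-weight 0:2.0,1:2.0 --save
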
Example
=======

To create a simple map with 16 devices::

   osdmaptool --createsimple 16 osdmap --clobber

To view the result::

   osdmaptool --print osdmap

To view the mappings of placement groups for pool 1::

   osdmaptool osdmap --test-map-pgs-dump --pool 1

   pool 1 pg_num 8
   1.0     [0,2,1] 0
   1.1     [2,0,1] 2
   1.2     [0,1,2] 0
   1.3     [2,0,1] 2
   1.4     [0,2,1] 0
   1.5     [0,2,1] 0
   1.6     [0,1,2] 0
   1.7     [1,0,2] 1
   #osd    count   first   primary c wt    wt
   osd.0   8       5       5       1       1
   osd.1   8       1       1       1       1
   osd.2   8       2       2       1       1
    in 3
    avg 8 stddev 0 (0x) (expected 2.3094 0.288675x))
    min osd.0 8
    max osd.0 8
   size 0  0
   size 1  0
   size 2  0
   size 3  8

In this output:

#. pool 1 has 8 placement groups, and two tables follow:
#. A table for placement groups. Each row presents a placement group, with columns of:

   * placement group id,
   * acting set, and
   * primary OSD.

#. A table for all OSDs. Each row presents an OSD, with columns of:

   * count of placement groups mapped to this OSD,
   * count of placement groups where this OSD is the first one in their acting sets,
   * count of placement groups where this OSD is the primary,
   * the CRUSH weight of this OSD, and
   * the weight of this OSD.

#. Statistics on the number of placement groups held by the 3 OSDs:

   * average, stddev, stddev/average, expected stddev, expected stddev / average
   * min and max

#. The number of placement groups mapping to n OSDs. In this case, all 8 placement
   groups are mapped to 3 different OSDs.

In a less-balanced cluster, we could have the following output for the statistics of
placement group distribution, whose standard deviation is 1.41421::

   #osd    count   first   primary c wt            wt
   osd.0   33      9       9       0.0145874       1
   osd.1   34      14      14      0.0145874       1
   osd.2   31      7       7       0.0145874       1
   osd.3   31      13      13      0.0145874       1
   osd.4   30      14      14      0.0145874       1
   osd.5   33      7       7       0.0145874       1
    in 6
    avg 32 stddev 1.41421 (0.0441942x) (expected 5.16398 0.161374x))
    min osd.4 30
    max osd.1 34
   size 0  0
   size 1  0
   size 2  0
   size 3  64

To simulate the active balancer in upmap mode::

   osdmaptool --upmap upmaps.out --upmap-active --upmap-deviation 6 --upmap-max 11 osdmap

   osdmaptool: osdmap file 'osdmap'
   writing upmap command output to: upmaps.out
   checking for upmap cleanups
   upmap, max-count 11, max deviation 6
   pools movies photos metadata data
   prepared 11/11 changes
   Time elapsed 0.00310404 secs
   pools movies photos metadata data
   prepared 11/11 changes
   Time elapsed 0.00283402 secs
   pools data metadata movies photos
   prepared 11/11 changes
   Time elapsed 0.003122 secs
   pools photos metadata data movies
   prepared 11/11 changes
   Time elapsed 0.00324372 secs
   pools movies metadata data photos
   prepared 1/11 changes
   Time elapsed 0.00222609 secs
   pools data movies photos metadata
   prepared 0/11 changes
   Time elapsed 0.00209916 secs
   Unable to find further optimization, or distribution is already perfect
   osd.0 pgs 41
   osd.1 pgs 42
   osd.2 pgs 42
   osd.3 pgs 41
   osd.4 pgs 46
   osd.5 pgs 39
   osd.6 pgs 39
   osd.7 pgs 43
   osd.8 pgs 41
   osd.9 pgs 46
   osd.10 pgs 46
   osd.11 pgs 46
   osd.12 pgs 46
   osd.13 pgs 41
   osd.14 pgs 40
   osd.15 pgs 40
   osd.16 pgs 39
   osd.17 pgs 46
   osd.18 pgs 46
   osd.19 pgs 39
   osd.20 pgs 42
   Total time elapsed 0.0167765 secs, 5 rounds

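To also write the rebalanced map back to the map file, the same invocation
can be combined with the ``--save`` option described above (an illustrative
variant)::

   osdmaptool osdmap --upmap upmaps.out --upmap-active --save
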

Availability
============

**osdmaptool** is part of Ceph, a massively scalable, open-source, distributed storage system. Please
refer to the Ceph documentation at https://docs.ceph.com for more
information.


See also
========

:doc:`ceph <ceph>`\(8),
:doc:`crushtool <crushtool>`\(8),