.. index:: control, commands

==================
 Control Commands
==================


Monitor Commands
================

Monitor commands are issued using the ``ceph`` utility:

.. prompt:: bash $

   ceph [-m monhost] {command}

The command is usually (though not always) of the form:

.. prompt:: bash $

   ceph {subsystem} {command}


System Commands
===============

Execute the following to display the current cluster status:

.. prompt:: bash $

   ceph -s
   ceph status

Execute the following to display a running summary of cluster status
and major events:

.. prompt:: bash $

   ceph -w

Execute the following to show the monitor quorum, including which monitors are
participating and which one is the leader:

.. prompt:: bash $

   ceph mon stat
   ceph quorum_status

Execute the following to query the status of a single monitor, including whether
or not it is in the quorum:

.. prompt:: bash $

   ceph tell mon.[id] mon_status

where the value of ``[id]`` can be determined, e.g., from ``ceph -s``.


Authentication Subsystem
========================

To add a keyring for an OSD, execute the following:

.. prompt:: bash $

   ceph auth add {osd} {--in-file|-i} {path-to-osd-keyring}

To list the cluster's keys and their capabilities, execute the following:

.. prompt:: bash $

   ceph auth ls


Placement Group Subsystem
=========================

To display the statistics for all placement groups (PGs), execute the following:

.. prompt:: bash $

   ceph pg dump [--format {format}]

The valid formats are ``plain`` (default), ``json``, ``json-pretty``, ``xml``, and ``xml-pretty``.
When implementing monitoring and other tools, it is best to use the ``json`` format.
JSON parsing is more deterministic than parsing the human-oriented ``plain`` output, and the
layout is much less variable from release to release. The ``jq`` utility can be invaluable
when extracting data from JSON output.
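
For example, a monitoring script might count the PGs that are not fully clean.
A minimal sketch in Python (the sample document and field names below are
illustrative; the exact schema of ``ceph pg dump --format json`` varies by
release, so inspect your cluster's output first):

```python
import json

# Illustrative sample of `ceph pg dump --format json` output. The real
# schema varies by release; treat these field names as assumptions.
sample = '''
{
  "pg_map": {
    "pg_stats": [
      {"pgid": "1.0", "state": "active+clean"},
      {"pgid": "1.1", "state": "active+undersized+degraded"},
      {"pgid": "1.2", "state": "active+clean"}
    ]
  }
}
'''

stats = json.loads(sample)["pg_map"]["pg_stats"]
# List the PGs that are not clean, the kind of summary a monitoring
# tool might alert on.
not_clean = [pg["pgid"] for pg in stats if "clean" not in pg["state"]]
print(not_clean)  # ['1.1']
```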

To display the statistics for all placement groups stuck in a specified state,
execute the following:

.. prompt:: bash $

   ceph pg dump_stuck inactive|unclean|stale|undersized|degraded [--format {format}] [-t|--threshold {seconds}]

``--format`` may be ``plain`` (default), ``json``, ``json-pretty``, ``xml``, or ``xml-pretty``.

``--threshold`` defines how many seconds "stuck" means (default: 300).

**Inactive** Placement groups cannot process reads or writes because they are waiting for an OSD
with the most up-to-date data to come back.

**Unclean** Placement groups contain objects that are not replicated the desired number
of times. They should be recovering.

**Stale** Placement groups are in an unknown state: the OSDs that host them have not
reported to the monitor cluster in a while (as configured by
``mon_osd_report_timeout``).

Revert "lost" objects to their prior state (a previous version), or delete them
if they were just created:

.. prompt:: bash $

   ceph pg {pgid} mark_unfound_lost revert|delete


.. _osd-subsystem:

OSD Subsystem
=============

Query OSD subsystem status:

.. prompt:: bash $

   ceph osd stat

Write a copy of the most recent OSD map to a file. See
:ref:`osdmaptool <osdmaptool>`:

.. prompt:: bash $

   ceph osd getmap -o file

Write a copy of the CRUSH map from the most recent OSD map to a
file:

.. prompt:: bash $

   ceph osd getcrushmap -o file

The foregoing is functionally equivalent to:

.. prompt:: bash $

   ceph osd getmap -o /tmp/osdmap
   osdmaptool /tmp/osdmap --export-crush file

Dump the OSD map. Valid formats for ``-f`` are ``plain``, ``json``, ``json-pretty``,
``xml``, and ``xml-pretty``. If no ``--format`` option is given, the OSD map is
dumped as plain text. As above, JSON format is best for tools, scripting, and other automation:

.. prompt:: bash $

   ceph osd dump [--format {format}]

Dump the OSD map as a tree with one line per OSD containing weight
and state:

.. prompt:: bash $

   ceph osd tree [--format {format}]

Find out where a specific object is or would be stored in the system:

.. prompt:: bash $

   ceph osd map <pool-name> <object-name>

Add or move a new item (OSD) with the given id/name/weight at the specified
location:

.. prompt:: bash $

   ceph osd crush set {id} {weight} [{loc1} [{loc2} ...]]

Remove an existing item (OSD) from the CRUSH map:

.. prompt:: bash $

   ceph osd crush remove {name}

Remove an existing bucket from the CRUSH map:

.. prompt:: bash $

   ceph osd crush remove {bucket-name}

Move an existing bucket from one position in the hierarchy to another:

.. prompt:: bash $

   ceph osd crush move {id} {loc1} [{loc2} ...]

Set the weight of the item given by ``{name}`` to ``{weight}``:

.. prompt:: bash $

   ceph osd crush reweight {name} {weight}

Mark an OSD as ``lost``. This may result in permanent data loss. Use with caution:

.. prompt:: bash $

   ceph osd lost {id} [--yes-i-really-mean-it]

Create a new OSD. If no UUID is given, it will be set automatically when the OSD
starts up:

.. prompt:: bash $

   ceph osd create [{uuid}]

Remove the given OSD(s):

.. prompt:: bash $

   ceph osd rm [{id}...]

Query the current ``max_osd`` parameter in the OSD map:

.. prompt:: bash $

   ceph osd getmaxosd

Import the given CRUSH map:

.. prompt:: bash $

   ceph osd setcrushmap -i file

Set the ``max_osd`` parameter in the OSD map. This defaults to 10000 now, so
most admins will never need to adjust it:

.. prompt:: bash $

   ceph osd setmaxosd

Mark OSD ``{osd-num}`` down:

.. prompt:: bash $

   ceph osd down {osd-num}

Mark OSD ``{osd-num}`` out of the distribution (i.e. allocated no data):

.. prompt:: bash $

   ceph osd out {osd-num}

Mark ``{osd-num}`` in the distribution (i.e. allocated data):

.. prompt:: bash $

   ceph osd in {osd-num}

Set or clear the pause flags in the OSD map. If set, no IO requests
will be sent to any OSD. Clearing the flags via unpause results in
resending pending requests:

.. prompt:: bash $

   ceph osd pause
   ceph osd unpause

Set the override weight (reweight) of ``{osd-num}`` to ``{weight}``. Two OSDs with the
same weight will receive roughly the same number of I/O requests and
store approximately the same amount of data. ``ceph osd reweight``
sets an override weight on the OSD. This value is in the range 0 to 1,
and forces CRUSH to re-place (1 - weight) of the data that would
otherwise live on this drive. It does not change the weights assigned
to the buckets above the OSD in the CRUSH map, and is a corrective
measure in case the normal CRUSH distribution is not working out quite
right. For instance, if one of your OSDs is at 90% and the others are
at 50%, you could reduce this weight to compensate:

.. prompt:: bash $

   ceph osd reweight {osd-num} {weight}
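
The arithmetic can be illustrated with a toy model (a deliberate
simplification: real placement is computed by CRUSH, and the numbers here are
made up):

```python
# Toy model of the override (reweight) effect: an override weight w in
# [0, 1] tells CRUSH to re-place a fraction (1 - w) of the data that
# would otherwise map to this OSD. All numbers are illustrative.
def expected_data_share(crush_share, override_weight):
    """Approximate fraction of cluster data left on the OSD."""
    return crush_share * override_weight

# An OSD that CRUSH would give 10% of the data, reweighted to 0.8,
# ends up with about 8%; the other 2% is re-placed elsewhere.
print(round(expected_data_share(0.10, 0.8), 2))  # 0.08
```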

Balance OSD fullness by reducing the override weight of OSDs that are
overly utilized. Note that these override (``reweight``) values
default to 1.00000 and are relative only to each other; they are not absolute.
It is crucial to distinguish them from CRUSH weights, which reflect the
absolute capacity of a bucket in TiB. By default this command adjusts the
override weight of OSDs that are at + or - 20% of the average utilization,
but if you include a ``threshold``, that percentage will be used instead:

.. prompt:: bash $

   ceph osd reweight-by-utilization [threshold [max_change [max_osds]]] [--no-increasing]

To limit the step by which any OSD's reweight will be changed, specify
``max_change``, which defaults to 0.05. To limit the number of OSDs that will
be adjusted, specify ``max_osds`` as well; the default is 4. Increasing these
parameters can speed leveling of OSD utilization, at the potential cost of
greater impact on client operations due to more data moving at once.

To determine which and how many PGs and OSDs will be affected by a given
invocation, you can test before executing:

.. prompt:: bash $

   ceph osd test-reweight-by-utilization [threshold [max_change max_osds]] [--no-increasing]
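
The selection step can be sketched roughly as follows (a simplified model of
the behavior described above; the utilization figures and the helper function
are illustrative, not Ceph code):

```python
# Pick OSDs whose utilization exceeds the cluster average by more than
# `threshold` percent: a simplified model of the OSDs that
# reweight-by-utilization would consider overloaded.
def overloaded_osds(utilization, threshold=20.0):
    avg = sum(utilization.values()) / len(utilization)
    cutoff = avg * (1 + threshold / 100.0)
    return sorted(osd for osd, used in utilization.items() if used > cutoff)

# Illustrative per-OSD utilization fractions.
usage = {"osd.0": 0.90, "osd.1": 0.50, "osd.2": 0.55, "osd.3": 0.52}
print(overloaded_osds(usage))  # ['osd.0']
```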

Adding ``--no-increasing`` to either command prevents increasing any
override weights that are currently < 1.00000. This can be useful when
you are balancing in a hurry to remedy ``full`` or ``nearful`` OSDs or
when some OSDs are being evacuated or slowly brought into service.

Deployments utilizing Nautilus (or later revisions of Luminous and Mimic)
that have no pre-Luminous clients may wish instead to enable the
``balancer`` module for ``ceph-mgr``.

Add or remove an IP address or CIDR range to or from the blocklist.
When adding to the blocklist, you can specify how long it should be
blocklisted in seconds; otherwise, it will default to 1 hour. A
blocklisted address is prevented from connecting to any OSD. If you
blocklist an IP or range containing an OSD, be aware that the OSD will
also be prevented from performing operations on its peers where it
acts as a client. (This includes tiering and copy-from functionality.)

If you want to blocklist a range (in CIDR format), you may do so by
including the ``range`` keyword.

These commands are mostly only useful for failure testing, as
blocklists are normally maintained automatically and shouldn't need
manual intervention:

.. prompt:: bash $

   ceph osd blocklist ["range"] add ADDRESS[:source_port][/netmask_bits] [TIME]
   ceph osd blocklist ["range"] rm ADDRESS[:source_port][/netmask_bits]
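
To check on the client side whether an address falls within a blocklisted CIDR
range, the standard ``ipaddress`` module is enough. This is a hypothetical
helper for illustration, not part of Ceph:

```python
import ipaddress

# Hypothetical helper: does `addr` fall inside any blocklisted CIDR range?
def is_blocklisted(addr, blocklisted_ranges):
    ip = ipaddress.ip_address(addr)
    return any(ip in ipaddress.ip_network(r) for r in blocklisted_ranges)

ranges = ["192.168.1.0/24"]
print(is_blocklisted("192.168.1.42", ranges))  # True
print(is_blocklisted("10.0.0.5", ranges))      # False
```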

Create/delete a snapshot of a pool:

.. prompt:: bash $

   ceph osd pool mksnap {pool-name} {snap-name}
   ceph osd pool rmsnap {pool-name} {snap-name}

Create/delete/rename a storage pool:

.. prompt:: bash $

   ceph osd pool create {pool-name} [pg_num [pgp_num]]
   ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
   ceph osd pool rename {old-name} {new-name}

Change a pool setting:

.. prompt:: bash $

   ceph osd pool set {pool-name} {field} {value}

Valid fields are:

* ``size``: Sets the number of copies of data in the pool.
* ``pg_num``: The placement group number.
* ``pgp_num``: Effective number of placement groups when calculating placement.
* ``crush_rule``: The rule number for mapping placement.

Get the value of a pool setting:

.. prompt:: bash $

   ceph osd pool get {pool-name} {field}

Valid fields are:

* ``pg_num``: The placement group number.
* ``pgp_num``: Effective number of placement groups when calculating placement.
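
When creating a pool you must pick ``pg_num``. A common rule of thumb (not
stated on this page, so treat it as an assumption to verify against current
guidance) targets roughly 100 PGs per OSD divided by the pool's replica count,
rounded up to a power of two:

```python
# Rule-of-thumb pg_num suggestion: ~100 PGs per OSD, divided by the
# pool's replication factor, rounded up to the next power of two.
# This is a heuristic, not an official formula from this page.
def suggest_pg_num(num_osds, pool_size, target_pgs_per_osd=100):
    raw = num_osds * target_pgs_per_osd / pool_size
    power = 1
    while power < raw:
        power *= 2
    return power

# 12 OSDs, 3x replication: 12 * 100 / 3 = 400, rounded up to 512.
print(suggest_pg_num(12, 3))  # 512
```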

Send a scrub command to OSD ``{osd-num}``. To send the command to all OSDs, use ``*``:

.. prompt:: bash $

   ceph osd scrub {osd-num}

Send a repair command to OSD.N. To send the command to all OSDs, use ``*``:

.. prompt:: bash $

   ceph osd repair N

Run a simple throughput benchmark against OSD.N, writing ``TOTAL_DATA_BYTES``
in write requests of ``BYTES_PER_WRITE`` each. By default, the test
writes 1 GB in total in 4-MB increments.
The benchmark is non-destructive and will not overwrite existing live
OSD data, but might temporarily affect the performance of clients
concurrently accessing the OSD:

.. prompt:: bash $

   ceph tell osd.N bench [TOTAL_DATA_BYTES] [BYTES_PER_WRITE]
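
The benchmark reports raw byte counts and timings; turning them into a
throughput figure is simple arithmetic. The field names below are illustrative;
check the output your release actually emits before scripting around it:

```python
# Convert raw benchmark numbers into a human-readable write rate.
# Field names are illustrative of `ceph tell osd.N bench` output;
# verify them against your release before relying on them.
bench_result = {"bytes_written": 1073741824, "elapsed_sec": 4.0}

mb_per_sec = bench_result["bytes_written"] / bench_result["elapsed_sec"] / (1024 * 1024)
print(f"{mb_per_sec:.1f} MB/s")  # 256.0 MB/s
```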

To clear an OSD's caches between benchmark runs, use the ``cache drop`` command:

.. prompt:: bash $

   ceph tell osd.N cache drop

To get the cache statistics of an OSD, use the ``cache status`` command:

.. prompt:: bash $

   ceph tell osd.N cache status


MDS Subsystem
=============

Change configuration parameters on a running mds:

.. prompt:: bash $

   ceph tell mds.{mds-id} config set {setting} {value}

For example, to enable debug messages:

.. prompt:: bash $

   ceph tell mds.0 config set debug_ms 1

Display the status of all metadata servers:

.. prompt:: bash $

   ceph mds stat

Mark the active MDS as failed, triggering failover to a standby if present:

.. prompt:: bash $

   ceph mds fail 0

.. todo:: ``ceph mds`` subcommands missing docs: set, dump, getmap, stop, setmap


Mon Subsystem
=============

Show monitor stats:

.. prompt:: bash $

   ceph mon stat

::

   e2: 3 mons at {a=127.0.0.1:40000/0,b=127.0.0.1:40001/0,c=127.0.0.1:40002/0}, election epoch 6, quorum 0,1,2 a,b,c

The ``quorum`` list at the end lists the monitor nodes that are part of the current quorum.

This is also available more directly:

.. prompt:: bash $

   ceph quorum_status -f json-pretty

.. code-block:: javascript

   {
       "election_epoch": 6,
       "quorum": [
           0,
           1,
           2
       ],
       "quorum_names": [
           "a",
           "b",
           "c"
       ],
       "quorum_leader_name": "a",
       "monmap": {
           "epoch": 2,
           "fsid": "ba807e74-b64f-4b72-b43f-597dfe60ddbc",
           "modified": "2016-12-26 14:42:09.288066",
           "created": "2016-12-26 14:42:03.573585",
           "features": {
               "persistent": [
                   "kraken"
               ],
               "optional": []
           },
           "mons": [
               {
                   "rank": 0,
                   "name": "a",
                   "addr": "127.0.0.1:40000\/0",
                   "public_addr": "127.0.0.1:40000\/0"
               },
               {
                   "rank": 1,
                   "name": "b",
                   "addr": "127.0.0.1:40001\/0",
                   "public_addr": "127.0.0.1:40001\/0"
               },
               {
                   "rank": 2,
                   "name": "c",
                   "addr": "127.0.0.1:40002\/0",
                   "public_addr": "127.0.0.1:40002\/0"
               }
           ]
       }
   }

The above will block until a quorum is reached.
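
A monitoring script can pull the leader and quorum membership out of this JSON.
A minimal sketch using the fields shown above, applied to a trimmed-down
sample:

```python
import json

# Trimmed-down sample of the `ceph quorum_status` output shown above.
sample = '''
{
  "election_epoch": 6,
  "quorum_names": ["a", "b", "c"],
  "quorum_leader_name": "a"
}
'''

status = json.loads(sample)
print(status["quorum_leader_name"])  # a
print(len(status["quorum_names"]))   # 3
```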

For the status of just a single monitor:

.. prompt:: bash $

   ceph tell mon.[name] mon_status

where the value of ``[name]`` can be taken from ``ceph quorum_status``. Sample
output::

   {
       "name": "b",
       "rank": 1,
       "state": "peon",
       "election_epoch": 6,
       "quorum": [
           0,
           1,
           2
       ],
       "features": {
           "required_con": "9025616074522624",
           "required_mon": [
               "kraken"
           ],
           "quorum_con": "1152921504336314367",
           "quorum_mon": [
               "kraken"
           ]
       },
       "outside_quorum": [],
       "extra_probe_peers": [],
       "sync_provider": [],
       "monmap": {
           "epoch": 2,
           "fsid": "ba807e74-b64f-4b72-b43f-597dfe60ddbc",
           "modified": "2016-12-26 14:42:09.288066",
           "created": "2016-12-26 14:42:03.573585",
           "features": {
               "persistent": [
                   "kraken"
               ],
               "optional": []
           },
           "mons": [
               {
                   "rank": 0,
                   "name": "a",
                   "addr": "127.0.0.1:40000\/0",
                   "public_addr": "127.0.0.1:40000\/0"
               },
               {
                   "rank": 1,
                   "name": "b",
                   "addr": "127.0.0.1:40001\/0",
                   "public_addr": "127.0.0.1:40001\/0"
               },
               {
                   "rank": 2,
                   "name": "c",
                   "addr": "127.0.0.1:40002\/0",
                   "public_addr": "127.0.0.1:40002\/0"
               }
           ]
       }
   }

A dump of the monitor state:

.. prompt:: bash $

   ceph mon dump

::

   dumped monmap epoch 2
   epoch 2
   fsid ba807e74-b64f-4b72-b43f-597dfe60ddbc
   last_changed 2016-12-26 14:42:09.288066
   created 2016-12-26 14:42:03.573585
   0: 127.0.0.1:40000/0 mon.a
   1: 127.0.0.1:40001/0 mon.b
   2: 127.0.0.1:40002/0 mon.c
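
If you do need to scrape the plain-text output (as noted earlier, JSON is the
better choice for tooling), the per-monitor lines parse easily. A sketch using
the sample output above:

```python
import re

# The monitor lines from the `ceph mon dump` sample output above.
dump = """\
0: 127.0.0.1:40000/0 mon.a
1: 127.0.0.1:40001/0 mon.b
2: 127.0.0.1:40002/0 mon.c
"""

# Extract (rank, address, name) for each monitor.
mons = re.findall(r"^(\d+): (\S+) mon\.(\w+)$", dump, re.MULTILINE)
print(mons[0])  # ('0', '127.0.0.1:40000/0', 'a')
```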