:orphan:

==================================
 ceph -- ceph administration tool
==================================

.. program:: ceph

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *set* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **scrub**

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility for manual deployment and maintenance of a
Ceph cluster. It provides a diverse set of commands for deploying monitors,
OSDs, placement groups, and MDS daemons, and for overall maintenance and
administration of the cluster.

Commands
========

auth
----

Manage authentication keys. Used for adding, removing, exporting or updating
authentication keys for a particular entity such as a monitor or OSD. It uses
some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key if no input is given, together with any
caps specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

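For example, to register a hypothetical client (the entity name, pool name,
and caps below are illustrative) with read access to the monitors and
read/write access to a single pool::

    ceph auth add client.example mon 'allow r' osd 'allow rw pool=example-pool'
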
Subcommand ``caps`` updates the caps for **name** from the caps specified in
the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the
master keyring if none is given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes a keyring file with the requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, together
with any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}

Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from the
system/caps pairs specified in the command. If the key already exists, any
given caps must match the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads a keyring from the input file.

Usage::

    ceph auth import

Subcommand ``ls`` lists authentication state.

Usage::

    ceph auth ls

Subcommand ``print-key`` displays the requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key.

Usage::

    ceph auth print_key <entity>


compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact


config-key
----------

Manage configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes a configuration key.

Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the value of a configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``ls`` lists configuration keys.

Usage::

    ceph config-key ls

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``set`` sets a configuration key to a value.

Usage::

    ceph config-key set <key> {<val>}


daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help


daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]


df
--

Show the cluster's free space status.

Usage::

    ceph df {detail}

.. _ceph features:

features
--------

Show the releases and features of all daemons and clients connected to the
cluster, along with a count of each in buckets grouped by the corresponding
features/releases. Each release of Ceph supports a different set of features,
expressed by the features bitmask. New cluster features require that clients
support the feature, or else they are not allowed to connect to the cluster.
As new features or capabilities are enabled after an upgrade, older clients
are prevented from connecting.

Usage::

    ceph features

fs
--

Manage cephfs file systems. It uses some additional subcommands.

Subcommand ``ls`` lists file systems.

Usage::

    ceph fs ls

Subcommand ``new`` makes a new file system using the named pools <metadata>
and <data>.

Usage::

    ceph fs new <fs_name> <metadata> <data>

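For example, assuming a metadata pool and a data pool have already been
created (the file system and pool names below are illustrative)::

    ceph fs new cephfs cephfs_metadata cephfs_data
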
Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS
map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named file system.

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}


fsid
----

Show the cluster's FSID/UUID.

Usage::

    ceph fsid


health
------

Show the cluster's health.

Usage::

    ceph health {detail}


heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats


injectargs
----------

Inject configuration arguments into the monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]

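For example, to raise the monitor's debug level at runtime (the option and
level shown are illustrative)::

    ceph injectargs '--debug-mon 10'
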

log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]


mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``deactivate`` stops an mds.

Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces an mds to status fail.

Usage::

    ceph mds fail <who>

Subcommand ``rm`` removes an inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets the mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``tell`` sends a command to a particular mds.

Usage::

    ceph mds tell <who> <args> [<args>...]

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds a new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>

Subcommand ``dump`` dumps the formatted monmap (optionally from epoch).

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets the monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes the monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mon_status
----------

Reports the status of the monitors.

Usage::

    ceph mon_status

mgr
---

Ceph manager daemon configuration and management.

Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby manager daemons.

Usage::

    ceph mgr dump

Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon a standby
will take its place.

Usage::

    ceph mgr fail <name>

Subcommand ``module ls`` will list currently enabled manager modules (plugins).

Usage::

    ceph mgr module ls

Subcommand ``module enable`` will enable a manager module. Available modules
are included in MgrMap and visible via ``mgr dump``.

Usage::

    ceph mgr module enable <module>

Subcommand ``module disable`` will disable an active manager module.

Usage::

    ceph mgr module disable <module>

Subcommand ``metadata`` will report metadata about all manager daemons or, if
the name is specified, a single manager daemon.

Usage::

    ceph mgr metadata [name]

Subcommand ``versions`` will report a count of running daemon versions.

Usage::

    ceph mgr versions

Subcommand ``count-metadata`` will report a count of any daemon metadata field.

Usage::

    ceph mgr count-metadata <field>


osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire>
seconds from now).

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}

Subcommand ``ls`` shows blacklisted clients.

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from the blacklist.

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their
peers.

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates a new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand ``new`` should instead be used.

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
and the command expects a JSON file containing the base64 cephx key for auth
entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires
specifying the accompanying lockbox cephx key.

Usage::

    ceph osd new {<uuid>} {<id>} -i {<params.json>}

The parameters JSON file is optional but if provided, is expected to maintain
a form of the following format::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "crush_device_class": "myclass"
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>",
        "crush_device_class": "myclass"
    }

Or::

    {
        "crush_device_class": "myclass"
    }

The "crush_device_class" property is optional. If specified, it will set the
initial CRUSH device class for the new OSD.


Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for <name>
with <weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

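For example, to add an OSD with weight 1.0 under a given root and host (the
bucket names below are illustrative)::

    ceph osd crush add osd.0 1.0 root=default host=node1
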
Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name>
of type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates an entry or moves the existing entry for
<name> <weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps the crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable straw_calc_version.

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links the existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves the existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from the crush map (everywhere, or just
at <ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map.

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates a crush rule <name> for an erasure coded
pool created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates a crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set``, used alone, sets the crush map from the input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with an osdname/osd.id updates the crushmap position and
weight for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunables values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just
at <ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization.

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates a deep scrub on the specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints a summary of the OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add --force at the end to override an existing profile (IT IS RISKY).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

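For example, to define a hypothetical profile with 4 data chunks, 2 coding
chunks, and a per-host failure domain (the profile name is illustrative)::

    ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host
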
Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets the CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets the OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows the largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks an osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds the pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>

Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``ok-to-stop`` checks whether the list of OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced as some PGs may end up in a degraded (but active)
state. It will return a success code if it is okay to stop the
OSD(s), or an error code and informative message if it is not or if no
conclusion can be drawn at the current time.

Usage::

    ceph osd ok-to-stop <id> [<ids>...]

Subcommand ``pause`` pauses the osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints a dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets the pg_temp mapping pgid:[<id> [<id>...]]
(developers only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``force-create-pg`` forces creation of pg <pgid>.

Usage::

    ceph osd force-create-pg <pgid>


Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates a pool.

Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<rule>} {<int>}

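For example, to create a replicated pool with 64 placement groups (the pool
name and PG count below are illustrative)::

    ceph osd pool create example-pool 64 64 replicated
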
Subcommand ``delete`` deletes a pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_rule|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for a pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools.

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets an object or byte limit on a pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>

Subcommand ``stats`` obtains stats from all pools, or from the specified pool.

Usage::

    ceph osd pool stats {<name>}

1022 | Subcommand ``primary-affinity`` adjust osd primary-affinity from 0.0 <=<weight> | |
1023 | <= 1.0 | |
1024 | ||
1025 | Usage:: | |
1026 | ||
1027 | ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]> | |

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights an osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
        {--no-increasing}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-utilization {<int[100-]>}
        {--no-increasing}
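
A cautious invocation previews the change with the ``test-`` variant before
applying it (the overload threshold of 120 is the default and is shown only
for illustration):

```shell
# Dry run: report which OSDs would be reweighted, changing nothing
ceph osd test-reweight-by-utilization 120

# Apply, without ever raising any osd weight
ceph osd reweight-by-utilization 120 --no-increasing
```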

Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will be shown as *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}

Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}
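
The steps above can be sketched for a hypothetical failed osd.7 (the id is
illustrative, and the exact sequence may vary by release):

```shell
# Take the OSD out of service and mark it lost, then
# purge removes it from the OSD map, auth, and crush in one step
ceph osd out 7
ceph osd lost 7 --yes-i-really-mean-it
ceph osd purge 7 --yes-i-really-mean-it
```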

Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
destroy an OSD without reducing overall data redundancy or durability.
It will return a success code if it is definitely safe, or an error
code and informative message if it is not or if no conclusion can be
drawn at the current time.

Usage::

    ceph osd safe-to-destroy <id> [<ids>...]
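
Because the answer is encoded in the exit status, the command composes
naturally with shell conditionals (the osd id is illustrative):

```shell
# Only destroy osd.3 if the cluster reports it is safe to do so
if ceph osd safe-to-destroy 3; then
    ceph osd destroy 3 --yes-i-really-mean-it
fi
```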

Subcommand ``scrub`` initiates scrub on specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets <key>.

Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
        norebalance|norecover|noscrub|nodeep-scrub|notieragent
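
During planned maintenance it is common to pair ``set`` with the matching
``unset`` (the choice of flag here is illustrative):

```shell
# Prevent data rebalancing while a host is rebooted
ceph osd set noout
# ... perform maintenance, then restore normal behavior ...
ceph osd unset noout
```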

Subcommand ``setcrushmap`` sets crush map from input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``set-require-min-compat-client`` enforces the cluster to be backward
compatible with the specified client version. This subcommand prevents you from
making any changes (e.g., crush tunables, or using new features) that
would violate the current setting. Note that this subcommand will fail if
any connected daemon or client is not compatible with the features offered by
the given <version>. To see the features and releases of all clients connected
to the cluster, see `ceph features`_.

Usage::

    ceph osd set-require-min-compat-client <version>

Subcommand ``stat`` prints summary of OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
        readforward|readproxy

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>
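
Taken together, the tier subcommands above are typically combined to set up a
writeback cache tier; a sketch using hypothetical pools ``cold-storage`` and
``hot-cache``:

```shell
# Attach hot-cache as a tier of cold-storage, make it a writeback
# cache, and direct client traffic to it via the overlay
ceph osd tier add cold-storage hot-cache
ceph osd tier cache-mode hot-cache writeback
ceph osd tier set-overlay cold-storage hot-cache
```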

Subcommand ``tree`` prints OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets <key>.

Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
        norebalance|norecover|noscrub|nodeep-scrub|notieragent


pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid
with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_json`` shows human-readable version of pg map in json only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
        {<int>}

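
For example, to list pgs that have been stuck inactive for longer than a
chosen threshold (the 300-second value is illustrative):

```shell
# Show pgs stuck in the inactive state for more than 300 seconds
ceph pg dump_stuck inactive 300
```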

Subcommand ``getmap`` gets binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pg with specific pool, osd, state

Usage::

    ceph pg ls {<int>} {active|clean|down|replay|splitting|
        scrubbing|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``ls-by-osd`` lists pg on osd [osd]

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
        {active|clean|down|replay|splitting|
        scrubbing|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``ls-by-pool`` lists pg with pool = [poolname]

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {active|
        clean|down|replay|splitting|
        scrubbing|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``ls-by-primary`` lists pg with primary = [osd]

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
        {active|clean|down|replay|splitting|
        scrubbing|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``map`` shows mapping of pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets ratio at which pgs are considered full.

Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets ratio at which pgs are considered too full to backfill.

Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets ratio at which pgs are considered nearly
full.

Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat


quorum
------

Cause MON to enter or exit quorum.

Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports status of monitor quorum.

Usage::

    ceph quorum_status


report
------

Reports the full status of the cluster; optional title tag strings may be
appended.

Usage::

    ceph report {<tags> [<tags>...]}


scrub
-----

Scrubs the monitor stores.

Usage::

    ceph scrub


status
------

Shows cluster status.

Usage::

    ceph status


sync force
----------

Forces a sync of, and then clears, the monitor store.

Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}


tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <args> [<args>...]
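
For instance, to adjust a config option on a running OSD at runtime (the
daemon, option, and value are illustrative):

```shell
# Lower the backfill concurrency on osd.0 without restarting it
ceph tell osd.0 injectargs '--osd-max-backfills 2'
```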

List all available commands.

Usage::

    ceph tell <name (type.id)> help

version
-------

Shows the mon daemon version.

Usage::

    ceph version

Options
=======

.. option:: -i infile

    will specify an input file to be passed along as a payload with the
    command to the monitor cluster. This is only used for specific
    monitor commands.

.. option:: -o outfile

    will write any payload returned by the monitor cluster with its
    reply to outfile. Only specific monitor commands (e.g. osd getmap)
    return a payload.

.. option:: -c ceph.conf, --conf=ceph.conf

    Use ceph.conf configuration file instead of the default
    ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

    Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

    Client name for authentication.

.. option:: --cluster CLUSTER

    Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

    Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

    You probably mean --admin-daemon

.. option:: -s, --status

    Show cluster status.

.. option:: -w, --watch

    Watch live cluster changes.

.. option:: --watch-debug

    Watch debug events.

.. option:: --watch-info

    Watch info events.

.. option:: --watch-sec

    Watch security events.

.. option:: --watch-warn

    Watch warning events.

.. option:: --watch-error

    Watch error events.

.. option:: --version, -v

    Display version.

.. option:: --verbose

    Make verbose.

.. option:: --concise

    Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

    Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

    Set a timeout for connecting to the cluster.

.. option:: --no-increasing

    ``--no-increasing`` is off by default, so the ``reweight-by-utilization``
    and ``test-reweight-by-utilization`` commands are allowed to increase osd
    weights. When this option is passed to those commands, they will never
    increase an osd's weight, even if the osd is underutilized.


Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)