:orphan:

==================================
ceph -- ceph administration tool
==================================

.. program:: ceph

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *set* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **features**

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mgr** [ *count-metadata* \| *dump* \| *fail* \| *metadata* \| *module* \| *versions* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **scrub**

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility which is used for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands that allow deployment of
monitors, OSDs, and placement groups, as well as MDS control and overall maintenance
and administration of the cluster.

Commands
========

auth
----

Manage authentication keys. It is used for adding, removing, exporting
or updating authentication keys for a particular entity, such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an input
file, or generates a random key if no input is given, along with any caps specified
in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the master
keyring if none is given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes a keyring file with the requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, along with
any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}
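
For example, a key for a hypothetical client ``client.foo``, with read access to
the monitors and read/write access to a pool named ``bar``, could be created like
so (the entity and pool names are illustrative)::

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=bar'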

Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from system/caps
pairs specified in the command. If the key already exists, any given caps must match
the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads keyring from input file.

Usage::

    ceph auth import

Subcommand ``ls`` lists authentication state.

Usage::

    ceph auth ls

Subcommand ``print-key`` displays the requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key.

Usage::

    ceph auth print_key <entity>


compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact


config-key
----------

Manage configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes a configuration key.

Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``ls`` lists configuration keys.

Usage::

    ceph config-key ls

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``set`` sets a configuration key to a value.

Usage::

    ceph config-key set <key> {<val>}


daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help


daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]


df
--

Show the cluster's free space status.

Usage::

    ceph df {detail}

.. _ceph features:

features
--------

Show the releases and features of all daemons and clients connected to the
cluster, along with the number of entities in each bucket, grouped by the
corresponding features/releases. Each release of Ceph supports a different set
of features, expressed by the features bitmask. New cluster features require
that clients support the feature, or else they are not allowed to connect once
those features are enabled. As new features or capabilities are enabled after an
upgrade, older clients are prevented from connecting.

Usage::

    ceph features

fs
--

Manage CephFS file systems. It uses some additional subcommands.

Subcommand ``ls`` lists file systems.

Usage::

    ceph fs ls

Subcommand ``new`` makes a new file system using the named pools <metadata> and <data>.

Usage::

    ceph fs new <fs_name> <metadata> <data>
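
A file system needs existing metadata and data pools. A minimal sequence,
assuming the pool and file system names below are illustrative, might be::

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 64
    ceph fs new cephfs cephfs_metadata cephfs_data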

Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named file system.

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}


fsid
----

Show the cluster's FSID/UUID.

Usage::

    ceph fsid


health
------

Show the cluster's health.

Usage::

    ceph health {detail}


heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats


injectargs
----------

Inject configuration arguments into the monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]


log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]


mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``deactivate`` stops an mds.

Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces an mds to status fail.

Usage::

    ceph mds fail <who>

Subcommand ``rm`` removes an inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets the mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``tell`` sends a command to a particular mds.

Usage::

    ceph mds tell <who> <args> [<args>...]

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds a new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>

Subcommand ``dump`` dumps the formatted monmap (optionally from epoch).

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets the monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes the monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mon_status
----------

Reports the status of the monitors.

Usage::

    ceph mon_status

mgr
---

Ceph manager daemon configuration and management.

Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby manager daemons.

Usage::

    ceph mgr dump

Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon, a standby
will take its place.

Usage::

    ceph mgr fail <name>

Subcommand ``module ls`` will list currently enabled manager modules (plugins).

Usage::

    ceph mgr module ls

Subcommand ``module enable`` will enable a manager module. Available modules are
included in the MgrMap and visible via ``mgr dump``.

Usage::

    ceph mgr module enable <module>

Subcommand ``module disable`` will disable an active manager module.

Usage::

    ceph mgr module disable <module>

Subcommand ``metadata`` will report metadata about all manager daemons or, if the
name is specified, a single manager daemon.

Usage::

    ceph mgr metadata [name]

Subcommand ``versions`` will report a count of running daemon versions.

Usage::

    ceph mgr versions

Subcommand ``count-metadata`` will report a count of any daemon metadata field.

Usage::

    ceph mgr count-metadata <field>


osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire> seconds
from now).

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}

Subcommand ``ls`` shows blacklisted clients.

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from the blacklist.

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers.

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates a new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand ``new`` should instead be used.

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
and the command expects a JSON file containing the base64 cephx key for auth
entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires specifying
the accompanying lockbox cephx key.

Usage::

    ceph osd new {<id>} {<uuid>} -i {<secrets.json>}

The secrets JSON file is optional but, if provided, is expected to take one of
the following forms::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg=="
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>"
    }


Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for <name>,
with weight <weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name> of
type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>
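
For example, a new rack bucket could be added under the default root and a host
moved into it like so (the bucket and host names are illustrative)::

    ceph osd crush add-bucket rack1 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move host1 rack=rack1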

Subcommand ``create-or-move`` creates an entry or moves an existing entry for
<name> <weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps the crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable straw_calc_version.

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links the existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves the existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map.

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for an erasure coded pool
created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep is best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set``, used alone, sets the crush map from an input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with an osdname/osd.id updates the crushmap position and
weight for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunable values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization.

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates deep scrub on the specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints a summary of the OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add ``--force`` at the end to override an existing profile (this is risky).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
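
For example, a profile for a 4+2 erasure coded pool could be created and then
used when creating a pool (the profile and pool names are illustrative)::

    ceph osd erasure-code-profile set myprofile k=4 m=2
    ceph osd pool create ecpool 64 64 erasure myprofile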

Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets the CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets the OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows the largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks an osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds the pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>

Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``ok-to-stop`` checks whether the listed OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced as some PGs may end up in a degraded (but active)
state. It will return a success code if it is okay to stop the
OSD(s), or an error code and informative message if it is not or if no
conclusion can be drawn at the current time.

Usage::

    ceph osd ok-to-stop <id> [<ids>...]

Subcommand ``pause`` pauses osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints a dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets the pg_temp mapping pgid:[<id> [<id>...]] (developers
only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``force-create-pg`` forces creation of pg <pgid>.

Usage::

    ceph osd force-create-pg <pgid>


Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates a pool.

Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<ruleset>} {<int>}
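
For example, a replicated pool with 128 placement groups might be created like
so (the pool name is illustrative)::

    ceph osd pool create mypool 128 128 replicated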

Subcommand ``delete`` deletes a pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for a pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools.

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets an object or byte limit on a pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
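
For example, a hypothetical pool named ``mypool`` could be limited to 10000
objects like so; setting a quota's value to 0 removes it::

    ceph osd pool set-quota mypool max_objects 10000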
1004 | ||
1005 | Subcommand ``stats`` obtain stats from all pools, or from specified pool. | |
1006 | ||
1007 | Usage:: | |
1008 | ||
1009 | ceph osd pool stats {<name>} | |
1010 | ||
1011 | Subcommand ``primary-affinity`` adjust osd primary-affinity from 0.0 <=<weight> | |
1012 | <= 1.0 | |
1013 | ||
1014 | Usage:: | |
1015 | ||
1016 | ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]> | |
1017 | ||
1018 | Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers | |
1019 | only). | |
1020 | ||
1021 | Usage:: | |
1022 | ||
1023 | ceph osd primary-temp <pgid> <id> | |
1024 | ||
1025 | Subcommand ``repair`` initiates repair on a specified osd. | |
1026 | ||
1027 | Usage:: | |
1028 | ||
1029 | ceph osd repair <who> | |
1030 | ||
1031 | Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0. | |
1032 | ||
1033 | Usage:: | |
1034 | ||
1035 | osd reweight <int[0-]> <float[0.0-1.0]> | |
1036 | ||
1037 | Subcommand ``reweight-by-pg`` reweight OSDs by PG distribution | |
1038 | [overload-percentage-for-consideration, default 120]. | |
1039 | ||
1040 | Usage:: | |
1041 | ||
1042 | ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]} | |
1043 | {--no-increasing} | |
1044 | ||
1045 | Subcommand ``reweight-by-utilization`` reweight OSDs by utilization | |
1046 | [overload-percentage-for-consideration, default 120]. | |
1047 | ||
1048 | Usage:: | |
1049 | ||
1050 | ceph osd reweight-by-utilization {<int[100-]>} | |
1051 | {--no-increasing} | |
1052 | ||
1053 | Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map. | |
1054 | ||
1055 | ||
1056 | Usage:: | |
1057 | ||
1058 | ceph osd rm <ids> [<ids>...] | |
1059 | ||
Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will be shown as marked *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}

Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
destroy an OSD without reducing overall data redundancy or durability.
It will return a success code if it is definitely safe, or an error
code and informative message if it is not or if no conclusion can be
drawn at the current time.

Usage::

    ceph osd safe-to-destroy <id> [<ids>...]

Subcommand ``scrub`` initiates scrub on the specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets cluster-wide flag <key>.

Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
        norebalance|norecover|noscrub|nodeep-scrub|notieragent

Subcommand ``setcrushmap`` sets crush map from input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``set-require-min-compat-client`` requires that the cluster be
backward compatible with the specified client version. This subcommand
prevents you from making any changes (e.g., crush tunables, or using new
features) that would violate the current setting. Note that this subcommand
will fail if any connected daemon or client is not compatible with the
features offered by the given <version>. To see the features and releases of
all clients connected to the cluster, please see `ceph features`_.

Usage::

    ceph osd set-require-min-compat-client <version>

Subcommand ``stat`` prints a summary of the OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
        readforward|readproxy

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

Subcommand ``tree`` prints the OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets cluster-wide flag <key>.

Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
        norebalance|norecover|noscrub|nodeep-scrub|notieragent


pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of the pg map (only 'all'
valid with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief}
        [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]

Subcommand ``dump_json`` shows human-readable version of the pg map in json
only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief}
        [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded
        [inactive|unclean|stale|undersized|degraded...]}
        {<int>}

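The trailing integer is a threshold in seconds: a PG is only reported as stuck if it has sat in one of the requested states at least that long. A simplified model of the filter (the default of 300 seconds is an assumption here, as is the ``stuck_pgs`` helper itself):

```python
import time

def stuck_pgs(pg_states, states, threshold=300, now=None):
    """Simplified model of the dump_stuck filter: pg_states maps a pgid
    to (state, since_timestamp); a PG is reported when it has been in
    one of the requested stuck states for longer than `threshold`
    seconds. The default of 300 is an assumption, not a verified value."""
    now = time.time() if now is None else now
    return [pgid for pgid, (state, since) in pg_states.items()
            if state in states and now - since > threshold]
```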
Subcommand ``getmap`` gets binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pgs with the given pool, osd, or state.

Usage::

    ceph pg ls {<int>} {active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``ls-by-osd`` lists pgs on osd [osd].

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
        {active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``ls-by-pool`` lists pgs with pool = [poolname].

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {active|
        clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``ls-by-primary`` lists pgs with primary = [osd].

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
        {active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``map`` shows mapping of pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets the ratio at which pgs are considered full.

Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets the ratio at which pgs are
considered too full to backfill.

Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets the ratio at which pgs are considered
nearly full.

Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>

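The three ratios are only meaningful when ordered nearfull <= backfillfull <= full (e.g. 0.85 / 0.90 / 0.95). A small sanity-check one might run before applying new values (an illustrative helper, not part of ceph; the monitors perform their own validation):

```python
def validate_full_ratios(nearfull, backfillfull, full):
    """Check that each ratio lies in (0.0, 1.0] and that the ordering
    nearfull <= backfillfull <= full holds. Illustrative helper only."""
    for name, ratio in (("nearfull", nearfull),
                        ("backfillfull", backfillfull),
                        ("full", full)):
        if not 0.0 < ratio <= 1.0:
            raise ValueError(f"{name} ratio {ratio} outside (0.0, 1.0]")
    if not nearfull <= backfillfull <= full:
        raise ValueError("expected nearfull <= backfillfull <= full")
    return True
```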
Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat


quorum
------

Causes a MON to enter or exit quorum.

Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports the status of the monitor quorum.

Usage::

    ceph quorum_status


report
------

Reports the full status of the cluster; optional title tag strings may be
provided.

Usage::

    ceph report {<tags> [<tags>...]}


scrub
-----

Scrubs the monitor stores.

Usage::

    ceph scrub


status
------

Shows cluster status.

Usage::

    ceph status


sync force
----------

Forces a sync and clears the monitor store.

Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}


tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <args> [<args>...]


List all available commands.

Usage::

    ceph tell <name (type.id)> help

version
-------

Shows the mon daemon version.

Usage::

    ceph version

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon.

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes.

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so increasing an OSD's weight is
   allowed by the ``reweight-by-utilization`` and
   ``test-reweight-by-utilization`` commands. When this option is used with
   these commands, it prevents OSD weights from being increased even if the
   OSD is underutilized.


Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed
storage system. Please refer to the Ceph documentation at http://ceph.com/docs
for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)