:orphan:

==================================
 ceph -- ceph administration tool
==================================

.. program:: ceph

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *put* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **scrub**

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility which is used for manual deployment and
maintenance of a Ceph cluster. It provides a diverse set of commands for
deploying monitors, OSDs, placement groups, and MDSs, and for overall
maintenance and administration of the cluster.

Commands
========

auth
----

Manage authentication keys. It is used for adding, removing, exporting
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key if no input is given, along with any
caps specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the
master keyring if none is given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes a keyring file with the requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, along with
any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}

Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from the
system/caps pairs specified in the command. If the key already exists, any
given caps must match the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads a keyring from the input file.

Usage::

    ceph auth import

Subcommand ``ls`` lists authentication state.

Usage::

    ceph auth ls

Subcommand ``print-key`` displays the requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key.

Usage::

    ceph auth print_key <entity>


compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact


config-key
----------

Manage configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes a configuration key.

Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``ls`` lists configuration keys.

Usage::

    ceph config-key ls

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``set`` sets a configuration key to the given value.

Usage::

    ceph config-key set <key> {<val>}


daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help


daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]


df
--

Show the cluster's free space status.

Usage::

    ceph df {detail}

.. _ceph features:

features
--------

Show the releases and features of all daemons and clients connected to the
cluster, along with counts of each grouped by the corresponding
features/releases. Each release of Ceph supports a different set of features,
expressed by the features bitmask. Clients that do not support a required
cluster feature are not allowed to connect, so as new features or capabilities
are enabled after an upgrade, older clients are prevented from connecting.

Usage::

    ceph features
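
Conceptually, a feature is a bit in the features bitmask, and a client
qualifies only if every required bit is set in its own mask. A minimal sketch
of that check (the bit values here are invented for illustration, not Ceph's
actual feature-bit assignments):

```python
# Illustration of a features-bitmask compatibility check. The bit values
# below are invented for the example; they are not Ceph's actual
# feature-bit assignments.
REQUIRED = 0b1011  # bits the cluster requires of connecting clients

def client_compatible(client_features: int) -> bool:
    """A client is allowed to connect only if it has every required bit."""
    return client_features & REQUIRED == REQUIRED

print(client_compatible(0b1111))  # has all required bits -> True
print(client_compatible(0b0011))  # missing bit 3 -> False
```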

fs
--

Manage cephfs filesystems. It uses some additional subcommands.

Subcommand ``ls`` lists filesystems.

Usage::

    ceph fs ls

Subcommand ``new`` makes a new filesystem using named pools <metadata> and <data>.

Usage::

    ceph fs new <fs_name> <metadata> <data>

Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named filesystem.

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}


fsid
----

Show the cluster's FSID/UUID.

Usage::

    ceph fsid


health
------

Show the cluster's health.

Usage::

    ceph health {detail}


heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats


injectargs
----------

Inject configuration arguments into the monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]


log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]


mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``deactivate`` stops an mds.

Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces an mds to status fail.

Usage::

    ceph mds fail <who>

Subcommand ``rm`` removes an inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets the mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``tell`` sends a command to a particular mds.

Usage::

    ceph mds tell <who> <args> [<args>...]

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds a new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>

Subcommand ``dump`` dumps the formatted monmap (optionally from epoch).

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets the monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes the monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mon_status
----------

Reports the status of the monitors.

Usage::

    ceph mon_status

mgr
---

Ceph manager daemon configuration and management.

Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby manager daemons.

Usage::

    ceph mgr dump

Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon a standby
will take its place.

Usage::

    ceph mgr fail <name>

Subcommand ``module ls`` will list currently enabled manager modules (plugins).

Usage::

    ceph mgr module ls

Subcommand ``module enable`` will enable a manager module. Available modules
are included in MgrMap and visible via ``mgr dump``.

Usage::

    ceph mgr module enable <module>

Subcommand ``module disable`` will disable an active manager module.

Usage::

    ceph mgr module disable <module>

Subcommand ``metadata`` will report metadata about all manager daemons or, if
the name is specified, a single manager daemon.

Usage::

    ceph mgr metadata [name]

Subcommand ``versions`` will report a count of running daemon versions.

Usage::

    ceph mgr versions

Subcommand ``count-metadata`` will report a count of any daemon metadata field.

Usage::

    ceph mgr count-metadata <field>


osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire>
seconds from now).

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}

Subcommand ``ls`` shows blacklisted clients.

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from the blacklist.

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their
peers.

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates a new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand ``new`` should instead be used.

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` reuses a previously destroyed OSD *id*. The new OSD will
have the specified *uuid*, and the command expects a JSON file containing
the base64 cephx key for auth entity *client.osd.<id>*, as well as an optional
base64 cephx key for dm-crypt lockbox access and a dm-crypt key. Specifying
a dm-crypt key requires specifying the accompanying lockbox cephx key.

Usage::

    ceph osd new {<id>} {<uuid>} -i {<secrets.json>}

The secrets JSON file is expected to follow this format::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg=="
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>"
    }
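
As a sketch, such a file can be assembled programmatically. The key material
below is random placeholder data of plausible shape; in practice cephx keys
come from ``ceph-authtool`` or the monitors, so treat this only as an
illustration of the file layout:

```python
# Sketch: write a secrets.json for `ceph osd new`. The keys generated here
# are random placeholders; real cephx keys are normally produced by
# ceph-authtool or the monitors.
import base64
import json
import os

def write_secrets(path: str, with_dmcrypt: bool = False) -> dict:
    rand_b64 = lambda n: base64.b64encode(os.urandom(n)).decode()
    secrets = {"cephx_secret": rand_b64(28)}
    if with_dmcrypt:
        # a dm-crypt key must be accompanied by the lockbox cephx key
        secrets["cephx_lockbox_secret"] = rand_b64(28)
        secrets["dmcrypt_key"] = rand_b64(32)
    with open(path, "w") as f:
        json.dump(secrets, f, indent=4)
    return secrets
```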

Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for <name>
with <weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name>
of type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates an entry or moves an existing entry for
<name> <weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps the crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable straw_calc_version.

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links an existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves an existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map.

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for an erasure coded
pool created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set``, used alone, sets the crush map from the input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with osdname/osd.id updates the crushmap position and weight
for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunable values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization.

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates a deep scrub on the specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints a summary of the OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add ``--force`` at the end to override an existing profile (THIS IS
RISKY).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
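
The ``k`` and ``m`` key=value pairs of a profile determine the space overhead.
A small back-of-envelope helper, assuming the usual layout of k data chunks
plus m coding chunks:

```python
# Space overhead of an erasure-code profile: with k data chunks and m coding
# chunks, each byte of user data consumes (k + m) / k bytes of raw storage.
def ec_overhead(k: int, m: int) -> float:
    return (k + m) / k

print(ec_overhead(2, 1))  # k=2,m=1 -> 1.5x raw usage (vs 3.0x for 3-way replication)
```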

Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets the CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets the OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows the largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks an osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds the pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>
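
The object-to-PG step that ``osd map`` reports can be pictured as hashing the
object name modulo the pool's pg_num. The sketch below is a simplification for
intuition only: real Ceph uses the rjenkins hash and a stable mod, and CRUSH
then maps the PG to OSDs.

```python
# Simplified picture of the object -> PG mapping behind `ceph osd map`.
# Real Ceph uses the rjenkins hash and a "stable mod"; crc32 stands in here.
import zlib

def object_to_pg(pool_id: int, object_name: str, pg_num: int) -> str:
    h = zlib.crc32(object_name.encode())
    return f"{pool_id}.{h % pg_num:x}"  # pgid rendered as <pool>.<hex>

print(object_to_pg(1, "rbd_header.abc123", 64))
```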

Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``pause`` pauses osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints a dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``force-create-pg`` forces creation of pg <pgid>.

Usage::

    ceph osd force-create-pg <pgid>

Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates a pool.

Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<ruleset>} {<int>}
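
The first integer argument is pg_num. A commonly cited rule of thumb, sketched
below (a guideline, not an official formula; tune for your cluster), targets
roughly 100 PGs per OSD divided by the replica count, rounded up to a power of
two:

```python
# Rule-of-thumb pg_num suggestion for `ceph osd pool create`: aim for about
# pgs_per_osd PGs per OSD, divided by the pool's replica count, rounded up
# to the next power of two. Illustrative guideline only.
def suggest_pg_num(num_osds: int, pool_size: int = 3, pgs_per_osd: int = 100) -> int:
    target = num_osds * pgs_per_osd / pool_size
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num

print(suggest_pg_num(10))  # 10 OSDs, 3x replication -> 512
```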

Subcommand ``delete`` deletes a pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for a pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools.

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets object or byte limit on a pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>

Subcommand ``stats`` obtains stats from all pools, or from the specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``primary-affinity`` adjusts osd primary-affinity from 0.0 <=
<weight> <= 1.0.

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights an osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
    {--no-increasing}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-utilization {<int[100-]>}
    {--no-increasing}
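
The intent of ``reweight-by-utilization`` can be sketched as: OSDs whose
utilization exceeds the cluster average by more than the overload percentage
get their reweight value scaled down proportionally. This is an illustration
of the idea, not Ceph's exact algorithm:

```python
# Sketch of the reweight-by-utilization idea: scale down the reweight of any
# OSD whose utilization exceeds avg * overload / 100. Illustrative only; not
# Ceph's exact algorithm.
def reweight_by_utilization(util, weights, overload=120):
    avg = sum(util.values()) / len(util)
    threshold = avg * overload / 100.0
    new_weights = dict(weights)
    for osd, u in util.items():
        if u > threshold:
            # pull the overloaded OSD's weight down toward the average
            new_weights[osd] = round(weights[osd] * avg / u, 4)
    return new_weights

print(reweight_by_utilization({0: 0.9, 1: 0.5, 2: 0.4}, {0: 1.0, 1: 1.0, 2: 1.0}))
```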
Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will show marked as *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}


Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand ``scrub`` initiates scrub on specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets <key>.

Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent

Subcommand ``setcrushmap`` sets crush map from input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``set-require-min-compat-client`` enforces the cluster to be backward
compatible with the specified client version. This subcommand prevents you from
making any changes (e.g., crush tunables, or using new features) that
would violate the current setting. Please note: this subcommand will fail if
any connected daemon or client is not compatible with the features offered by
the given <version>. To see the features and releases of all clients connected
to the cluster, please see `ceph features`_.

Usage::

    ceph osd set-require-min-compat-client <version>

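For example, to refuse connections from clients older than the Jewel
release (the release name here is illustrative; pick the oldest release
your clients actually run)::

    ceph osd set-require-min-compat-client jewel
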
Subcommand ``stat`` prints summary of OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
    readforward|readproxy

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

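The tier subcommands above are typically combined to place a writeback
cache in front of a base pool. A hedged sketch, with the pool names
``cold-storage`` and ``hot-cache`` as illustrative assumptions::

    ceph osd tier add cold-storage hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-storage hot-cache

After the overlay is set, client I/O aimed at ``cold-storage`` is routed
through the ``hot-cache`` tier first.
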
Subcommand ``tree`` prints OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets <key>.

Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent


pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid
with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_json`` shows human-readable version of pg map in json only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
    {<int>}

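For instance, an illustrative query for pgs that have been stuck in the
``inactive`` state past a 300-second threshold (interpreting the trailing
``{<int>}`` argument as a threshold in seconds, which is an assumption)::

    ceph pg dump_stuck inactive 300
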
Subcommand ``getmap`` gets binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pg with specific pool, osd, state

Usage::

    ceph pg ls {<int>} {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-osd`` lists pg on osd [osd]

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-pool`` lists pg with pool = [poolname]

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {active|
    clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-primary`` lists pg with primary = [osd]

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``map`` shows mapping of pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets ratio at which pgs are considered full.

Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets ratio at which pgs are considered too full to backfill.

Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets ratio at which pgs are considered nearly
full.

Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat


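The three ratios are ordered: the nearfull ratio should stay below the
backfillfull ratio, which should stay below the full ratio. An
illustrative sketch using values matching Ceph's customary defaults
(0.85 / 0.90 / 0.95; verify against your release before changing them)::

    ceph pg set_nearfull_ratio 0.85
    ceph pg set_backfillfull_ratio 0.90
    ceph pg set_full_ratio 0.95
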
quorum
------

Cause MON to enter or exit quorum.

Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports status of monitor quorum.

Usage::

    ceph quorum_status


report
------

Reports full status of cluster, optional title tag strings.

Usage::

    ceph report {<tags> [<tags>...]}


scrub
-----

Scrubs the monitor stores.

Usage::

    ceph scrub


status
------

Shows cluster status.

Usage::

    ceph status


sync force
----------

Forces a sync and clears the monitor store.

Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}


tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <args> [<args>...]


List all available commands.

Usage::

    ceph tell <name (type.id)> help

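A common use of ``tell`` is to change a daemon's debug settings at
runtime; the daemon id and debug levels below are illustrative::

    ceph tell osd.0 injectargs '--debug-osd 20 --debug-ms 1'
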
version
-------

Show mon daemon version.

Usage::

    ceph version

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes.

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so increasing the osd weight is
   allowed with the ``reweight-by-utilization`` or
   ``test-reweight-by-utilization`` commands. When this option is used with
   these commands, it prevents the weight of an osd from being increased
   even if the osd is underutilized.


Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)