:orphan:

==================================
 ceph -- ceph administration tool
==================================

.. program:: ceph

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *put* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **scrub**

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands for deploying
monitors, OSDs, placement groups, and MDS daemons, and for overall
maintenance and administration of the cluster.

Commands
========

auth
----

Manage authentication keys. Used for adding, removing, exporting
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key (applying any caps specified in the
command) if no input is given.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

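Note that ``caps`` replaces the entity's existing caps rather than appending
to them. For example, to grant a client read-only monitor access and
read-write access to a single pool (the entity and pool names here are
illustrative)::

    ceph auth caps client.foo mon 'allow r' osd 'allow rw pool=bar'
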
Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the
master keyring if none is given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes a keyring file with the requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key (applying any caps specified in
the command) if no input is given.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}

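For example, the following creates (or fetches, if it already exists) a
client key with caps and writes it to a keyring file; the entity, caps and
path are illustrative::

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=bar' -o /etc/ceph/ceph.client.foo.keyring
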
Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from the
system/caps pairs specified in the command. If the key already exists, any
given caps must match the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads keyring from input file.

Usage::

    ceph auth import

Subcommand ``list`` lists authentication state.

Usage::

    ceph auth list

Subcommand ``print-key`` displays the requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key.

Usage::

    ceph auth print_key <entity>


compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact


config-key
----------

Manage configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes a configuration key.

Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``list`` lists configuration keys.

Usage::

    ceph config-key list

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``put`` puts configuration key and value.

Usage::

    ceph config-key put <key> {<val>}

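For example, storing and then retrieving a small piece of cluster-wide state
(the key name and value are illustrative)::

    ceph config-key put mykey myvalue
    ceph config-key get mykey
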

daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help


daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]

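For example, to sample osd.0's counters every two seconds, ten times (the
daemon name is illustrative)::

    ceph daemonperf osd.0 2 10
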

df
--

Show cluster's free space status.

Usage::

    ceph df {detail}


fs
--

Manage CephFS filesystems. It uses some additional subcommands.

Subcommand ``ls`` lists filesystems.

Usage::

    ceph fs ls

Subcommand ``new`` makes a new filesystem using the named pools <metadata> and <data>.

Usage::

    ceph fs new <fs_name> <metadata> <data>

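For example, creating a filesystem from two freshly created pools (the pool
names, filesystem name and PG counts are illustrative)::

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 64
    ceph fs new cephfs cephfs_metadata cephfs_data
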
Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named filesystem.

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}


fsid
----

Show cluster's FSID/UUID.

Usage::

    ceph fsid


health
------

Show cluster's health.

Usage::

    ceph health {detail}


heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats


injectargs
----------

Inject configuration arguments into the monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]

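For example, raising a monitor debug level at runtime (the option and value
are illustrative; any configuration option can be injected this way)::

    ceph injectargs '--debug_mon 10'
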

log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]


mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``deactivate`` stops an mds.

Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces an mds to status fail.

Usage::

    ceph mds fail <who>

Subcommand ``rm`` removes an inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``tell`` sends command to particular mds.

Usage::

    ceph mds tell <who> <args> [<args>...]

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>

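For example (the monitor name and address are illustrative)::

    ceph mon add c 10.0.0.3:6789
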
Subcommand ``dump`` dumps formatted monmap (optionally from epoch).

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mon_status
----------

Reports status of monitors.

Usage::

    ceph mon_status

osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire>
seconds from now).

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}

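For example, blacklisting a client address for ten minutes (the address and
nonce are illustrative)::

    ceph osd blacklist add 192.168.0.10:0/123456 600
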
Subcommand ``ls`` shows blacklisted clients.

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from the blacklist.

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers.

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates a new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand ``new`` should instead be used.

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` reuses a previously destroyed OSD *id*. The new OSD will
have the specified *uuid*, and the command expects a JSON file containing
the base64 cephx key for auth entity *client.osd.<id>*, as well as an optional
base64 cephx key for dm-crypt lockbox access and a dm-crypt key. Specifying
a dm-crypt key requires specifying the accompanying lockbox cephx key.

Usage::

    ceph osd new {<id>} {<uuid>} -i {<secrets.json>}

The secrets JSON file is expected to take the following form::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg=="
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>"
    }


Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for <name>
with <weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

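For example, placing osd.5 under a host bucket with a weight of 1.0 (the
bucket names are illustrative)::

    ceph osd crush add osd.5 1.0 host=node2 root=default
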
Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name>
of type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates an entry or moves the existing entry for
<name> <weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
        [<args>...]

Subcommand ``dump`` dumps crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable straw_calc_version.

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in crush map.

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for erasure coded pool
created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``list`` lists crush rules.

Usage::

    ceph osd crush rule list

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set``, used alone, sets the crush map from the input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with an osdname/osd.id updates the crushmap position and
weight for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunables values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization.

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates deep scrub on specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints summary of OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing the erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add a --force at the end to override an existing profile (THIS IS RISKY).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds the pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>

Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``pause`` pauses osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates pool.

Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
        {<erasure_code_profile>} {<ruleset>} {<int>}

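For example, creating a replicated pool with 128 placement groups (the pool
name and PG count are illustrative)::

    ceph osd pool create mypool 128 128 replicated
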
Subcommand ``delete`` deletes pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
        pgp_num|crush_ruleset|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
        target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
        cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
        min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools.

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
        pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|
        hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
        target_max_bytes|target_max_objects|cache_target_dirty_ratio|
        cache_target_dirty_high_ratio|
        cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
        min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
        hit_set_search_last_n
        <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets object or byte limit on pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>

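For example, capping a pool at 100 GiB (the pool name is illustrative; the
value is in bytes, 100 * 2^30 = 107374182400)::

    ceph osd pool set-quota mypool max_bytes 107374182400
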
Subcommand ``stats`` obtains stats from all pools, or from the specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``primary-affinity`` adjusts the osd's primary-affinity to
<weight>, where 0.0 <= <weight> <= 1.0.

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
        {--no-increasing}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-utilization {<int[100-]>}
        {--no-increasing}

Subcommand ``rm`` removes osd(s) <id> [<id>...] from the cluster.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will show marked as *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}


Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

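For example, the following removes a failed OSD (the id 7 is illustrative)
from the cluster, its cephx keys, and the crush map in one step::

    ceph osd purge 7 --yes-i-really-mean-it
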
Subcommand ``scrub`` initiates scrub on specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets cluster-wide flag <key>.

Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
        norebalance|norecover|noscrub|nodeep-scrub|notieragent

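For example, preventing OSDs from being marked out during planned
maintenance, then clearing the flag afterwards (see ``unset`` below)::

    ceph osd set noout
    ceph osd unset noout
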
Subcommand ``setcrushmap`` sets crush map from input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``stat`` prints summary of OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
        readforward|readproxy

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

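For example, the following attaches a cache pool to a base pool in writeback
mode and directs client traffic to it (the pool names are illustrative)::

    ceph osd tier add cold-storage hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-storage hot-cache
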
Subcommand ``tree`` prints OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets cluster-wide flag <key>.

Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
        norebalance|norecover|noscrub|nodeep-scrub|notieragent


pg
--

Manage the placement groups in OSDs. It uses some additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid
with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]

Subcommand ``dump_json`` shows human-readable version of pg map in json only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
        {<int>}

Subcommand ``force_create_pg`` forces creation of pg <pgid>.

Usage::

    ceph pg force_create_pg <pgid>

Subcommand ``getmap`` gets binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pgs matching the given pool, osd, or state.

Usage::

    ceph pg ls {<int>} {active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``ls-by-osd`` lists pgs on osd [osd].

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
        {active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``ls-by-pool`` lists pgs with pool = [poolname].

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {active|
        clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``ls-by-primary`` lists pgs with primary = [osd].

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
        {active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized [active|clean|down|replay|splitting|
        scrubbing|scrubq|degraded|inconsistent|peering|repair|
        recovery|backfill_wait|incomplete|stale|remapped|
        deep_scrub|backfill|backfill_toofull|recovery_wait|
        undersized...]}

Subcommand ``map`` shows mapping of pg to osds.

Usage::

    ceph pg map <pgid>

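For example (the pgid is illustrative)::

    ceph pg map 1.7f
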
Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets ratio at which pgs are considered full.

Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets ratio at which pgs are considered too full to backfill.

Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets ratio at which pgs are considered nearly
full.

Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat


quorum
------

Cause a MON to enter or exit quorum.

Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports status of monitor quorum.

Usage::

    ceph quorum_status


report
------

Reports full status of cluster, with optional title tag strings.

Usage::

    ceph report {<tags> [<tags>...]}


scrub
-----

Scrubs the monitor stores.

Usage::

    ceph scrub


status
------

Shows cluster status.

Usage::

    ceph status


sync force
----------

Forces a sync of, and clears, the monitor store.

Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}


tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <args> [<args>...]

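For example, querying a single daemon's version (the daemon name is
illustrative)::

    ceph tell osd.0 version
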

List all available commands.

Usage::

    ceph tell <name (type.id)> help

version
-------

Show mon daemon version.

Usage::

    ceph version

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes.

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so the ``reweight-by-utilization``
   and ``test-reweight-by-utilization`` commands may increase OSD weights.
   When this option is supplied, these commands will not increase an OSD's
   weight even if the OSD is underutilized.


Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)