:orphan:

==================================
ceph -- ceph administration tool
==================================

.. program:: ceph

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *put* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lost* \| *ls* \| *lspools* \| *map* \| *metadata* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *reweight-by-utilization* \| *rm* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_backfillfull_ratio* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **scrub**

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility which is used for manual deployment and
maintenance of a Ceph cluster. It provides a diverse set of commands for
deploying monitors, OSDs, and metadata servers, managing placement groups,
and performing overall maintenance and administration of the cluster.

Commands
========

auth
----

Manage authentication keys. It is used for adding, removing, exporting,
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key if no input file is given, together with
any caps specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

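For example, a plausible invocation that registers a new OSD's key from its
keyring file with typical caps (the entity name, caps, and path here are
illustrative)::

    ceph auth add osd.0 mon 'allow profile osd' osd 'allow *' \
        -i /var/lib/ceph/osd/ceph-0/keyring
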
Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the
master keyring if none is given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes a keyring file with the requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input file is given,
together with any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}

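For example, to fetch (or create on first use) a client key limited to one
pool and write it to a keyring file (the client name, pool, and path are
illustrative)::

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=bar' \
        -o /etc/ceph/ceph.client.foo.keyring
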
Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from the
system/caps pairs specified in the command. If the key already exists, any
given caps must match the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads keyring from input file.

Usage::

    ceph auth import

Subcommand ``list`` lists authentication state.

Usage::

    ceph auth list

Subcommand ``print-key`` displays the requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key.

Usage::

    ceph auth print_key <entity>


compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact


config-key
----------

Manage configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes a configuration key.

Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets a configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``list`` lists configuration keys.

Usage::

    ceph config-key list

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``put`` stores a configuration key and value.

Usage::

    ceph config-key put <key> {<val>}

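For example, a round trip through the key/value store (the key and value are
illustrative)::

    ceph config-key put foo bar
    ceph config-key get foo
    ceph config-key del foo
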

daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help


daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]

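For example, to sample the counters of ``osd.0`` every two seconds, ten times
(the daemon name and numbers are illustrative)::

    ceph daemonperf osd.0 2 10
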

df
--

Show a cluster's free space status.

Usage::

    ceph df {detail}


fs
--

Manage CephFS filesystems. It uses some additional subcommands.

Subcommand ``ls`` lists filesystems.

Usage::

    ceph fs ls

Subcommand ``new`` makes a new filesystem using the named pools <metadata>
and <data>.

Usage::

    ceph fs new <fs_name> <metadata> <data>

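For example, a minimal sequence that creates the two pools and then the
filesystem (the pool names and PG counts are illustrative)::

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 64
    ceph fs new cephfs cephfs_metadata cephfs_data
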
Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named filesystem.

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}


fsid
----

Show the cluster's FSID/UUID.

Usage::

    ceph fsid


health
------

Show the cluster's health.

Usage::

    ceph health {detail}


heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats


injectargs
----------

Inject configuration arguments into the monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]

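For example, to raise the monitor's debug level at runtime (the option and
level are illustrative; any config option can be injected this way)::

    ceph injectargs '--debug-mon 10'
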

log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]


mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``deactivate`` stops an mds.

Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces an mds to the failed state.

Usage::

    ceph mds fail <who>

Subcommand ``rm`` removes an inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets the mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``tell`` sends a command to a particular mds.

Usage::

    ceph mds tell <who> <args> [<args>...]

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds a new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>

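For example (the monitor name and address are illustrative)::

    ceph mon add c 10.0.0.3:6789
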
Subcommand ``dump`` dumps the formatted monmap (optionally from epoch).

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets the monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes the monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mon_status
----------

Reports status of monitors.

Usage::

    ceph mon_status

osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire>
seconds from now).

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}

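For example, to blacklist a client address for one hour (the address, nonce,
and duration are illustrative)::

    ceph osd blacklist add 192.168.0.10:0/3214 3600
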
Subcommand ``ls`` shows blacklisted clients.

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from the blacklist.

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers.

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates a new osd (with optional UUID and ID).

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for <name>
with <weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

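For example, to place a new OSD under a given host with weight 1.0 (the OSD
id and location are illustrative)::

    ceph osd crush add osd.5 1.0 host=node2 root=default
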
Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name>
of type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates an entry or moves the existing entry for
<name> <weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps the crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable straw_calc_version.

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links the existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves the existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map.

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for an erasure coded
pool created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

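For example, a replicated rule that starts at the default root and spreads
replicas across hosts (the rule name is illustrative)::

    ceph osd crush rule create-simple myrule default host firstn
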
Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``list`` lists crush rules.

Usage::

    ceph osd crush rule list

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set``, used alone, sets the crush map from the input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with an osdname/osd.id updates the crushmap position and
weight for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunable values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization.

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates a deep scrub on the specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints a summary of the OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add ``--force`` at the end to override an existing profile (this is
risky).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

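For example, a profile that splits objects into four data chunks with two
coding chunks (the profile name and parameters are illustrative)::

    ceph osd erasure-code-profile set myprofile k=4 m=2
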
Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets the CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets the OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows the largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks an osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds the pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>

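For example, to see which pg and OSDs an object would map to (the pool and
object names are illustrative)::

    ceph osd map rbd myobject
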
Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``pause`` pauses osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints a dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets the pg_temp mapping pgid:[<id> [<id>...]]
(developers only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates a pool.

Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<ruleset>} {<int>}

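For example, a replicated pool with 128 placement groups (the pool name and
PG count are illustrative)::

    ceph osd pool create mypool 128 128 replicated
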
Subcommand ``delete`` deletes a pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for a pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools.

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets an object or byte limit on a pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>

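For example, to cap a pool at 10000 objects, and later to remove the quota by
setting it back to 0 (the pool name and limit are illustrative)::

    ceph osd pool set-quota mypool max_objects 10000
    ceph osd pool set-quota mypool max_objects 0
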
Subcommand ``stats`` obtains stats from all pools, or from the specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``primary-affinity`` adjusts an osd's primary-affinity to <weight>,
where 0.0 <= <weight> <= 1.0.

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets the primary_temp mapping pgid:<id>|-1
(developers only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights an osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
    {--no-increasing}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-utilization {<int[100-]>}
    {--no-increasing}

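For example, to reweight only OSDs that are more than 10% above the average
utilization, without ever raising a weight (the threshold is illustrative)::

    ceph osd reweight-by-utilization 110 --no-increasing
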
Subcommand ``rm`` removes osd(s) <id> [<id>...] from the cluster.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``scrub`` initiates a scrub on the specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets <key>.

Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent

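For example, a common maintenance pattern is to suppress rebalancing while a
host is rebooted (``noout`` is one of the flags listed above)::

    ceph osd set noout
    # ... perform the maintenance, then ...
    ceph osd unset noout
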
Subcommand ``setcrushmap`` sets the crush map from the input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets a new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``stat`` prints a summary of the OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
    readforward|readproxy

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

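For example, a plausible writeback cache tier setup over a base pool (the
pool names are illustrative; both pools must already exist)::

    ceph osd tier add cold-pool hot-pool
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool
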
Subcommand ``tree`` prints an OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets <key>.

Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent


pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts a deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of the pg map (only 'all'
is valid with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]

Subcommand ``dump_json`` shows a human-readable version of the pg map in json
only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
    {<int>}

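For example, to list pgs that have been stuck inactive or unclean for more
than 300 seconds (the threshold is illustrative)::

    ceph pg dump_stuck inactive unclean 300
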
Subcommand ``force_create_pg`` forces creation of pg <pgid>.

Usage::

    ceph pg force_create_pg <pgid>

Subcommand ``getmap`` gets the binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pgs with a specific pool, osd, or state.

Usage::

    ceph pg ls {<int>} {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-osd`` lists pgs on osd [osd].

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-pool`` lists pgs with pool = [poolname].

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {active|
    clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-primary`` lists pgs with primary = [osd].

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``map`` shows the mapping of a pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts a scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets the ratio at which pgs are considered full.

Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets the ratio at which pgs are
considered too full to backfill.

Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets the ratio at which pgs are considered
nearly full.

Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat


quorum
------

Cause a MON to enter or exit quorum.

Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports the status of the monitor quorum.

Usage::

    ceph quorum_status


report
------

Reports the full status of the cluster, with optional title tag strings.

Usage::

    ceph report {<tags> [<tags>...]}


scrub
-----

Scrubs the monitor stores.

Usage::

    ceph scrub


status
------

Shows cluster status.

Usage::

    ceph status


sync force
----------

Forces a sync of and clears the monitor store.

Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}


tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <args> [<args>...]

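For example, to ask a specific OSD for its version, or to inject a debug
setting into it (the daemon id and option are illustrative)::

    ceph tell osd.0 version
    ceph tell osd.0 injectargs '--debug-osd 10'
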
version
-------

Show the mon daemon version.

Usage::

    ceph version

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes.

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so increasing an osd's weight is
   allowed with the ``reweight-by-utilization`` and
   ``test-reweight-by-utilization`` commands. When this option is given with
   those commands, an osd's weight is never increased, even if the osd is
   underutilized.


Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)