==================================
 ceph -- ceph administration tool
==================================

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *put* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **scrub**

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility which is used for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands that allows deployment of
monitors, OSDs, placement groups, MDS and overall maintenance, administration
of the cluster.

Commands
========

auth
----

Manage authentication keys. It is used for adding, removing, exporting,
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from input
file, or random key if no input is given and/or any caps specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes keyring for requested entity, or master keyring if
none given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes keyring file with requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from input file, or random key if no input given and/or any caps specified in the
command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}

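The output of ``get-or-create`` is a keyring in a simple INI-like format. As a
sketch of how a script might consume it (the entity name, key, and caps below
are invented placeholders, not output from a real cluster), Python's
``configparser`` can pull out the secret and the caps:

```python
import configparser

# Hypothetical, abbreviated keyring as printed by `ceph auth get-or-create`;
# the key value is a fabricated placeholder, not a real secret.
keyring_text = """
[client.example]
key = AQExampleFabricatedBase64Payload0000000==
caps mon = "allow r"
caps osd = "allow rw pool=example"
"""

parser = configparser.ConfigParser()
parser.read_string(keyring_text)

section = parser["client.example"]
key = section["key"]                      # the entity's secret key
caps_mon = section["caps mon"].strip('"')  # caps values are quoted in keyrings
print(key)
print(caps_mon)
```

The same parsing applies to files written by ``ceph auth export``, since they
share the keyring format.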
Subcommand ``get-or-create-key`` gets or adds key for ``name`` from system/caps
pairs specified in the command. If key already exists, any given caps must match
the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads keyring from input file.

Usage::

    ceph auth import

Subcommand ``list`` lists authentication state.

Usage::

    ceph auth list

Subcommand ``print-key`` displays requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays requested key.

Usage::

    ceph auth print_key <entity>

compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact

config-key
----------

Manage configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes configuration key.

Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``list`` lists configuration keys.

Usage::

    ceph config-key list

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``put`` puts configuration key and value.

Usage::

    ceph config-key put <key> {<val>}

daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help

daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]

df
--

Show cluster's free space status.

Usage::

    ceph df {detail}

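Like most ``ceph`` commands, ``df`` can emit machine-readable output via the
global ``-f json`` option, which is the convenient form for scripting. A
minimal sketch of consuming such output — the payload below is a hypothetical,
heavily abbreviated stand-in for a real report, not actual cluster output:

```python
import json

# Abbreviated, invented `ceph df -f json`-style payload; field names follow
# the general shape of the report, but the values are made up.
raw = """
{
  "stats": {"total_bytes": 1000, "total_used_bytes": 250, "total_avail_bytes": 750},
  "pools": [
    {"name": "rbd", "id": 0, "stats": {"bytes_used": 100, "objects": 4}}
  ]
}
"""

report = json.loads(raw)
stats = report["stats"]
pct_used = 100.0 * stats["total_used_bytes"] / stats["total_bytes"]
print("%.1f%% used" % pct_used)
for pool in report["pools"]:
    print(pool["name"], pool["stats"]["bytes_used"])
```

In a real script the ``raw`` string would come from running ``ceph df -f json``
and capturing its stdout.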
fs
--

Manage cephfs filesystems. It uses some additional subcommands.

Subcommand ``ls`` lists filesystems.

Usage::

    ceph fs ls

Subcommand ``new`` makes a new filesystem using named pools <metadata> and <data>.

Usage::

    ceph fs new <fs_name> <metadata> <data>

Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named filesystem.

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}

fsid
----

Show cluster's FSID/UUID.

Usage::

    ceph fsid

health
------

Show cluster's health.

Usage::

    ceph health {detail}

heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats

injectargs
----------

Inject configuration arguments into monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]

log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]

mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``deactivate`` stops mds.

Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces mds to status fail.

Usage::

    ceph mds fail <who>

Subcommand ``rm`` removes inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``tell`` sends command to particular mds.

Usage::

    ceph mds tell <who> <args> [<args>...]

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>

Subcommand ``dump`` dumps formatted monmap (optionally from epoch).

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mon_status
----------

Reports status of monitors.

Usage::

    ceph mon_status

osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to blacklist (optionally until <expire> seconds
from now).

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}

Subcommand ``ls`` shows blacklisted clients.

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from blacklist.

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers.

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates new osd (with optional UUID and ID).

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates crushmap position and weight for <name> with
<weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name>
of type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates entry or moves existing entry for <name>
<weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable straw_calc_version.

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in crush map.

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for erasure coded pool
created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``list`` lists crush rules.

Usage::

    ceph osd crush rule list

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set``, used alone, sets crush map from input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with osdname/osd.id updates crushmap position and weight
for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunables values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization.

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates deep scrub on specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints summary of OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing the erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add a --force at the end to override an existing profile (IT IS RISKY).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>

Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``pause`` pauses osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates pool.

Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<ruleset>} {<int>}

Subcommand ``delete`` deletes pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools.

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets object or byte limit on pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>

Subcommand ``stats`` obtains stats from all pools, or from specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``primary-affinity`` adjusts osd primary-affinity from 0.0 <= <weight>
<= 1.0.

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-utilization {<int[100-]>}

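The intent of the overload percentage can be illustrated with a toy model.
This is a deliberately simplified sketch of the idea — not the formula ceph
itself uses: OSDs whose utilization exceeds the cluster average by more than
the overload percentage get their reweight scaled down toward the average.

```python
# Toy sketch of the reweight-by-utilization idea; the data and the exact
# scaling rule are illustrative only, not what ceph computes internally.
def reweight_overloaded(utilizations, overload_pct=120):
    avg = sum(utilizations.values()) / len(utilizations)
    threshold = avg * overload_pct / 100.0
    new_weights = {}
    for osd, used in utilizations.items():
        if used > threshold:
            # Overloaded: scale the reweight down proportionally to how far
            # this OSD sits above the average utilization.
            new_weights[osd] = avg / used
        else:
            new_weights[osd] = 1.0
    return new_weights

# Hypothetical utilizations: osd.2 is above 120% of the average, so only
# its reweight is reduced.
weights = reweight_overloaded({"osd.0": 0.40, "osd.1": 0.45, "osd.2": 0.80})
print(weights)
```

With ``--no-increasing`` (see Options below) the real command additionally
refuses to raise any weight, which in this toy model corresponds to never
assigning a value above the OSD's current reweight.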
Subcommand ``rm`` removes osd(s) <id> [<id>...] from the cluster.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``scrub`` initiates scrub on specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets <key>.

Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent

Subcommand ``setcrushmap`` sets crush map from input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``stat`` prints summary of OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
    readforward|readproxy

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

Subcommand ``tree`` prints OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets <key>.

Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent

pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid
with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_json`` shows human-readable version of pg map in json only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
    {<int>}

Subcommand ``force_create_pg`` forces creation of pg <pgid>.

Usage::

    ceph pg force_create_pg <pgid>

Subcommand ``getmap`` gets binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pg with specific pool, osd, state.

Usage::

    ceph pg ls {<int>} {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-osd`` lists pg on osd [osd].

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-pool`` lists pg with pool = [poolname].

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {active|
    clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-primary`` lists pg with primary = [osd].

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``map`` shows mapping of pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets ratio at which pgs are considered full.

Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets ratio at which pgs are considered
too full to backfill.

Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets ratio at which pgs are considered nearly
full.

Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat

quorum
------

Cause MON to enter or exit quorum.

Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports status of monitor quorum.

Usage::

    ceph quorum_status

report
------

Reports full status of the cluster; optional title tag strings may be given.

Usage::

    ceph report {<tags> [<tags>...]}

scrub
-----

Scrubs the monitor stores.

Usage::

    ceph scrub

status
------

Shows cluster status.

Usage::

    ceph status

sync force
----------

Forces sync of and clears the monitor store.

Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <args> [<args>...]

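When driving ``ceph tell`` from a script, it is common to assemble the
argument vector programmatically and hand it to a process runner. A minimal
sketch — the target and the injected options are hypothetical examples, and
since running it would require a live cluster, the command is only built and
printed here, not executed:

```python
import shlex

def build_tell_cmd(target, args):
    """Assemble a `ceph tell` argv list suitable for subprocess.run
    (not executed in this sketch)."""
    return ["ceph", "tell", target] + list(args)

# Hypothetical example: ask osd.0 to change debug settings at runtime via
# injectargs; note the injected options travel as one quoted argument.
cmd = build_tell_cmd("osd.0", ["injectargs", "--debug-osd 20 --debug-ms 1"])
print(shlex.join(cmd))
```

Passing the argv as a list (rather than a shell string) avoids quoting
problems with the space-separated injected options.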
version
-------

Show mon daemon version.

Usage::

    ceph version

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes.

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so increasing osd weights is allowed
   by the ``reweight-by-utilization`` and ``test-reweight-by-utilization``
   commands. When used with these commands, this option prevents osd weights
   from being increased even if an osd is underutilized.

Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.

See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)