==================================
 ceph -- ceph administration tool
==================================

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *put* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lost* \| *ls* \| *lspools* \| *map* \| *metadata* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *reweight-by-utilization* \| *rm* \| *destroy* \| *purge* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility which is used for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands that allow deployment of
monitors, OSDs, placement groups and MDS, as well as overall maintenance and
administration of the cluster.

Commands
========

auth
----

Manage authentication keys. It is used for adding, removing, exporting
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key if no input is given, along with any
caps specified in the command. Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for **name** from caps specified in the command. Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``. Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the master
keyring if none is given. Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes the keyring file with the requested key. Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key. Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, along with
any caps specified in the command. Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}
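
For example, the following creates (or returns) a key for a hypothetical client
named ``client.backup`` with read-only access to the monitors and read-write
access to an illustrative pool named ``backups``::

    ceph auth get-or-create client.backup mon 'allow r' osd 'allow rw pool=backups'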

Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from system/caps
pairs specified in the command. If the key already exists, any given caps must match
the existing caps for that key. Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads keyring from input file.

Subcommand ``list`` lists authentication state.

Subcommand ``print-key`` displays the requested key. Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key. Usage::

    ceph auth print_key <entity>

compact
-------

Causes compaction of the monitor's leveldb storage.

config-key
----------

Manage configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes a configuration key. Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence. Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets a configuration key. Usage::

    ceph config-key get <key>

Subcommand ``list`` lists configuration keys.

Subcommand ``dump`` dumps configuration keys and values.

Subcommand ``put`` puts a configuration key and value. Usage::

    ceph config-key put <key> {<val>}

daemon
------

Submit admin-socket commands. Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

For example::

    ceph daemon osd.0 help

daemonperf
----------

Watch performance counters from a Ceph daemon. Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
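
For example, to sample the counters of ``osd.0`` every 2 seconds, 10 times
(the daemon name here is illustrative)::

    ceph daemonperf osd.0 2 10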

df
--

Show the cluster's free space status.

fs
--

Manage CephFS filesystems. It uses some additional subcommands.

Subcommand ``ls`` lists filesystems.

Subcommand ``new`` creates a new filesystem using the named pools <metadata> and <data>. Usage::

    ceph fs new <fs_name> <metadata> <data>
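
For example, assuming two pools named ``cephfs_metadata`` and ``cephfs_data``
have already been created, a filesystem could be created with::

    ceph fs new cephfs cephfs_metadata cephfs_data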

Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map. Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named filesystem. Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}

fsid
----

Show the cluster's FSID/UUID.

health
------

Show the cluster's health.

heap
----

Show heap usage info (available only if compiled with tcmalloc). Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats

injectargs
----------

Inject configuration arguments into the monitor. Usage::

    ceph injectargs <injected_args> [<injected_args>...]
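
For example, to raise the monitor debug level at runtime (the option and level
shown here are just one possibility)::

    ceph injectargs '--debug-mon 10'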

log
---

Log supplied text to the monitor log. Usage::

    ceph log <logtext> [<logtext>...]

mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature. Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature. Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Subcommand ``deactivate`` stops an mds. Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces an mds to the failed state.

Subcommand ``rm`` removes an inactive mds. Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed mds. Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets the mds state of <gid> to <numeric-state>. Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Subcommand ``tell`` sends a command to a particular mds. Usage::

    ceph mds tell <who> <args> [<args>...]

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds a new monitor named <name> at <addr>. Usage::

    ceph mon add <name> <IPaddr[:port]>
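
For example, to add a monitor named ``c`` at an illustrative address::

    ceph mon add c 10.0.0.3:6789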

Subcommand ``dump`` dumps the formatted monmap (optionally from epoch). Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets the monmap. Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes the monitor named <name>. Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

mon_status
----------

Reports status of the monitors.

osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire> seconds
from now). Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
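
For example, to blacklist an illustrative client address for one hour::

    ceph osd blacklist add 192.168.0.101:0/3012345 3600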

Subcommand ``ls`` shows blacklisted clients. Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from the blacklist. Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers.

Subcommand ``create`` creates a new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand ``new`` should instead be used. Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` reuses a previously destroyed OSD *id*. The new OSD will
have the specified *uuid*, and the command expects a JSON file containing
the base64 cephx key for auth entity *client.osd.<id>*, as well as an optional
base64 cephx key for dm-crypt lockbox access and a dm-crypt key. Specifying
a dm-crypt key requires specifying the accompanying lockbox cephx key. Usage::

    ceph osd new {<id>} {<uuid>} -i {<secrets.json>}

The secrets JSON file is expected to take the following form::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg=="
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>"
    }

Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for <name> with
<weight> and location <args>. Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
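
For example, to add ``osd.5`` with weight 1.0 under an illustrative host and
rack (CRUSH locations are given as key=value pairs)::

    ceph osd crush add osd.5 1.0 host=node1 rack=rack1 root=default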

Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name> of
type <type>. Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates an entry or moves the existing entry for <name>
<weight> at/to location <args>. Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps the crush map.

Subcommand ``get-tunable`` gets the crush tunable straw_calc_version. Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links the existing entry for <name> under location <args>. Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves the existing entry for <name> to location <args>. Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from the crush map (everywhere, or just at
<ancestor>). Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>. Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map. Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly. Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map. Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>). Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for the erasure coded pool
created with <profile> (default default). Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep is best for erasure pools). Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all). Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``list`` lists crush rules. Usage::

    ceph osd crush rule list

Subcommand ``ls`` lists crush rules. Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>. Usage::

    ceph osd crush rule rm <name>

Subcommand ``set``, used alone, sets the crush map from the input file.

Subcommand ``set`` with an osdname/osd.id updates the crushmap position and weight
for <name> to <weight> with location <args>. Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version. Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows the current crush tunables. Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Subcommand ``tunables`` sets crush tunable values to <profile>. Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just at
<ancestor>). Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization. Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates a deep scrub on the specified osd. Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down. Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints a summary of the OSD map. Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing the erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>. Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles. Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>. Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add ``--force`` at the end to override an existing profile (this is risky). Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location. Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets the CRUSH map. Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets the OSD map. Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows the largest OSD id.

Subcommand ``in`` sets osd(s) <id> [<id>...] in. Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks an osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL. Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids. Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools. Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds the pg for <object> in <pool>. Usage::

    ceph osd map <poolname> <objectname>
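
For example, to see which placement group and OSDs an object maps to (pool and
object names here are illustrative)::

    ceph osd map mypool myobject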

Subcommand ``metadata`` fetches metadata for osd <id>. Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out. Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``pause`` pauses the osd.

Subcommand ``perf`` prints a dump of OSD perf summary stats.

Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
only). Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates a pool. Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<ruleset>} {<int>}
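
For example, to create a replicated pool with an illustrative name and 128
placement groups (the pg count should be chosen to suit your cluster)::

    ceph osd pool create mypool 128 128 replicated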

Subcommand ``delete`` deletes a pool. Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>. Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all
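
For example, to read the placement group count of an illustrative pool::

    ceph osd pool get mypool pg_num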

Subcommand ``get-quota`` obtains object or byte limits for a pool. Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools. Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>. Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>. Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>. Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>. Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets an object or byte limit on a pool. Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
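
For example, to cap an illustrative pool at roughly 10 GiB of data::

    ceph osd pool set-quota mypool max_bytes 10737418240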

Subcommand ``stats`` obtains stats from all pools, or from the specified pool. Usage::

    ceph osd pool stats {<name>}

Subcommand ``primary-affinity`` adjusts the osd primary-affinity in the range
0.0 <= <weight> <= 1.0. Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
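
For example, to reduce the likelihood that ``osd.3`` is selected as a primary::

    ceph osd primary-affinity osd.3 0.5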

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only). Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd. Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights an osd to 0.0 < <weight> < 1.0. Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120]. Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname>...]}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120]. Usage::

    ceph osd reweight-by-utilization {<int[100-]>}
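
For example, to lower the weight only of OSDs that are more than 10% above the
average utilization::

    ceph osd reweight-by-utilization 110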

Subcommand ``rm`` removes osd(s) <id> [<id>...] from the cluster. Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will show as marked *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**. Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}

Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``. Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand ``scrub`` initiates a scrub on the specified osd.

Subcommand ``set`` sets cluster-wide flag <key>. Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent

Subcommand ``setcrushmap`` sets the crush map from the input file. Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets a new maximum osd value. Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``stat`` prints a summary of the OSD map.

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one). Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one). Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>. Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
    readforward|readproxy

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one). Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>. Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>. Usage::

    ceph osd tier set-overlay <poolname> <poolname>
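
For example, a writeback cache tier could be layered on top of a base pool
using illustrative pool names (both pools must already exist)::

    ceph osd tier add coldpool hotpool
    ceph osd tier cache-mode hotpool writeback
    ceph osd tier set-overlay coldpool hotpool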

Subcommand ``tree`` prints the OSD tree. Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses the osd.

Subcommand ``unset`` unsets cluster-wide flag <key>. Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent

pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs. Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts a deep scrub on <pgid>. Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of the pg map (only 'all' is
valid with plain). Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_json`` shows a human-readable version of the pg map in json only. Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_pools_json`` shows pg pools info in json only. Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs. Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
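
For example, to list pgs that are stuck in the unclean state::

    ceph pg dump_stuck unclean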

Subcommand ``force_create_pg`` forces creation of pg <pgid>. Usage::

    ceph pg force_create_pg <pgid>

Subcommand ``getmap`` gets the binary pg map to -o/stdout.

Subcommand ``ls`` lists pgs with a specific pool, osd, or state. Usage::

    ceph pg ls {<int>} {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-osd`` lists pgs on osd [osd]. Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-pool`` lists pgs with pool = [poolname]. Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {active|
    clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-primary`` lists pgs with primary = [osd]. Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``map`` shows the mapping of a pg to osds.

Subcommand ``repair`` starts repair on <pgid>. Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts a scrub on <pgid>. Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets the ratio at which pgs are considered full. Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets the ratio at which pgs are considered too full to backfill. Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets the ratio at which pgs are considered nearly
full. Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>

Subcommand ``stat`` shows placement group status.

quorum
------

Cause a MON to enter or exit quorum. Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports status of the monitor quorum.

report
------

Reports full status of the cluster, with optional title tag strings. Usage::

    ceph report {<tags> [<tags>...]}

scrub
-----

Scrubs the monitor stores.

status
------

Shows cluster status.

sync force
----------

Forces a sync of and clears the monitor store. Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

tell
----

Sends a command to a specific daemon. Usage::

    ceph tell <name (type.id)> <args> [<args>...]

List all available commands. Usage::

    ceph tell <name (type.id)> help
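
For example, to ask a specific OSD daemon for its version::

    ceph tell osd.0 version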

version
-------

Show the mon daemon version.

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes.

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.
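
For example, many commands can emit machine-readable output when a format is
requested::

    ceph health -f json-pretty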

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so increasing the osd weight is allowed
   when using the ``reweight-by-utilization`` or ``test-reweight-by-utilization``
   commands. If this option is used with these commands, it prevents the osd weight
   from being increased even when the osd is underutilized.

Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.

See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)