3 ==================================
4 ceph -- ceph administration tool
5 ==================================
12 | **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...
14 | **ceph** **compact**
| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *set* ] ...
18 | **ceph** **daemon** *<name>* \| *<path>* *<command>* ...
20 | **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]
22 | **ceph** **df** *{detail}*
24 | **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...
28 | **ceph** **health** *{detail}*
30 | **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...
32 | **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]
34 | **ceph** **log** *<logtext>* [ *<logtext>*... ]
36 | **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...
38 | **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...
40 | **ceph** **mon_status**
42 | **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...
44 | **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...
46 | **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...
48 | **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...
50 | **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...
52 | **ceph** **quorum** [ *enter* \| *exit* ]
54 | **ceph** **quorum_status**
56 | **ceph** **report** { *<tags>* [ *<tags>...* ] }
62 | **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
64 | **ceph** **tell** *<name (type.id)> <args> [<args>...]*
66 | **ceph** **version**
:program:`ceph` is a control utility used for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands that allow deployment of
monitors, OSDs, placement groups, and MDS daemons, as well as overall maintenance
and administration of the cluster.
82 Manage authentication keys. It is used for adding, removing, exporting
83 or updating of authentication keys for a particular entity such as a monitor or
84 OSD. It uses some additional subcommands.
Subcommand ``add`` adds authentication info for a particular entity from an input
file, or generates a random key if no input file is given. It can also set the
caps, if any are specified in the command.
91 ceph auth add <entity> {<caps> [<caps>...]}
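For example, to register a hypothetical client key with read access to the monitors and read-write access to one pool (the names here are illustrative)::

    ceph auth add client.john mon 'allow r' osd 'allow rw pool=liverpool'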
93 Subcommand ``caps`` updates caps for **name** from caps specified in the command.
97 ceph auth caps <entity> <caps> [<caps>...]
99 Subcommand ``del`` deletes all caps for ``name``.
103 ceph auth del <entity>
105 Subcommand ``export`` writes keyring for requested entity, or master keyring if
110 ceph auth export {<entity>}
112 Subcommand ``get`` writes keyring file with requested key.
116 ceph auth get <entity>
118 Subcommand ``get-key`` displays requested key.
122 ceph auth get-key <entity>
Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input file is given. It can
also set the caps, if any are specified in the command.
130 ceph auth get-or-create <entity> {<caps> [<caps>...]}
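For example, a sketch that creates (or returns) a key for a hypothetical client and writes the resulting keyring to a file with ``-o``::

    ceph auth get-or-create client.george mon 'allow r' osd 'allow rw pool=liverpool' -o george.keyring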
132 Subcommand ``get-or-create-key`` gets or adds key for ``name`` from system/caps
133 pairs specified in the command. If key already exists, any given caps must match
134 the existing caps for that key.
138 ceph auth get-or-create-key <entity> {<caps> [<caps>...]}
140 Subcommand ``import`` reads keyring from input file.
146 Subcommand ``ls`` lists authentication state.
152 Subcommand ``print-key`` displays requested key.
156 ceph auth print-key <entity>
158 Subcommand ``print_key`` displays requested key.
162 ceph auth print_key <entity>
168 Causes compaction of monitor's leveldb storage.
Manage configuration keys. It uses some additional subcommands.
180 Subcommand ``del`` deletes configuration key.
184 ceph config-key del <key>
Subcommand ``exists`` checks for a configuration key's existence.
190 ceph config-key exists <key>
192 Subcommand ``get`` gets the configuration key.
196 ceph config-key get <key>
198 Subcommand ``list`` lists configuration keys.
204 Subcommand ``dump`` dumps configuration keys and values.
Subcommand ``set`` sets a configuration key to a value.
214 ceph config-key set <key> {<val>}
220 Submit admin-socket commands.
224 ceph daemon {daemon_name|socket_path} {command} ...
228 ceph daemon osd.0 help
234 Watch performance counters from a Ceph daemon.
238 ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
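For example, to sample the counters of a hypothetical local osd.0 every two seconds, ten times::

    ceph daemonperf osd.0 2 10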
244 Show cluster's free space status.
Show the releases and features of all daemons and clients connected to the
cluster, along with a count for each bucket, grouped by the corresponding
features/releases. Each release of Ceph supports a different set of features,
expressed by the features bitmask. New cluster features require that clients
support the feature, or else they are not allowed to connect to the cluster.
As new features or capabilities are enabled after an upgrade, older clients
are prevented from connecting.
270 Manage cephfs filesystems. It uses some additional subcommands.
Subcommand ``ls`` lists filesystems.
Subcommand ``new`` makes a new filesystem using named pools <metadata> and <data>.
282 ceph fs new <fs_name> <metadata> <data>
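For example, a sketch that creates the two pools and then the filesystem (the pool names and pg counts are illustrative)::

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 64
    ceph fs new cephfs cephfs_metadata cephfs_data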
284 Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map
288 ceph fs reset <fs_name> {--yes-i-really-mean-it}
Subcommand ``rm`` disables the named filesystem.
294 ceph fs rm <fs_name> {--yes-i-really-mean-it}
300 Show cluster's FSID/UUID.
310 Show cluster's health.
Show heap usage info (available only if compiled with tcmalloc).
324 ceph heap dump|start_profiler|stop_profiler|release|stats
330 Inject configuration arguments into monitor.
334 ceph injectargs <injected_args> [<injected_args>...]
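For example, a hypothetical invocation that raises the monitor debug level at runtime (the change does not persist across restarts)::

    ceph injectargs '--debug-mon 10'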
340 Log supplied text to the monitor log.
344 ceph log <logtext> [<logtext>...]
350 Manage metadata server configuration and administration. It uses some
351 additional subcommands.
Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.
356 Subcommand ``rm_compat`` removes compatible feature.
360 ceph mds compat rm_compat <int[0-]>
362 Subcommand ``rm_incompat`` removes incompatible feature.
366 ceph mds compat rm_incompat <int[0-]>
368 Subcommand ``show`` shows mds compatibility settings.
374 Subcommand ``deactivate`` stops mds.
378 ceph mds deactivate <who>
380 Subcommand ``fail`` forces mds to status fail.
386 Subcommand ``rm`` removes inactive mds.
ceph mds rm <int[0-]> <name (type.id)>
392 Subcommand ``rmfailed`` removes failed mds.
396 ceph mds rmfailed <int[0-]>
398 Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.
402 ceph mds set_state <int[0-]> <int[0-20]>
404 Subcommand ``stat`` shows MDS status.
410 Subcommand ``tell`` sends command to particular mds.
414 ceph mds tell <who> <args> [<args>...]
Manage monitor configuration and administration. It uses some additional
subcommands.
422 Subcommand ``add`` adds new monitor named <name> at <addr>.
426 ceph mon add <name> <IPaddr[:port]>
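For example, with an illustrative monitor name and address::

    ceph mon add c 10.0.0.3:6789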
Subcommand ``dump`` dumps formatted monmap (optionally from epoch).
432 ceph mon dump {<int[0-]>}
434 Subcommand ``getmap`` gets monmap.
438 ceph mon getmap {<int[0-]>}
440 Subcommand ``remove`` removes monitor named <name>.
444 ceph mon remove <name>
446 Subcommand ``stat`` summarizes monitor status.
455 Reports status of monitors.
464 Ceph manager daemon configuration and management.
466 Subcommand ``dump`` dumps the latest MgrMap, which describes the active
467 and standby manager daemons.
Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon, a standby
will take its place.
481 Subcommand ``module ls`` will list currently enabled manager modules (plugins).
487 Subcommand ``module enable`` will enable a manager module. Available modules are included in MgrMap and visible via ``mgr dump``.
491 ceph mgr module enable <module>
493 Subcommand ``module disable`` will disable an active manager module.
497 ceph mgr module disable <module>
499 Subcommand ``metadata`` will report metadata about all manager daemons or, if the name is specified, a single manager daemon.
503 ceph mgr metadata [name]
505 Subcommand ``versions`` will report a count of running daemon versions.
511 Subcommand ``count-metadata`` will report a count of any daemon metadata field.
515 ceph mgr count-metadata <field>
Manage OSD configuration and administration. It uses some additional
subcommands.
Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.
Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire> seconds
from now).
532 ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
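For example, to blacklist a hypothetical client address for one hour::

    ceph osd blacklist add 198.51.100.7:0/3271 3600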
Subcommand ``ls`` shows blacklisted clients.
538 ceph osd blacklist ls
Subcommand ``rm`` removes <addr> from the blacklist.
544 ceph osd blacklist rm <EntityAddr>
Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers.
Subcommand ``create`` creates a new osd (with optional UUID and ID).
This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.
Subcommand ``new`` should be used instead.
561 ceph osd create {<uuid>} {<id>}
Subcommand ``new`` reuses a previously destroyed OSD *id*. The new OSD will
have the specified *uuid*, and the command expects a JSON file containing
the base64 cephx key for auth entity *client.osd.<id>*, as well as an optional
base64 cephx key for dm-crypt lockbox access and a dm-crypt key. Specifying
a dm-crypt key requires specifying the accompanying lockbox cephx key.
571 ceph osd new {<id>} {<uuid>} -i {<secrets.json>}
The secrets JSON file is expected to take one of the following forms::
576 "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg=="
582 "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
583 "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
584 "dmcrypt_key": "<dm-crypt key>"
Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.
591 Subcommand ``add`` adds or updates crushmap position and weight for <name> with
592 <weight> and location <args>.
596 ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
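For example, to place a hypothetical osd.5 with weight 1.0 under an existing host bucket::

    ceph osd crush add osd.5 1.0 root=default host=node2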
Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name> of
type <type>.
603 ceph osd crush add-bucket <name> <type>
605 Subcommand ``create-or-move`` creates entry or moves existing entry for <name>
606 <weight> at/to location <args>.
610 ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
613 Subcommand ``dump`` dumps crush map.
Subcommand ``get-tunable`` gets the crush tunable ``straw_calc_version``.
623 ceph osd crush get-tunable straw_calc_version
625 Subcommand ``link`` links existing entry for <name> under location <args>.
629 ceph osd crush link <name> <args> [<args>...]
631 Subcommand ``move`` moves existing entry for <name> to location <args>.
635 ceph osd crush move <name> <args> [<args>...]
Subcommand ``remove`` removes <name> from the crush map (everywhere, or just at
<ancestor>).
642 ceph osd crush remove <name> {<ancestor>}
Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.
648 ceph osd crush rename-bucket <srcname> <dstname>
650 Subcommand ``reweight`` change <name>'s weight to <weight> in crush map.
654 ceph osd crush reweight <name> <float[0.0-]>
Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.
661 ceph osd crush reweight-all
Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map.
668 ceph osd crush reweight-subtree <name> <weight>
Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>).
675 ceph osd crush rm <name> {<ancestor>}
Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.
680 Subcommand ``create-erasure`` creates crush rule <name> for erasure coded pool
681 created with <profile> (default default).
685 ceph osd crush rule create-erasure <name> {<profile>}
687 Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
688 replicate across buckets of type <type>, using a choose mode of <firstn|indep>
689 (default firstn; indep best for erasure pools).
693 ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
695 Subcommand ``dump`` dumps crush rule <name> (default all).
699 ceph osd crush rule dump {<name>}
701 Subcommand ``ls`` lists crush rules.
705 ceph osd crush rule ls
707 Subcommand ``rm`` removes crush rule <name>.
711 ceph osd crush rule rm <name>
Subcommand ``set``, used alone, sets the crush map from the input file.
Subcommand ``set`` with an osdname/osd.id updates the crushmap position and weight
for <name> to <weight> with location <args>.
724 ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is ``straw_calc_version``.
731 ceph osd crush set-tunable straw_calc_version <value>
733 Subcommand ``show-tunables`` shows current crush tunables.
737 ceph osd crush show-tunables
739 Subcommand ``tree`` shows the crush buckets and items in a tree view.
Subcommand ``tunables`` sets crush tunable values to <profile>.
749 ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default
Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just at
<ancestor>).
756 ceph osd crush unlink <name> {<ancestor>}
Subcommand ``df`` shows OSD utilization.
762 ceph osd df {plain|tree}
764 Subcommand ``deep-scrub`` initiates deep scrub on specified osd.
768 ceph osd deep-scrub <who>
770 Subcommand ``down`` sets osd(s) <id> [<id>...] down.
774 ceph osd down <ids> [<ids>...]
776 Subcommand ``dump`` prints summary of OSD map.
780 ceph osd dump {<int[0-]>}
782 Subcommand ``erasure-code-profile`` is used for managing the erasure code
783 profiles. It uses some additional subcommands.
785 Subcommand ``get`` gets erasure code profile <name>.
789 ceph osd erasure-code-profile get <name>
791 Subcommand ``ls`` lists all erasure code profiles.
795 ceph osd erasure-code-profile ls
797 Subcommand ``rm`` removes erasure code profile <name>.
801 ceph osd erasure-code-profile rm <name>
Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add ``--force`` at the end to override an existing profile (this is risky).
808 ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
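For example, a sketch of a k=4, m=2 profile with a host failure domain (the profile name is illustrative, and the accepted keys depend on the erasure code plugin)::

    ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host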
Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.
814 ceph osd find <int[0-]>
816 Subcommand ``getcrushmap`` gets CRUSH map.
820 ceph osd getcrushmap {<int[0-]>}
822 Subcommand ``getmap`` gets OSD map.
826 ceph osd getmap {<int[0-]>}
828 Subcommand ``getmaxosd`` shows largest OSD id.
834 Subcommand ``in`` sets osd(s) <id> [<id>...] in.
838 ceph osd in <ids> [<ids>...]
840 Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
841 MORE REPLICAS EXIST, BE CAREFUL.
845 ceph osd lost <int[0-]> {--yes-i-really-mean-it}
847 Subcommand ``ls`` shows all OSD ids.
851 ceph osd ls {<int[0-]>}
853 Subcommand ``lspools`` lists pools.
857 ceph osd lspools {<int>}
859 Subcommand ``map`` finds pg for <object> in <pool>.
863 ceph osd map <poolname> <objectname>
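For example, with a hypothetical pool and object name; the output includes the pg id and its acting set::

    ceph osd map rbd myobject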
865 Subcommand ``metadata`` fetches metadata for osd <id>.
869 ceph osd metadata {int[0-]} (default all)
871 Subcommand ``out`` sets osd(s) <id> [<id>...] out.
875 ceph osd out <ids> [<ids>...]
877 Subcommand ``ok-to-stop`` checks whether the list of OSD(s) can be
878 stopped without immediately making data unavailable. That is, all
879 data should remain readable and writeable, although data redundancy
880 may be reduced as some PGs may end up in a degraded (but active)
881 state. It will return a success code if it is okay to stop the
882 OSD(s), or an error code and informative message if it is not or if no
883 conclusion can be drawn at the current time.
887 ceph osd ok-to-stop <id> [<ids>...]
889 Subcommand ``pause`` pauses osd.
895 Subcommand ``perf`` prints dump of OSD perf summary stats.
Subcommand ``pg-temp`` sets the pg_temp mapping pgid:[<id> [<id>...]] (developers
only).
906 ceph osd pg-temp <pgid> {<id> [<id>...]}
908 Subcommand ``force-create-pg`` forces creation of pg <pgid>.
912 ceph osd force-create-pg <pgid>
Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.
918 Subcommand ``create`` creates pool.
922 ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
923 {<erasure_code_profile>} {<ruleset>} {<int>}
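For example, to create a hypothetical replicated pool with 128 placement groups::

    ceph osd pool create mypool 128 128 replicated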
925 Subcommand ``delete`` deletes pool.
929 ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}
931 Subcommand ``get`` gets pool parameter <var>.
935 ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
936 pgp_num|crush_ruleset|auid|write_fadvise_dontneed
938 Only for tiered pools::
940 ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
941 target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
942 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
943 min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n
945 Only for erasure coded pools::
947 ceph osd pool get <poolname> erasure_code_profile
949 Use ``all`` to get all pool parameters that apply to the pool's type::
951 ceph osd pool get <poolname> all
953 Subcommand ``get-quota`` obtains object or byte limits for pool.
957 ceph osd pool get-quota <poolname>
Subcommand ``ls`` lists pools.
963 ceph osd pool ls {detail}
965 Subcommand ``mksnap`` makes snapshot <snap> in <pool>.
969 ceph osd pool mksnap <poolname> <snap>
971 Subcommand ``rename`` renames <srcpool> to <destpool>.
975 ceph osd pool rename <poolname> <poolname>
977 Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.
981 ceph osd pool rmsnap <poolname> <snap>
983 Subcommand ``set`` sets pool parameter <var> to <val>.
987 ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
988 pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|
989 hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
990 target_max_bytes|target_max_objects|cache_target_dirty_ratio|
991 cache_target_dirty_high_ratio|
992 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
993 min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
994 hit_set_search_last_n
995 <val> {--yes-i-really-mean-it}
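For example, to set the replica count of a hypothetical pool to three::

    ceph osd pool set mypool size 3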
997 Subcommand ``set-quota`` sets object or byte limit on pool.
1001 ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
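For example, to cap a hypothetical pool at 10000 objects (setting a quota back to 0 removes it)::

    ceph osd pool set-quota mypool max_objects 10000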
Subcommand ``stats`` obtains stats from all pools, or from the specified pool.
1007 ceph osd pool stats {<name>}
Subcommand ``primary-affinity`` adjusts the osd primary-affinity to
0.0 <= <weight> <= 1.0.
1014 ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
Subcommand ``primary-temp`` sets the primary_temp mapping pgid:<id>|-1 (developers
only).
1021 ceph osd primary-temp <pgid> <id>
1023 Subcommand ``repair`` initiates repair on a specified osd.
1027 ceph osd repair <who>
1029 Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.
ceph osd reweight <int[0-]> <float[0.0-1.0]>
Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].
ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname>...]}
Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].
1048 ceph osd reweight-by-utilization {<int[100-]>}
1051 Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.
1056 ceph osd rm <ids> [<ids>...]
Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.
This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will show as marked *destroyed*.
In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.
1071 ceph osd destroy <id> {--yes-i-really-mean-it}
1074 Subcommand ``purge`` performs a combination of ``osd destroy``,
1075 ``osd rm`` and ``osd crush remove``.
1079 ceph osd purge <id> {--yes-i-really-mean-it}
1081 Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
1082 destroy an OSD without reducing overall data redundancy or durability.
1083 It will return a success code if it is definitely safe, or an error
1084 code and informative message if it is not or if no conclusion can be
1085 drawn at the current time.
1089 ceph osd safe-to-destroy <id> [<ids>...]
1091 Subcommand ``scrub`` initiates scrub on specified osd.
1095 ceph osd scrub <who>
1097 Subcommand ``set`` sets <key>.
1101 ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
1102 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1104 Subcommand ``setcrushmap`` sets crush map from input file.
1108 ceph osd setcrushmap
1110 Subcommand ``setmaxosd`` sets new maximum osd value.
1114 ceph osd setmaxosd <int[0-]>
Subcommand ``set-require-min-compat-client`` enforces that the cluster remain
backward compatible with the specified client version. It prevents you from
making any changes (e.g., to crush tunables, or enabling new features) that
would violate the current setting. Note that this subcommand will fail if
any connected daemon or client is not compatible with the features offered by
the given <version>. To see the features and releases of all clients connected
to the cluster, please see `ceph features`_.
1126 ceph osd set-require-min-compat-client <version>
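For example, to require that connecting clients support at least the jewel feature set::

    ceph osd set-require-min-compat-client jewel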
1128 Subcommand ``stat`` prints summary of OSD map.
Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.
Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).
1142 ceph osd tier add <poolname> <poolname> {--force-nonempty}
1144 Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
1145 to existing pool <pool> (the first one).
1149 ceph osd tier add-cache <poolname> <poolname> <int[0-]>
1151 Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.
1155 ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
1156 readforward|readproxy
1158 Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
1159 <pool> (the first one).
1163 ceph osd tier remove <poolname> <poolname>
1165 Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.
1169 ceph osd tier remove-overlay <poolname>
Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.
1176 ceph osd tier set-overlay <poolname> <poolname>
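Putting the tier subcommands together, a sketch that attaches a hypothetical cache pool in writeback mode to a base pool::

    ceph osd tier add cold-storage hot-storage
    ceph osd tier cache-mode hot-storage writeback
    ceph osd tier set-overlay cold-storage hot-storage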
1178 Subcommand ``tree`` prints OSD tree.
1182 ceph osd tree {<int[0-]>}
1184 Subcommand ``unpause`` unpauses osd.
1190 Subcommand ``unset`` unsets <key>.
1194 ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
1195 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1201 It is used for managing the placement groups in OSDs. It uses some
1202 additional subcommands.
1204 Subcommand ``debug`` shows debug info about pgs.
1208 ceph pg debug unfound_objects_exist|degraded_pgs_exist
1210 Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.
1214 ceph pg deep-scrub <pgid>
Subcommand ``dump`` shows human-readable versions of the pg map (only 'all' is
valid with plain).
1221 ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1223 Subcommand ``dump_json`` shows human-readable version of pg map in json only.
1227 ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1229 Subcommand ``dump_pools_json`` shows pg pools info in json only.
1233 ceph pg dump_pools_json
1235 Subcommand ``dump_stuck`` shows information about stuck pgs.
1239 ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
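For example, to show only pgs stuck in the stale state::

    ceph pg dump_stuck stale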
1242 Subcommand ``getmap`` gets binary pg map to -o/stdout.
1248 Subcommand ``ls`` lists pg with specific pool, osd, state
ceph pg ls {<int>} {active|clean|down|replay|splitting|
scrubbing|scrubq|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
deep_scrub|backfill|backfill_toofull|recovery_wait|
undersized [active|clean|down|replay|splitting|
scrubbing|scrubq|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
deep_scrub|backfill|backfill_toofull|recovery_wait|
undersized]}
1262 Subcommand ``ls-by-osd`` lists pg on osd [osd]
ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
{active|clean|down|replay|splitting|
scrubbing|scrubq|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
deep_scrub|backfill|backfill_toofull|recovery_wait|
undersized [active|clean|down|replay|splitting|
scrubbing|scrubq|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
deep_scrub|backfill|backfill_toofull|recovery_wait|
undersized]}
1277 Subcommand ``ls-by-pool`` lists pg with pool = [poolname]
ceph pg ls-by-pool <poolstr> {<int>} {active|
clean|down|replay|splitting|
scrubbing|scrubq|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
deep_scrub|backfill|backfill_toofull|recovery_wait|
undersized [active|clean|down|replay|splitting|
scrubbing|scrubq|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
deep_scrub|backfill|backfill_toofull|recovery_wait|
undersized]}
1292 Subcommand ``ls-by-primary`` lists pg with primary = [osd]
ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
{active|clean|down|replay|splitting|
scrubbing|scrubq|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
deep_scrub|backfill|backfill_toofull|recovery_wait|
undersized [active|clean|down|replay|splitting|
scrubbing|scrubq|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
deep_scrub|backfill|backfill_toofull|recovery_wait|
undersized]}
1307 Subcommand ``map`` shows mapping of pg to osds.
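For example, with a hypothetical pgid::

    ceph pg map 2.1f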
1313 Subcommand ``repair`` starts repair on <pgid>.
1317 ceph pg repair <pgid>
1319 Subcommand ``scrub`` starts scrub on <pgid>.
1323 ceph pg scrub <pgid>
1325 Subcommand ``set_full_ratio`` sets ratio at which pgs are considered full.
1329 ceph pg set_full_ratio <float[0.0-1.0]>
1331 Subcommand ``set_backfillfull_ratio`` sets ratio at which pgs are considered too full to backfill.
1335 ceph pg set_backfillfull_ratio <float[0.0-1.0]>
Subcommand ``set_nearfull_ratio`` sets the ratio at which pgs are considered
nearly full.
1342 ceph pg set_nearfull_ratio <float[0.0-1.0]>
1344 Subcommand ``stat`` shows placement group status.
Causes a MON to enter or exit quorum.
1358 ceph quorum enter|exit
1360 Note: this only works on the MON to which the ``ceph`` command is connected.
1361 If you want a specific MON to enter or exit quorum, use this syntax::
1363 ceph tell mon.<id> quorum enter|exit
1368 Reports status of monitor quorum.
Reports full status of the cluster, with optional title tag strings.
1382 ceph report {<tags> [<tags>...]}
1388 Scrubs the monitor stores.
1398 Shows cluster status.
Forces a sync of, and clears, the monitor store.
1412 ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
1418 Sends a command to a specific daemon.
1422 ceph tell <name (type.id)> <args> [<args>...]
1425 List all available commands.
1429 ceph tell <name (type.id)> help
1434 Show mon daemon version
1443 .. option:: -i infile
will specify an input file to be passed along as a payload with the
command to the monitor cluster. This is only used for specific
monitor commands.
1449 .. option:: -o outfile
will write any payload returned by the monitor cluster with its
reply to outfile. Only specific monitor commands (e.g. osd getmap)
return a payload.
1455 .. option:: -c ceph.conf, --conf=ceph.conf
1457 Use ceph.conf configuration file instead of the default
1458 ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.
1460 .. option:: --id CLIENT_ID, --user CLIENT_ID
1462 Client id for authentication.
1464 .. option:: --name CLIENT_NAME, -n CLIENT_NAME
1466 Client name for authentication.
1468 .. option:: --cluster CLUSTER
1470 Name of the Ceph cluster.
1472 .. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
1474 Submit admin-socket commands via admin sockets in /var/run/ceph.
1476 .. option:: --admin-socket ADMIN_SOCKET_NOPE
1478 You probably mean --admin-daemon
1480 .. option:: -s, --status
1482 Show cluster status.
1484 .. option:: -w, --watch
1486 Watch live cluster changes.
.. option:: --watch-debug
Watch debug events.
.. option:: --watch-info
Watch info events.
1496 .. option:: --watch-sec
1498 Watch security events.
1500 .. option:: --watch-warn
1502 Watch warning events.
.. option:: --watch-error
Watch error events.
.. option:: --version, -v
Display version.
.. option:: --verbose
Make verbose.
.. option:: --concise
Make less verbose.
.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format
Format of output.
1524 .. option:: --connect-timeout CLUSTER_TIMEOUT
1526 Set a timeout for connecting to the cluster.
.. option:: --no-increasing
``--no-increasing`` is off by default, which means that increasing osd weights
is allowed by the ``reweight-by-utilization`` and ``test-reweight-by-utilization``
commands. When this option is used with these commands, it prevents osd weights
from being increased even if the osd is underutilized.
1539 :program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
1540 the Ceph documentation at http://ceph.com/docs for more information.
1546 :doc:`ceph-mon <ceph-mon>`\(8),
1547 :doc:`ceph-osd <ceph-osd>`\(8),
1548 :doc:`ceph-mds <ceph-mds>`\(8)