3 ==================================
4 ceph -- ceph administration tool
5 ==================================
12 | **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...
14 | **ceph** **compact**
16 | **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *set* ] ...
18 | **ceph** **daemon** *<name>* \| *<path>* *<command>* ...
20 | **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]
22 | **ceph** **df** *{detail}*
24 | **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...
28 | **ceph** **health** *{detail}*
30 | **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...
32 | **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]
34 | **ceph** **log** *<logtext>* [ *<logtext>*... ]
36 | **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...
38 | **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...
40 | **ceph** **mon_status**
42 | **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...
44 | **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...
46 | **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...
48 | **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...
50 | **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...
52 | **ceph** **quorum** [ *enter* \| *exit* ]
54 | **ceph** **quorum_status**
56 | **ceph** **report** { *<tags>* [ *<tags>...* ] }
62 | **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
64 | **ceph** **tell** *<name (type.id)> <args> [<args>...]*
66 | **ceph** **version**
71 :program:`ceph` is a control utility used for manual deployment and maintenance of a Ceph cluster.
72 It provides a diverse set of commands for deploying monitors, OSDs, placement groups, and MDS daemons,
73 and for overall maintenance and administration of the cluster.
82 Manage authentication keys. It is used for adding, removing, exporting,
83 or updating authentication keys for a particular entity such as a monitor or
84 OSD. It uses some additional subcommands.
86 Subcommand ``add`` adds authentication info for a particular entity from an input
87 file, or generates a random key if no input is given, and applies any caps specified in the command.
91 ceph auth add <entity> {<caps> [<caps>...]}
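
For example, to grant a hypothetical client ``client.john`` read access to the monitors and read/write access to a pool named ``mypool``::

    ceph auth add client.john mon 'allow r' osd 'allow rw pool=mypool'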
93 Subcommand ``caps`` updates caps for **name** from caps specified in the command.
97 ceph auth caps <entity> <caps> [<caps>...]
99 Subcommand ``del`` deletes all keys and caps for ``name``.
103 ceph auth del <entity>
105 Subcommand ``export`` writes the keyring for the requested entity, or the master keyring if none is given.
110 ceph auth export {<entity>}
112 Subcommand ``get`` writes keyring file with requested key.
116 ceph auth get <entity>
118 Subcommand ``get-key`` displays requested key.
122 ceph auth get-key <entity>
124 Subcommand ``get-or-create`` adds authentication info for a particular entity from an input
125 file, or generates a random key if no input is given, and applies any caps specified in the command.
130 ceph auth get-or-create <entity> {<caps> [<caps>...]}
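
For example, a sketch with hypothetical names that fetches (or creates) a read-only key for ``client.backup``::

    ceph auth get-or-create client.backup mon 'allow r' osd 'allow r pool=mypool'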
132 Subcommand ``get-or-create-key`` gets or adds key for ``name`` from system/caps
133 pairs specified in the command. If key already exists, any given caps must match
134 the existing caps for that key.
138 ceph auth get-or-create-key <entity> {<caps> [<caps>...]}
140 Subcommand ``import`` reads keyring from input file.
146 Subcommand ``ls`` lists authentication state.
152 Subcommand ``print-key`` displays requested key.
156 ceph auth print-key <entity>
158 Subcommand ``print_key`` displays requested key.
162 ceph auth print_key <entity>
168 Causes compaction of monitor's leveldb storage.
178 Manage configuration keys. It uses some additional subcommands.
180 Subcommand ``del`` deletes configuration key.
184 ceph config-key del <key>
186 Subcommand ``exists`` checks for a configuration key's existence.
190 ceph config-key exists <key>
192 Subcommand ``get`` gets the configuration key.
196 ceph config-key get <key>
198 Subcommand ``list`` lists configuration keys.
204 Subcommand ``dump`` dumps configuration keys and values.
210 Subcommand ``set`` sets a configuration key to a value.
214 ceph config-key set <key> {<val>}
220 Submit admin-socket commands.
224 ceph daemon {daemon_name|socket_path} {command} ...
228 ceph daemon osd.0 help
234 Watch performance counters from a Ceph daemon.
238 ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
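
For example, assuming a local OSD admin socket, to sample the counters of ``osd.0`` every two seconds, ten times::

    ceph daemonperf osd.0 2 10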
244 Show cluster's free space status.
255 Show the releases and features of all daemons and clients connected to the
256 cluster, along with a count of each in every bucket, grouped by the
257 corresponding features/releases. Each release of Ceph supports a different set
258 of features, expressed by the features bitmask. New cluster features require
259 that clients support the feature, or else they are not allowed to connect to
260 the cluster. As new features or capabilities are enabled after an
261 upgrade, older clients are prevented from connecting.
270 Manage cephfs filesystems. It uses some additional subcommands.
272 Subcommand ``ls`` lists filesystems.
278 Subcommand ``new`` makes a new filesystem using the named pools <metadata> and <data>.
282 ceph fs new <fs_name> <metadata> <data>
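
For example, assuming two previously created pools named ``cephfs_metadata`` and ``cephfs_data``::

    ceph fs new cephfs cephfs_metadata cephfs_data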
284 Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map
288 ceph fs reset <fs_name> {--yes-i-really-mean-it}
290 Subcommand ``rm`` disables the named filesystem.
294 ceph fs rm <fs_name> {--yes-i-really-mean-it}
300 Show cluster's FSID/UUID.
310 Show cluster's health.
320 Show heap usage info (available only if compiled with tcmalloc)
324 ceph heap dump|start_profiler|stop_profiler|release|stats
330 Inject configuration arguments into monitor.
334 ceph injectargs <injected_args> [<injected_args>...]
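
For example, to temporarily raise messenger debugging on the monitor (the option and value here are illustrative)::

    ceph injectargs '--debug_ms 1'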
340 Log supplied text to the monitor log.
344 ceph log <logtext> [<logtext>...]
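
For example, to record a free-form note in the cluster log (the message text is arbitrary)::

    ceph log 'starting maintenance on rack 3'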
350 Manage metadata server configuration and administration. It uses some
351 additional subcommands.
353 Subcommand ``compat`` manages compatible features. It uses some additional subcommands.
356 Subcommand ``rm_compat`` removes compatible feature.
360 ceph mds compat rm_compat <int[0-]>
362 Subcommand ``rm_incompat`` removes incompatible feature.
366 ceph mds compat rm_incompat <int[0-]>
368 Subcommand ``show`` shows mds compatibility settings.
374 Subcommand ``deactivate`` stops mds.
378 ceph mds deactivate <who>
380 Subcommand ``fail`` forces an MDS to the failed state.
386 Subcommand ``rm`` removes inactive mds.
390 ceph mds rm <int[0-]> <name (type.id)>
392 Subcommand ``rmfailed`` removes failed mds.
396 ceph mds rmfailed <int[0-]>
398 Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.
402 ceph mds set_state <int[0-]> <int[0-20]>
404 Subcommand ``stat`` shows MDS status.
410 Subcommand ``tell`` sends command to particular mds.
414 ceph mds tell <who> <args> [<args>...]
419 Manage monitor configuration and administration. It uses some additional subcommands.
422 Subcommand ``add`` adds new monitor named <name> at <addr>.
426 ceph mon add <name> <IPaddr[:port]>
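
For example, to add a monitor named ``c`` at a hypothetical address::

    ceph mon add c 10.0.0.3:6789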
428 Subcommand ``dump`` dumps formatted monmap (optionally from epoch)
432 ceph mon dump {<int[0-]>}
434 Subcommand ``getmap`` gets monmap.
438 ceph mon getmap {<int[0-]>}
440 Subcommand ``remove`` removes monitor named <name>.
444 ceph mon remove <name>
446 Subcommand ``stat`` summarizes monitor status.
455 Reports status of monitors.
464 Ceph manager daemon configuration and management.
466 Subcommand ``dump`` dumps the latest MgrMap, which describes the active
467 and standby manager daemons.
473 Subcommand ``fail`` will mark a manager daemon as failed, removing it
474 from the manager map. If it is the active manager daemon, a standby replacement will be promoted if available.
481 Subcommand ``module ls`` will list currently enabled manager modules (plugins).
487 Subcommand ``module enable`` will enable a manager module. Available modules are included in MgrMap and visible via ``mgr dump``.
491 ceph mgr module enable <module>
493 Subcommand ``module disable`` will disable an active manager module.
497 ceph mgr module disable <module>
499 Subcommand ``metadata`` will report metadata about all manager daemons or, if the name is specified, a single manager daemon.
503 ceph mgr metadata [name]
505 Subcommand ``versions`` will report a count of running daemon versions.
511 Subcommand ``count-metadata`` will report a count of any daemon metadata field.
515 ceph mgr count-metadata <field>
521 Manage OSD configuration and administration. It uses some additional subcommands.
524 Subcommand ``blacklist`` manages blacklisted clients. It uses some additional subcommands.
527 Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire> seconds from now).
532 ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
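
For example, to blacklist a hypothetical client address for 600 seconds::

    ceph osd blacklist add 192.168.0.10:0/3710147553 600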
534 Subcommand ``ls`` shows blacklisted clients.
538 ceph osd blacklist ls
540 Subcommand ``rm`` removes <addr> from the blacklist.
544 ceph osd blacklist rm <EntityAddr>
546 Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers
552 Subcommand ``create`` creates a new osd (with optional UUID and ID).
554 This command is DEPRECATED as of the Luminous release, and will be removed in a subsequent release.
557 Subcommand ``new`` should be used instead.
561 ceph osd create {<uuid>} {<id>}
563 Subcommand ``new`` reuses a previously destroyed OSD *id*. The new OSD will
564 have the specified *uuid*, and the command expects a JSON file containing
565 the base64 cephx key for auth entity *client.osd.<id>*, as well as an optional
566 base64 cephx key for dm-crypt lockbox access and a dm-crypt key. Specifying
567 a dm-crypt key requires specifying the accompanying lockbox cephx key.
571 ceph osd new {<id>} {<uuid>} -i {<secrets.json>}
573 The secrets JSON file is expected to take one of the following two forms::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg=="
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>"
    }
588 Subcommand ``crush`` is used for CRUSH management. It uses some additional subcommands.
591 Subcommand ``add`` adds or updates crushmap position and weight for <name> with
592 <weight> and location <args>.
596 ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
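
For example, to add (or update) ``osd.5`` with weight 1.0 under a hypothetical host bucket ``node3``::

    ceph osd crush add osd.5 1.0 host=node3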
598 Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name> of type <type>.
603 ceph osd crush add-bucket <name> <type>
605 Subcommand ``create-or-move`` creates entry or moves existing entry for <name>
606 <weight> at/to location <args>.
610 ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
613 Subcommand ``dump`` dumps crush map.
619 Subcommand ``get-tunable`` gets the crush tunable straw_calc_version.
623 ceph osd crush get-tunable straw_calc_version
625 Subcommand ``link`` links existing entry for <name> under location <args>.
629 ceph osd crush link <name> <args> [<args>...]
631 Subcommand ``move`` moves existing entry for <name> to location <args>.
635 ceph osd crush move <name> <args> [<args>...]
637 Subcommand ``remove`` removes <name> from the crush map (everywhere, or just at <ancestor>).
642 ceph osd crush remove <name> {<ancestor>}
644 Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.
648 ceph osd crush rename-bucket <srcname> <dstname>
650 Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.
654 ceph osd crush reweight <name> <float[0.0-]>
656 Subcommand ``reweight-all`` recalculates the weights for the tree to
657 ensure they sum correctly.
661 ceph osd crush reweight-all
663 Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
664 to <weight> in crush map
668 ceph osd crush reweight-subtree <name> <weight>
670 Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at <ancestor>).
675 ceph osd crush rm <name> {<ancestor>}
677 Subcommand ``rule`` is used for creating crush rules. It uses some additional subcommands.
680 Subcommand ``create-erasure`` creates crush rule <name> for erasure coded pool
681 created with <profile> (default default).
685 ceph osd crush rule create-erasure <name> {<profile>}
687 Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
688 replicate across buckets of type <type>, using a choose mode of <firstn|indep>
689 (default firstn; indep best for erasure pools).
693 ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
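
For example, to create a hypothetical rule that starts at the ``default`` root and spreads replicas across hosts::

    ceph osd crush rule create-simple myrule default host firstn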
695 Subcommand ``dump`` dumps crush rule <name> (default all).
699 ceph osd crush rule dump {<name>}
701 Subcommand ``ls`` lists crush rules.
705 ceph osd crush rule ls
707 Subcommand ``rm`` removes crush rule <name>.
711 ceph osd crush rule rm <name>
713 Subcommand ``set``, used alone, sets the crush map from the input file.
719 Subcommand ``set`` with an osdname/osd.id updates the crushmap position and weight
720 for <name> to <weight> with location <args>.
724 ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
726 Subcommand ``set-tunable`` sets the crush tunable <tunable> to <value>. The only
727 tunable that can be set is straw_calc_version.
731 ceph osd crush set-tunable straw_calc_version <value>
733 Subcommand ``show-tunables`` shows current crush tunables.
737 ceph osd crush show-tunables
739 Subcommand ``tree`` shows the crush buckets and items in a tree view.
745 Subcommand ``tunables`` sets crush tunables to the values of <profile>.
749 ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default
751 Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just at <ancestor>).
756 ceph osd crush unlink <name> {<ancestor>}
758 Subcommand ``df`` shows OSD utilization
762 ceph osd df {plain|tree}
764 Subcommand ``deep-scrub`` initiates deep scrub on specified osd.
768 ceph osd deep-scrub <who>
770 Subcommand ``down`` sets osd(s) <id> [<id>...] down.
774 ceph osd down <ids> [<ids>...]
776 Subcommand ``dump`` prints summary of OSD map.
780 ceph osd dump {<int[0-]>}
782 Subcommand ``erasure-code-profile`` is used for managing the erasure code
783 profiles. It uses some additional subcommands.
785 Subcommand ``get`` gets erasure code profile <name>.
789 ceph osd erasure-code-profile get <name>
791 Subcommand ``ls`` lists all erasure code profiles.
795 ceph osd erasure-code-profile ls
797 Subcommand ``rm`` removes erasure code profile <name>.
801 ceph osd erasure-code-profile rm <name>
803 Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
804 pairs. Add a --force at the end to override an existing profile (IT IS RISKY).
808 ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
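
For example, a hypothetical profile with 4 data chunks and 2 coding chunks, separated across hosts (key names assume the default jerasure plugin on a Luminous-era cluster)::

    ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host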
810 Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.
814 ceph osd find <int[0-]>
816 Subcommand ``getcrushmap`` gets CRUSH map.
820 ceph osd getcrushmap {<int[0-]>}
822 Subcommand ``getmap`` gets OSD map.
826 ceph osd getmap {<int[0-]>}
828 Subcommand ``getmaxosd`` shows largest OSD id.
834 Subcommand ``in`` sets osd(s) <id> [<id>...] in.
838 ceph osd in <ids> [<ids>...]
840 Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
841 MORE REPLICAS EXIST, BE CAREFUL.
845 ceph osd lost <int[0-]> {--yes-i-really-mean-it}
847 Subcommand ``ls`` shows all OSD ids.
851 ceph osd ls {<int[0-]>}
853 Subcommand ``lspools`` lists pools.
857 ceph osd lspools {<int>}
859 Subcommand ``map`` finds pg for <object> in <pool>.
863 ceph osd map <poolname> <objectname>
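
For example, to see which placement group and OSDs a hypothetical object maps to::

    ceph osd map mypool myobject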
865 Subcommand ``metadata`` fetches metadata for osd <id>.
869 ceph osd metadata {int[0-]} (default all)
871 Subcommand ``out`` sets osd(s) <id> [<id>...] out.
875 ceph osd out <ids> [<ids>...]
877 Subcommand ``pause`` pauses osd.
883 Subcommand ``perf`` prints dump of OSD perf summary stats.
889 Subcommand ``pg-temp`` sets the pg_temp mapping pgid:[<id> [<id>...]] (developers only).
894 ceph osd pg-temp <pgid> {<id> [<id>...]}
896 Subcommand ``force-create-pg`` forces creation of pg <pgid>.
900 ceph osd force-create-pg <pgid>
903 Subcommand ``pool`` is used for managing data pools. It uses some additional subcommands.
906 Subcommand ``create`` creates pool.
910 ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
911 {<erasure_code_profile>} {<ruleset>} {<int>}
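
For example, to create a hypothetical replicated pool with 128 placement groups::

    ceph osd pool create mypool 128 128 replicated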
913 Subcommand ``delete`` deletes pool.
917 ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}
919 Subcommand ``get`` gets pool parameter <var>.
923 ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
924 pgp_num|crush_ruleset|auid|write_fadvise_dontneed
926 Only for tiered pools::
928 ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
929 target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
930 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
931 min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n
933 Only for erasure coded pools::
935 ceph osd pool get <poolname> erasure_code_profile
937 Use ``all`` to get all pool parameters that apply to the pool's type::
939 ceph osd pool get <poolname> all
941 Subcommand ``get-quota`` obtains object or byte limits for pool.
945 ceph osd pool get-quota <poolname>
947 Subcommand ``ls`` lists pools.
951 ceph osd pool ls {detail}
953 Subcommand ``mksnap`` makes snapshot <snap> in <pool>.
957 ceph osd pool mksnap <poolname> <snap>
959 Subcommand ``rename`` renames <srcpool> to <destpool>.
963 ceph osd pool rename <poolname> <poolname>
965 Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.
969 ceph osd pool rmsnap <poolname> <snap>
971 Subcommand ``set`` sets pool parameter <var> to <val>.
975 ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
976 pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|
977 hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
978 target_max_bytes|target_max_objects|cache_target_dirty_ratio|
979 cache_target_dirty_high_ratio|
980 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
981 min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
982 hit_set_search_last_n
983 <val> {--yes-i-really-mean-it}
985 Subcommand ``set-quota`` sets object or byte limit on pool.
989 ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
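
For example, to cap a hypothetical pool at 10000 objects::

    ceph osd pool set-quota mypool max_objects 10000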
991 Subcommand ``stats`` obtains stats from all pools, or from the specified pool.
995 ceph osd pool stats {<name>}
997 Subcommand ``primary-affinity`` adjusts osd primary-affinity from 0.0 <= <weight> <= 1.0.
1002 ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
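
For example, to make ``osd.2`` half as likely to be chosen as a primary::

    ceph osd primary-affinity osd.2 0.5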
1004 Subcommand ``primary-temp`` sets the primary_temp mapping pgid:<id>|-1 (developers only).
1009 ceph osd primary-temp <pgid> <id>
1011 Subcommand ``repair`` initiates repair on a specified osd.
1015 ceph osd repair <who>
1017 Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.
1021 ceph osd reweight <int[0-]> <float[0.0-1.0]>
1023 Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
1024 [overload-percentage-for-consideration, default 120].
1028 ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
1031 Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
1032 [overload-percentage-for-consideration, default 120].
1036 ceph osd reweight-by-utilization {<int[100-]>}
1039 Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.
1044 ceph osd rm <ids> [<ids>...]
1046 Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
1047 entity's keys and all of its dm-crypt and daemon-private config key entries.
1050 This command will not remove the OSD from crush, nor will it remove the
1051 OSD from the OSD map. Instead, once the command successfully completes,
1052 the OSD will be shown as *destroyed* in the OSD map.
1054 In order to mark an OSD as destroyed, the OSD must first be marked as **lost**.
1059 ceph osd destroy <id> {--yes-i-really-mean-it}
1062 Subcommand ``purge`` performs a combination of ``osd destroy``,
1063 ``osd rm`` and ``osd crush remove``.
1067 ceph osd purge <id> {--yes-i-really-mean-it}
1069 Subcommand ``scrub`` initiates scrub on specified osd.
1073 ceph osd scrub <who>
1075 Subcommand ``set`` sets <key>.
1079 ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
1080 norebalance|norecover|noscrub|nodeep-scrub|notieragent
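
For example, to prevent OSDs from being marked out during planned maintenance::

    ceph osd set noout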
1082 Subcommand ``setcrushmap`` sets crush map from input file.
1086 ceph osd setcrushmap
1088 Subcommand ``setmaxosd`` sets new maximum osd value.
1092 ceph osd setmaxosd <int[0-]>
1094 Subcommand ``set-require-min-compat-client`` enforces backward compatibility of the
1095 cluster with the specified client version. This subcommand prevents you from
1096 making any changes (e.g., crush tunables, or using new features) that
1097 would violate the current setting. Please note that this subcommand will fail if
1098 any connected daemon or client is not compatible with the features offered by
1099 the given <version>. To see the features and releases of all clients connected
1100 to the cluster, please see `ceph features`_.
1104 ceph osd set-require-min-compat-client <version>
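
For example, to require that clients support at least the jewel feature set (the release name here is illustrative)::

    ceph osd set-require-min-compat-client jewel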
1106 Subcommand ``stat`` prints summary of OSD map.
1112 Subcommand ``tier`` is used for managing tiers. It uses some additional subcommands.
1115 Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool> (the first one).
1120 ceph osd tier add <poolname> <poolname> {--force-nonempty}
1122 Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
1123 to existing pool <pool> (the first one).
1127 ceph osd tier add-cache <poolname> <poolname> <int[0-]>
1129 Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.
1133 ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
1134 readforward|readproxy
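
For example, with a hypothetical cache pool named ``hot-pool``::

    ceph osd tier cache-mode hot-pool writeback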
1136 Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
1137 <pool> (the first one).
1141 ceph osd tier remove <poolname> <poolname>
1143 Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.
1147 ceph osd tier remove-overlay <poolname>
1149 Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be <overlaypool>.
1154 ceph osd tier set-overlay <poolname> <poolname>
1156 Subcommand ``tree`` prints OSD tree.
1160 ceph osd tree {<int[0-]>}
1162 Subcommand ``unpause`` unpauses osd.
1168 Subcommand ``unset`` unsets <key>.
1172 ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
1173 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1179 It is used for managing the placement groups in OSDs. It uses some
1180 additional subcommands.
1182 Subcommand ``debug`` shows debug info about pgs.
1186 ceph pg debug unfound_objects_exist|degraded_pgs_exist
1188 Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.
1192 ceph pg deep-scrub <pgid>
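
For example, with a hypothetical placement group id::

    ceph pg deep-scrub 2.1f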
1194 Subcommand ``dump`` shows human-readable versions of the pg map (only 'all' is valid with plain).
1199 ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1201 Subcommand ``dump_json`` shows human-readable version of pg map in json only.
1205 ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1207 Subcommand ``dump_pools_json`` shows pg pools info in json only.
1211 ceph pg dump_pools_json
1213 Subcommand ``dump_stuck`` shows information about stuck pgs.
1217 ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
1220 Subcommand ``getmap`` gets binary pg map to -o/stdout.
1226 Subcommand ``ls`` lists pgs with a specific pool, osd, or state.
1230 ceph pg ls {<int>} {active|clean|down|replay|splitting|
1231 scrubbing|scrubq|degraded|inconsistent|peering|repair|
1232 recovery|backfill_wait|incomplete|stale|remapped|
1233 deep_scrub|backfill|backfill_toofull|recovery_wait|
1234 undersized [active|clean|down|replay|splitting|
1235 scrubbing|scrubq|degraded|inconsistent|peering|repair|
1236 recovery|backfill_wait|incomplete|stale|remapped|
1237 deep_scrub|backfill|backfill_toofull|recovery_wait|undersized...]}
1240 Subcommand ``ls-by-osd`` lists pg on osd [osd]
1244 ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
1245 {active|clean|down|replay|splitting|
1246 scrubbing|scrubq|degraded|inconsistent|peering|repair|
1247 recovery|backfill_wait|incomplete|stale|remapped|
1248 deep_scrub|backfill|backfill_toofull|recovery_wait|
1249 undersized [active|clean|down|replay|splitting|
1250 scrubbing|scrubq|degraded|inconsistent|peering|repair|
1251 recovery|backfill_wait|incomplete|stale|remapped|
1252 deep_scrub|backfill|backfill_toofull|recovery_wait|undersized...]}
1255 Subcommand ``ls-by-pool`` lists pg with pool = [poolname]
1259 ceph pg ls-by-pool <poolstr> {<int>} {active|
1260 clean|down|replay|splitting|
1261 scrubbing|scrubq|degraded|inconsistent|peering|repair|
1262 recovery|backfill_wait|incomplete|stale|remapped|
1263 deep_scrub|backfill|backfill_toofull|recovery_wait|
1264 undersized [active|clean|down|replay|splitting|
1265 scrubbing|scrubq|degraded|inconsistent|peering|repair|
1266 recovery|backfill_wait|incomplete|stale|remapped|
1267 deep_scrub|backfill|backfill_toofull|recovery_wait|undersized...]}
1270 Subcommand ``ls-by-primary`` lists pg with primary = [osd]
1274 ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
1275 {active|clean|down|replay|splitting|
1276 scrubbing|scrubq|degraded|inconsistent|peering|repair|
1277 recovery|backfill_wait|incomplete|stale|remapped|
1278 deep_scrub|backfill|backfill_toofull|recovery_wait|
1279 undersized [active|clean|down|replay|splitting|
1280 scrubbing|scrubq|degraded|inconsistent|peering|repair|
1281 recovery|backfill_wait|incomplete|stale|remapped|
1282 deep_scrub|backfill|backfill_toofull|recovery_wait|undersized...]}
1285 Subcommand ``map`` shows mapping of pg to osds.
1291 Subcommand ``repair`` starts repair on <pgid>.
1295 ceph pg repair <pgid>
1297 Subcommand ``scrub`` starts scrub on <pgid>.
1301 ceph pg scrub <pgid>
1303 Subcommand ``set_full_ratio`` sets ratio at which pgs are considered full.
1307 ceph pg set_full_ratio <float[0.0-1.0]>
1309 Subcommand ``set_backfillfull_ratio`` sets ratio at which pgs are considered too full to backfill.
1313 ceph pg set_backfillfull_ratio <float[0.0-1.0]>
1315 Subcommand ``set_nearfull_ratio`` sets the ratio at which pgs are considered nearly full.
1320 ceph pg set_nearfull_ratio <float[0.0-1.0]>
1322 Subcommand ``stat`` shows placement group status.
1332 Cause MON to enter or exit quorum.
1336 ceph quorum enter|exit
1338 Note: this only works on the MON to which the ``ceph`` command is connected.
1339 If you want a specific MON to enter or exit quorum, use this syntax::
1341 ceph tell mon.<id> quorum enter|exit
1346 Reports status of monitor quorum.
1356 Reports the full status of the cluster, with optional title tag strings.
1360 ceph report {<tags> [<tags>...]}
1366 Scrubs the monitor stores.
1376 Shows cluster status.
1386 Forces a sync of and clears the monitor store.
1390 ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
1396 Sends a command to a specific daemon.
1400 ceph tell <name (type.id)> <args> [<args>...]
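
For example, to ask a single OSD for its version (the target id is illustrative)::

    ceph tell osd.0 version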
1403 List all available commands.
1407 ceph tell <name (type.id)> help
1412 Show mon daemon version
1421 .. option:: -i infile
1423 will specify an input file to be passed along as a payload with the
1424 command to the monitor cluster. This is only used for specific monitor commands.
1427 .. option:: -o outfile
1429 will write any payload returned by the monitor cluster with its
1430 reply to outfile. Only specific monitor commands (e.g. osd getmap) return a payload.
1433 .. option:: -c ceph.conf, --conf=ceph.conf
1435 Use ceph.conf configuration file instead of the default
1436 ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.
1438 .. option:: --id CLIENT_ID, --user CLIENT_ID
1440 Client id for authentication.
1442 .. option:: --name CLIENT_NAME, -n CLIENT_NAME
1444 Client name for authentication.
1446 .. option:: --cluster CLUSTER
1448 Name of the Ceph cluster.
1450 .. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
1452 Submit admin-socket commands via admin sockets in /var/run/ceph.
1454 .. option:: --admin-socket ADMIN_SOCKET_NOPE
1456 You probably mean --admin-daemon
1458 .. option:: -s, --status
1460 Show cluster status.
1462 .. option:: -w, --watch
1464 Watch live cluster changes.
1466 .. option:: --watch-debug

Watch debug events.
1470 .. option:: --watch-info

Watch info events.
1474 .. option:: --watch-sec
1476 Watch security events.
1478 .. option:: --watch-warn
1480 Watch warning events.
1482 .. option:: --watch-error

Watch error events.
1486 .. option:: --version, -v

Display version.
1490 .. option:: --verbose

Make verbose.
1494 .. option:: --concise

Make less verbose.
1498 .. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

Format of output.
1502 .. option:: --connect-timeout CLUSTER_TIMEOUT
1504 Set a timeout for connecting to the cluster.
1506 .. option:: --no-increasing
1508 ``--no-increasing`` is off by default, so increasing OSD weights is allowed when
1509 using the ``reweight-by-utilization`` or ``test-reweight-by-utilization`` commands.
1510 If this option is used with these commands, it prevents OSD weights from being
1511 increased, even if the OSD is underutilized.
1517 :program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
1518 the Ceph documentation at http://ceph.com/docs for more information.
1524 :doc:`ceph-mon <ceph-mon>`\(8),
1525 :doc:`ceph-osd <ceph-osd>`\(8),
1526 :doc:`ceph-mds <ceph-mds>`\(8)