==================================
 ceph -- ceph administration tool
==================================

Synopsis
========
| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *set* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **scrub**

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**
Description
===========

:program:`ceph` is a control utility which is used for manual deployment and
maintenance of a Ceph cluster. It provides a diverse set of commands that
allows deployment of monitors, OSDs, placement groups, and MDS daemons, as
well as overall maintenance and administration of the cluster.

Commands
========
auth
----

Manage authentication keys. It is used for adding, removing, exporting,
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key if no input is given, and adds any caps
specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for ``name`` from caps specified in the
command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the
master keyring if none is given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes a keyring file with the requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, and adds
any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}
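For example, to create a hypothetical client key ``client.foo`` with read
access to the monitors and read/write access to a pool named ``mypool``
(the entity and pool names are placeholders)::

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=mypool'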
Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from
system/caps pairs specified in the command. If the key already exists, any
given caps must match the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads a keyring from the input file.

Usage::

    ceph auth import

Subcommand ``ls`` lists authentication state.

Usage::

    ceph auth ls

Subcommand ``print-key`` displays the requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key.

Usage::

    ceph auth print_key <entity>
compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact
config-key
----------

Manage configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes a configuration key.

Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``list`` lists configuration keys.

Usage::

    ceph config-key list

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``set`` sets a configuration key and its value.

Usage::

    ceph config-key set <key> {<val>}
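For instance, a minimal round trip through the config-key store (the key and
value names are placeholders)::

    ceph config-key set mykey myvalue
    ceph config-key exists mykey
    ceph config-key get mykey
    ceph config-key del mykey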
daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help
daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
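For example, to sample the counters of a hypothetical daemon ``osd.0`` every
2 seconds, 10 times::

    ceph daemonperf osd.0 2 10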
df
--

Show cluster's free space status.

Usage::

    ceph df {detail}
features
--------

Show the releases and features of all daemons and clients connected to the
cluster, along with a count of each in buckets grouped by the corresponding
features/releases. Each release of Ceph supports a different set of features,
expressed by the features bitmask. New cluster features require that clients
support the feature, or else they are not allowed to connect to the cluster.
As new features or capabilities are enabled after an upgrade, older clients
are prevented from connecting.

Usage::

    ceph features
fs
--

Manage cephfs filesystems. It uses some additional subcommands.

Subcommand ``ls`` lists filesystems.

Usage::

    ceph fs ls

Subcommand ``new`` makes a new filesystem using the named pools <metadata>
and <data>.

Usage::

    ceph fs new <fs_name> <metadata> <data>
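For example, assuming metadata and data pools named ``cephfs_metadata`` and
``cephfs_data`` have already been created::

    ceph fs new cephfs cephfs_metadata cephfs_data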
Subcommand ``reset`` is used for disaster recovery only: reset to a
single-MDS map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named filesystem.

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}
fsid
----

Show cluster's FSID/UUID.

Usage::

    ceph fsid
health
------

Show cluster's health.

Usage::

    ceph health {detail}
heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats
injectargs
----------

Inject configuration arguments into the monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]
log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]
mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``deactivate`` stops an mds.

Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces an mds to status fail.

Usage::

    ceph mds fail <who>

Subcommand ``rm`` removes an inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets the mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``tell`` sends a command to a particular mds.

Usage::

    ceph mds tell <who> <args> [<args>...]
mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds a new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>
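For example, to add a hypothetical monitor ``c`` at address 192.168.0.12::

    ceph mon add c 192.168.0.12:6789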
Subcommand ``dump`` dumps the formatted monmap (optionally from epoch).

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets the monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes the monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat
mon_status
----------

Reports status of monitors.

Usage::

    ceph mon_status
mgr
---

Ceph manager daemon configuration and management.

Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby manager daemons.

Usage::

    ceph mgr dump

Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon a standby
will take its place.

Usage::

    ceph mgr fail <name>

Subcommand ``module ls`` will list currently enabled manager modules
(plugins).

Usage::

    ceph mgr module ls

Subcommand ``module enable`` will enable a manager module. Available modules
are included in MgrMap and visible via ``mgr dump``.

Usage::

    ceph mgr module enable <module>

Subcommand ``module disable`` will disable an active manager module.

Usage::

    ceph mgr module disable <module>

Subcommand ``metadata`` will report metadata about all manager daemons or, if
the name is specified, a single manager daemon.

Usage::

    ceph mgr metadata [name]

Subcommand ``versions`` will report a count of running daemon versions.

Usage::

    ceph mgr versions

Subcommand ``count-metadata`` will report a count of any daemon metadata
field.

Usage::

    ceph mgr count-metadata <field>
osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire>
seconds from now).

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
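For example, to blacklist a hypothetical client address for one hour::

    ceph osd blacklist add 192.168.0.100 3600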
Subcommand ``ls`` shows blacklisted clients.

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from the blacklist.

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their
peers.

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates a new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a subsequent release. Subcommand ``new`` should be used instead.

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
and the command expects a JSON file containing the base64 cephx key for auth
entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires
specifying the accompanying lockbox cephx key.

Usage::

    ceph osd new {<uuid>} {<id>} -i {<params.json>}

The parameters JSON file is optional but, if provided, is expected to follow
this format::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "crush_device_class": "myclass"
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>",
        "crush_device_class": "myclass"
    }

Or::

    {
        "crush_device_class": "myclass"
    }

The "crush_device_class" property is optional. If specified, it will set the
initial CRUSH device class for the new OSD.
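As a hypothetical example (the UUID, id, and file name are placeholders),
recreating a destroyed OSD with id 3 might look like::

    ceph osd new 11111111-2222-3333-4444-555555555555 3 -i params.json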
Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for
<name> with <weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
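For example, to add a hypothetical ``osd.5`` with weight 1.0 under host
``node1`` in the ``default`` root::

    ceph osd crush add osd.5 1.0 root=default host=node1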
Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket
<name> of type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates an entry or moves the existing entry
for <name> <weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]
Subcommand ``dump`` dumps the crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable ``straw_calc_version``.

Usage::

    ceph osd crush get-tunable straw_calc_version
Subcommand ``link`` links the existing entry for <name> under location
<args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves the existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from the crush map (everywhere, or just
at <ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}
Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map.

Usage::

    ceph osd crush reweight-subtree <name> <weight>
Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates a crush rule <name> for an erasure
coded pool created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates a crush rule <name> to start from
<root>, replicate across buckets of type <type>, using a choose mode of
<firstn|indep> (default firstn; indep is best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>
Subcommand ``set``, used alone, sets the crush map from the input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with an osdname/osd.id updates the crushmap position and
weight for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunables values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just
at <ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}
Subcommand ``df`` shows OSD utilization.

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates deep scrub on specified osd.

Usage::

    ceph osd deep-scrub <who>
Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints summary of OSD map.

Usage::

    ceph osd dump {<int[0-]>}
Subcommand ``erasure-code-profile`` is used for managing the erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add ``--force`` at the end to override an existing profile (IT IS RISKY).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets the CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets the OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows the largest OSD id.

Usage::

    ceph osd getmaxosd
Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}
Subcommand ``map`` finds the pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>

Subcommand ``metadata`` fetches metadata for osd <id> (default all).

Usage::

    ceph osd metadata {int[0-]}

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``ok-to-stop`` checks whether the listed OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced as some PGs may end up in a degraded (but active)
state. It will return a success code if it is okay to stop the
OSD(s), or an error code and informative message if it is not or if no
conclusion can be drawn at the current time.

Usage::

    ceph osd ok-to-stop <id> [<ids>...]
Subcommand ``pause`` pauses the osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints a dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``force-create-pg`` forces creation of pg <pgid>.

Usage::

    ceph osd force-create-pg <pgid>
Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates a pool.

Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<rule>} {<int>}
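For example, to create a replicated pool named ``mypool`` with 64 placement
groups (the pool name and pg counts are placeholders; choose pg_num to suit
your cluster)::

    ceph osd pool create mypool 64 64 replicated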
Subcommand ``delete`` deletes a pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_rule|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all
Subcommand ``get-quota`` obtains object or byte limits for a pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools.

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>
Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets object or byte limit on pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
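For example, to cap a hypothetical pool ``mypool`` at 10000 objects::

    ceph osd pool set-quota mypool max_objects 10000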
Subcommand ``stats`` obtains stats from all pools, or from the specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``primary-affinity`` adjusts the osd primary-affinity from
0.0 <= <weight> <= 1.0.

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights an osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname>...]}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-utilization {<int[100-]>}
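For example, to reweight only OSDs that are more than 10% above average
utilization::

    ceph osd reweight-by-utilization 110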
Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will show marked as *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}

Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
destroy an OSD without reducing overall data redundancy or durability.
It will return a success code if it is definitely safe, or an error
code and informative message if it is not or if no conclusion can be
drawn at the current time.

Usage::

    ceph osd safe-to-destroy <id> [<ids>...]

Subcommand ``scrub`` initiates scrub on specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets the cluster-wide flag <key>.

Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent
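For example, a common maintenance pattern is to set ``noout`` before taking
OSDs down, so that data is not rebalanced away, and to unset it afterwards::

    ceph osd set noout
    # ... perform maintenance, then:
    ceph osd unset noout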
Subcommand ``setcrushmap`` sets the crush map from the input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets a new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``set-require-min-compat-client`` constrains the cluster to be
backward compatible with the specified client version. This subcommand
prevents you from making any changes (e.g., crush tunables, or using new
features) that would violate the current setting. Please note that this
subcommand will fail if any connected daemon or client is not compatible with
the features offered by the given <version>. To see the features and releases
of all clients connected to the cluster, please see `ceph features`_.

Usage::

    ceph osd set-require-min-compat-client <version>
Subcommand ``stat`` prints a summary of the OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool
<pool> (the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size
<size> to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
    readforward|readproxy

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base
pool <pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>
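For example, attaching a hypothetical cache pool ``cachepool`` to a base pool
``basepool`` in writeback mode typically combines these subcommands::

    ceph osd tier add basepool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay basepool cachepool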
Subcommand ``tree`` prints the OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses the osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets the cluster-wide flag <key>.

Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent
pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of the pg map (only 'all'
is valid with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_json`` shows a human-readable version of the pg map in json
only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
    {<int>}
Subcommand ``getmap`` gets binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pgs with a specific pool, osd, or state.

Usage::

    ceph pg ls {<int>} {active|clean|down|replay|splitting|
    scrubbing|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized]}

Subcommand ``ls-by-osd`` lists pgs on osd [osd].

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized]}

Subcommand ``ls-by-pool`` lists pgs with pool = [poolname].

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {active|
    clean|down|replay|splitting|
    scrubbing|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized]}

Subcommand ``ls-by-primary`` lists pgs with primary = [osd].

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized]}
Subcommand ``map`` shows mapping of pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets the ratio at which pgs are considered full.

Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets the ratio at which pgs are
considered too full to backfill.

Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets the ratio at which pgs are considered
nearly full.

Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>
Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat
quorum
------

Causes a MON to enter or exit quorum.

Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit
quorum_status
-------------

Reports status of monitor quorum.

Usage::

    ceph quorum_status
report
------

Reports full status of the cluster, with optional title tag strings.

Usage::

    ceph report {<tags> [<tags>...]}
scrub
-----

Scrubs the monitor stores.

Usage::

    ceph scrub
status
------

Shows cluster status.

Usage::

    ceph status
sync force
----------

Forces a sync of and clears the monitor store.

Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <args> [<args>...]

List all available commands.

Usage::

    ceph tell <name (type.id)> help
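For example, to change the debug level of a hypothetical ``osd.0`` at
runtime::

    ceph tell osd.0 injectargs --debug-osd 20 --debug-ms 1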
version
-------

Show mon daemon version.

Usage::

    ceph version
Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: --setuser user

   will apply the appropriate user ownership to the file specified by
   the option '-o'.

.. option:: --setgroup group

   will apply the appropriate group ownership to the file specified by
   the option '-o'.
.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes.
.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.
.. option:: --no-increasing

   ``--no-increasing`` is off by default, so increasing osd weights is
   allowed with the ``reweight-by-utilization`` and
   ``test-reweight-by-utilization`` commands. When this option is used with
   these commands, osd weights will never be increased, even if an osd is
   underutilized.
Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the Ceph documentation at
http://ceph.com/docs for more information.
See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)