3 ==================================
4 ceph -- ceph administration tool
5 ==================================
12 | **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...
14 | **ceph** **compact**
16 | **ceph** **config** [ *dump* | *ls* | *help* | *get* | *show* | *show-with-defaults* | *set* | *rm* | *log* | *reset* | *assimilate-conf* | *generate-minimal-conf* ] ...
18 | **ceph** **config-key** [ *rm* | *exists* | *get* | *ls* | *dump* | *set* ] ...
20 | **ceph** **daemon** *<name>* \| *<path>* *<command>* ...
22 | **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]
24 | **ceph** **df** *{detail}*
26 | **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...
30 | **ceph** **health** *{detail}*
32 | **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]
34 | **ceph** **log** *<logtext>* [ *<logtext>*... ]
36 | **ceph** **mds** [ *compat* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *repaired* ] ...
38 | **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...
40 | **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *ls* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...
42 | **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...
44 | **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...
46 | **ceph** **osd** **pool** **application** [ *disable* \| *enable* \| *get* \| *rm* \| *set* ] ...
48 | **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...
50 | **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *stat* ] ...
52 | **ceph** **quorum_status**
54 | **ceph** **report** { *<tags>* [ *<tags>...* ] }
58 | **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
60 | **ceph** **tell** *<name (type.id)> <command> [options...]*
62 | **ceph** **version**
67 :program:`ceph` is a control utility which is used for manual deployment and maintenance
68 of a Ceph cluster. It provides a diverse set of commands for deploying
69 monitors, OSDs, placement groups and MDSs, as well as overall maintenance and administration
78 Manage authentication keys. It is used for adding, removing, exporting
79 or updating of authentication keys for a particular entity such as a monitor or
80 OSD. It uses some additional subcommands.
82 Subcommand ``add`` adds authentication info for a particular entity from an input
83 file, or generates a random key if no input is given, along with any caps specified in the command.
87 ceph auth add <entity> {<caps> [<caps>...]}
89 Subcommand ``caps`` updates the caps for **name** to those specified in the command.
93 ceph auth caps <entity> <caps> [<caps>...]
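For example, one might grant a client read access to the monitors and read/write access to a single pool (the entity and pool names here are illustrative)::

    ceph auth caps client.backup mon 'allow r' osd 'allow rw pool=backups'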
95 Subcommand ``del`` deletes all caps for ``name``.
99 ceph auth del <entity>
101 Subcommand ``export`` writes keyring for requested entity, or master keyring if
106 ceph auth export {<entity>}
108 Subcommand ``get`` writes keyring file with requested key.
112 ceph auth get <entity>
114 Subcommand ``get-key`` displays requested key.
118 ceph auth get-key <entity>
120 Subcommand ``get-or-create`` adds authentication info for a particular entity
121 from an input file, or generates a random key if no input is given, along with any caps specified in the
126 ceph auth get-or-create <entity> {<caps> [<caps>...]}
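For instance, a hypothetical sketch that creates (or fetches) a client key with RBD profile caps and writes the keyring with ``-o`` (the entity, pool, and path are illustrative)::

    ceph auth get-or-create client.rbd-user mon 'profile rbd' osd 'profile rbd pool=rbd' -o /etc/ceph/ceph.client.rbd-user.keyring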
128 Subcommand ``get-or-create-key`` gets or adds key for ``name`` from system/caps
129 pairs specified in the command. If key already exists, any given caps must match
130 the existing caps for that key.
134 ceph auth get-or-create-key <entity> {<caps> [<caps>...]}
136 Subcommand ``import`` reads keyring from input file.
142 Subcommand ``ls`` lists authentication state.
148 Subcommand ``print-key`` displays requested key.
152 ceph auth print-key <entity>
154 Subcommand ``print_key`` displays requested key.
158 ceph auth print_key <entity>
164 Causes compaction of the monitor's leveldb storage.
174 Configure the cluster. By default, Ceph daemons and clients retrieve their
175 configuration options from the monitor when they start, and are updated if any of
176 the tracked options is changed at run time. It uses the following additional
179 Subcommand ``dump`` to dump all options for the cluster
185 Subcommand ``ls`` to list all option names for the cluster
191 Subcommand ``help`` to describe the specified configuration option
195 ceph config help <option>
197 Subcommand ``get`` to dump the option(s) for the specified entity.
201 ceph config get <who> {<option>}
203 Subcommand ``show`` to display the running configuration of the specified
204 entity. Note that, unlike ``get``, which shows only the options managed
205 by the monitor, ``show`` displays all configuration values actively in use.
206 These options are pulled from several sources: the compiled-in
207 default values, the monitor's configuration database, the ``ceph.conf`` file on
208 the host, and any runtime overrides. As a result, the
209 configuration options in the output of ``show`` can differ
210 from those in the output of ``get``.
214 ceph config show {<who>}
216 Subcommand ``show-with-defaults`` to display the running configuration along with the compiled-in defaults of the specified entity
220 ceph config show-with-defaults <who>
222 Subcommand ``set`` to set an option for one or more specified entities
226 ceph config set <who> <option> <value> {--force}
228 Subcommand ``rm`` to clear an option for one or more entities
232 ceph config rm <who> <option>
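As an illustration (the option shown is just an example), one could raise ``osd_max_backfills`` for all OSDs, confirm it, and later clear the override::

    ceph config set osd osd_max_backfills 2
    ceph config get osd osd_max_backfills
    ceph config rm osd osd_max_backfills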
234 Subcommand ``log`` to show recent history of config changes. If the ``count`` option
235 is omitted it defaults to 10.
239 ceph config log {<count>}
241 Subcommand ``reset`` to revert configuration to the specified historical version
245 ceph config reset <version>
248 Subcommand ``assimilate-conf`` to assimilate options from stdin, and return a
249 new, minimal conf file
253 ceph config assimilate-conf -i <input-config-path> > <output-config-path>
254 ceph config assimilate-conf < <input-config-path>
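For example, a hedged sketch that imports the options from an existing ``ceph.conf`` into the monitors' configuration database and captures the remaining minimal file (the paths are illustrative)::

    ceph config assimilate-conf -i /etc/ceph/ceph.conf > /etc/ceph/ceph.conf.minimal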
256 Subcommand ``generate-minimal-conf`` to generate a minimal ``ceph.conf`` file,
257 which can be used for bootstrapping a daemon or a client.
261 ceph config generate-minimal-conf > <minimal-config-path>
267 Manage configuration keys. ``config-key`` is a general purpose key/value service
268 offered by the monitors. This service is mainly used by Ceph tools and daemons
269 to persist various settings; among other things, ceph-mgr modules use it for
270 storing their options. It uses some additional subcommands.
272 Subcommand ``rm`` deletes configuration key.
276 ceph config-key rm <key>
278 Subcommand ``exists`` checks for a configuration key's existence.
282 ceph config-key exists <key>
284 Subcommand ``get`` gets the configuration key.
288 ceph config-key get <key>
290 Subcommand ``ls`` lists configuration keys.
296 Subcommand ``dump`` dumps configuration keys and values.
302 Subcommand ``set`` puts configuration key and value.
306 ceph config-key set <key> {<val>}
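For example, storing and retrieving an arbitrary value (the key name is illustrative)::

    ceph config-key set example/setting somevalue
    ceph config-key get example/setting
    ceph config-key rm example/setting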
312 Submit admin-socket commands.
316 ceph daemon {daemon_name|socket_path} {command} ...
320 ceph daemon osd.0 help
326 Watch performance counters from a Ceph daemon.
330 ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
336 Show cluster's free space status.
347 Show the releases and features of all daemons and clients connected
348 to the cluster, along with the number of each in every bucket, grouped by the
349 corresponding features/releases. Each release of Ceph supports a different set
350 of features, expressed by the features bitmask. New cluster features require
351 that clients support the feature, or else they are not allowed to connect to
352 the cluster. As new features or capabilities are enabled after an
353 upgrade, older clients are prevented from connecting.
362 Manage CephFS file systems. It uses some additional subcommands.
364 Subcommand ``ls`` to list file systems
370 Subcommand ``new`` to make a new file system using named pools <metadata> and <data>
374 ceph fs new <fs_name> <metadata> <data>
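For example, a typical sketch that creates the two pools and then the file system (the pool names and PG counts are illustrative)::

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 64
    ceph fs new cephfs cephfs_metadata cephfs_data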
376 Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map
380 ceph fs reset <fs_name> {--yes-i-really-mean-it}
382 Subcommand ``rm`` to disable the named file system
386 ceph fs rm <fs_name> {--yes-i-really-mean-it}
392 Show cluster's FSID/UUID.
402 Show cluster's health.
412 Show heap usage info (available only if compiled with tcmalloc)
416 ceph tell <name (type.id)> heap dump|start_profiler|stop_profiler|stats
418 Subcommand ``release`` makes TCMalloc release no-longer-used memory back to the kernel at once.
422 ceph tell <name (type.id)> heap release
424 Subcommand ``(get|set)_release_rate`` gets or sets the TCMalloc memory release rate. TCMalloc releases
425 no-longer-used memory back to the kernel gradually; the rate controls how quickly this happens.
426 Increase this setting to make TCMalloc return unused memory more frequently: 0 means never return
427 memory to the system, and 1 means wait for 1000 pages after releasing a page to the system. The default is ``1.0``.
431 ceph tell <name (type.id)> heap get_release_rate|set_release_rate {<val>}
436 Inject configuration arguments into monitor.
440 ceph injectargs <injected_args> [<injected_args>...]
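For example, temporarily raising the monitor debug level (the option and value are illustrative; injected settings do not persist across daemon restarts)::

    ceph injectargs '--debug-mon 10/10'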
446 Log supplied text to the monitor log.
450 ceph log <logtext> [<logtext>...]
456 Manage metadata server configuration and administration. It uses some
457 additional subcommands.
459 Subcommand ``compat`` manages compatible features. It uses some additional
462 Subcommand ``rm_compat`` removes compatible feature.
466 ceph mds compat rm_compat <int[0-]>
468 Subcommand ``rm_incompat`` removes incompatible feature.
472 ceph mds compat rm_incompat <int[0-]>
474 Subcommand ``show`` shows mds compatibility settings.
480 Subcommand ``fail`` forces an MDS into the failed state.
484 ceph mds fail <role|gid>
486 Subcommand ``rm`` removes inactive mds.
490 ceph mds rm <int[0-]> <name (type.id)>
492 Subcommand ``rmfailed`` removes failed mds.
496 ceph mds rmfailed <int[0-]>
498 Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.
502 ceph mds set_state <int[0-]> <int[0-20]>
504 Subcommand ``stat`` shows MDS status.
510 Subcommand ``repaired`` marks a damaged MDS rank as no longer damaged.
514 ceph mds repaired <role>
519 Manage monitor configuration and administration. It uses some additional
522 Subcommand ``add`` adds new monitor named <name> at <addr>.
526 ceph mon add <name> <IPaddr[:port]>
528 Subcommand ``dump`` dumps formatted monmap (optionally from epoch)
532 ceph mon dump {<int[0-]>}
534 Subcommand ``getmap`` gets monmap.
538 ceph mon getmap {<int[0-]>}
540 Subcommand ``remove`` removes monitor named <name>.
544 ceph mon remove <name>
546 Subcommand ``stat`` summarizes monitor status.
555 Ceph manager daemon configuration and management.
557 Subcommand ``dump`` dumps the latest MgrMap, which describes the active
558 and standby manager daemons.
564 Subcommand ``fail`` will mark a manager daemon as failed, removing it
565 from the manager map. If it is the active manager daemon, a standby
572 Subcommand ``module ls`` will list currently enabled manager modules (plugins).
578 Subcommand ``module enable`` will enable a manager module. Available modules are included in MgrMap and visible via ``mgr dump``.
582 ceph mgr module enable <module>
584 Subcommand ``module disable`` will disable an active manager module.
588 ceph mgr module disable <module>
590 Subcommand ``metadata`` will report metadata about all manager daemons or, if the name is specified, a single manager daemon.
594 ceph mgr metadata [name]
596 Subcommand ``versions`` will report a count of running daemon versions.
602 Subcommand ``count-metadata`` will report a count of any daemon metadata field.
606 ceph mgr count-metadata <field>
613 Manage OSD configuration and administration. It uses some additional
616 Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
619 Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire> seconds
624 ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
626 Subcommand ``ls`` show blacklisted clients
630 ceph osd blacklist ls
632 Subcommand ``rm`` removes <addr> from the blacklist
636 ceph osd blacklist rm <EntityAddr>
638 Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers
644 Subcommand ``create`` creates new osd (with optional UUID and ID).
646 This command is DEPRECATED as of the Luminous release, and will be removed in
649 Subcommand ``new`` should be used instead.
653 ceph osd create {<uuid>} {<id>}
655 Subcommand ``new`` can be used to create a new OSD or to recreate a previously
656 destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
657 and the command expects a JSON file containing the base64 cephx key for auth
658 entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
659 lockbox access and a dm-crypt key. Specifying a dm-crypt key requires specifying
660 the accompanying lockbox cephx key.
664 ceph osd new {<uuid>} {<id>} -i {<params.json>}
666 The parameters JSON file is optional, but if provided, it is expected to take
667 one of the forms shown below::
670 "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
671 "crush_device_class": "myclass"
677 "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
678 "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
679 "dmcrypt_key": "<dm-crypt key>",
680 "crush_device_class": "myclass"
686 "crush_device_class": "myclass"
689 The "crush_device_class" property is optional. If specified, it will set the
690 initial CRUSH device class for the new OSD.
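To illustrate, a hypothetical invocation that recreates OSD ``7`` with a pre-generated UUID and a parameters file of the form shown above (the UUID, id, and file name are illustrative; the secret would normally be generated with a tool such as ceph-authtool)::

    ceph osd new 2b6a0e12-1d3e-4f90-9a2f-0e6f1f5f3a11 7 -i params.json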
693 Subcommand ``crush`` is used for CRUSH management. It uses some additional
696 Subcommand ``add`` adds or updates crushmap position and weight for <name> with
697 <weight> and location <args>.
701 ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
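For example, placing ``osd.7`` under a host bucket with a weight of 1.0 (the host name and weight are illustrative; the weight conventionally reflects the device's capacity in TiB)::

    ceph osd crush add osd.7 1.0 host=node1 root=default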
703 Subcommand ``add-bucket`` adds no-parent (probably root) crush bucket <name> of
708 ceph osd crush add-bucket <name> <type>
710 Subcommand ``create-or-move`` creates entry or moves existing entry for <name>
711 <weight> at/to location <args>.
715 ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
718 Subcommand ``dump`` dumps crush map.
724 Subcommand ``get-tunable`` gets the crush tunable straw_calc_version
728 ceph osd crush get-tunable straw_calc_version
730 Subcommand ``link`` links existing entry for <name> under location <args>.
734 ceph osd crush link <name> <args> [<args>...]
736 Subcommand ``move`` moves existing entry for <name> to location <args>.
740 ceph osd crush move <name> <args> [<args>...]
742 Subcommand ``remove`` removes <name> from crush map (everywhere, or just at
747 ceph osd crush remove <name> {<ancestor>}
749 Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>
753 ceph osd crush rename-bucket <srcname> <dstname>
755 Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.
759 ceph osd crush reweight <name> <float[0.0-]>
761 Subcommand ``reweight-all`` recalculates the weights for the tree to
762 ensure they sum correctly
766 ceph osd crush reweight-all
768 Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
769 to <weight> in crush map
773 ceph osd crush reweight-subtree <name> <weight>
775 Subcommand ``rm`` removes <name> from crush map (everywhere, or just at
780 ceph osd crush rm <name> {<ancestor>}
782 Subcommand ``rule`` is used for creating crush rules. It uses some additional
785 Subcommand ``create-erasure`` creates crush rule <name> for erasure coded pool
786 created with <profile> (default default).
790 ceph osd crush rule create-erasure <name> {<profile>}
792 Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
793 replicate across buckets of type <type>, using a choose mode of <firstn|indep>
794 (default firstn; indep best for erasure pools).
798 ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
800 Subcommand ``dump`` dumps crush rule <name> (default all).
804 ceph osd crush rule dump {<name>}
806 Subcommand ``ls`` lists crush rules.
810 ceph osd crush rule ls
812 Subcommand ``rm`` removes crush rule <name>.
816 ceph osd crush rule rm <name>
818 Subcommand ``set``, used alone, sets the crush map from the input file.
824 Subcommand ``set`` with osdname/osd.id updates the crushmap position and weight
825 for <name> to <weight> with location <args>.
829 ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
831 Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
832 tunable that can be set is straw_calc_version.
836 ceph osd crush set-tunable straw_calc_version <value>
838 Subcommand ``show-tunables`` shows current crush tunables.
842 ceph osd crush show-tunables
844 Subcommand ``tree`` shows the crush buckets and items in a tree view.
850 Subcommand ``tunables`` sets crush tunable values to <profile>.
854 ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default
856 Subcommand ``unlink`` unlinks <name> from crush map (everywhere, or just at
861 ceph osd crush unlink <name> {<ancestor>}
863 Subcommand ``df`` shows OSD utilization
867 ceph osd df {plain|tree}
869 Subcommand ``deep-scrub`` initiates deep scrub on specified osd.
873 ceph osd deep-scrub <who>
875 Subcommand ``down`` sets osd(s) <id> [<id>...] down.
879 ceph osd down <ids> [<ids>...]
881 Subcommand ``dump`` prints summary of OSD map.
885 ceph osd dump {<int[0-]>}
887 Subcommand ``erasure-code-profile`` is used for managing the erasure code
888 profiles. It uses some additional subcommands.
890 Subcommand ``get`` gets erasure code profile <name>.
894 ceph osd erasure-code-profile get <name>
896 Subcommand ``ls`` lists all erasure code profiles.
900 ceph osd erasure-code-profile ls
902 Subcommand ``rm`` removes erasure code profile <name>.
906 ceph osd erasure-code-profile rm <name>
908 Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
909 pairs. Add a --force at the end to override an existing profile (IT IS RISKY).
913 ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
915 Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.
919 ceph osd find <int[0-]>
921 Subcommand ``getcrushmap`` gets CRUSH map.
925 ceph osd getcrushmap {<int[0-]>}
927 Subcommand ``getmap`` gets OSD map.
931 ceph osd getmap {<int[0-]>}
933 Subcommand ``getmaxosd`` shows largest OSD id.
939 Subcommand ``in`` sets osd(s) <id> [<id>...] in.
943 ceph osd in <ids> [<ids>...]
945 Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
946 MORE REPLICAS EXIST, BE CAREFUL.
950 ceph osd lost <int[0-]> {--yes-i-really-mean-it}
952 Subcommand ``ls`` shows all OSD ids.
956 ceph osd ls {<int[0-]>}
958 Subcommand ``lspools`` lists pools.
962 ceph osd lspools {<int>}
964 Subcommand ``map`` finds pg for <object> in <pool>.
968 ceph osd map <poolname> <objectname>
970 Subcommand ``metadata`` fetches metadata for osd <id>.
974 ceph osd metadata {int[0-]} (default all)
976 Subcommand ``out`` sets osd(s) <id> [<id>...] out.
980 ceph osd out <ids> [<ids>...]
982 Subcommand ``ok-to-stop`` checks whether the list of OSD(s) can be
983 stopped without immediately making data unavailable. That is, all
984 data should remain readable and writeable, although data redundancy
985 may be reduced as some PGs may end up in a degraded (but active)
986 state. It will return a success code if it is okay to stop the
987 OSD(s), or an error code and informative message if it is not or if no
988 conclusion can be drawn at the current time.
992 ceph osd ok-to-stop <id> [<ids>...]
994 Subcommand ``pause`` pauses osd.
1000 Subcommand ``perf`` prints dump of OSD perf summary stats.
1006 Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
1011 ceph osd pg-temp <pgid> {<id> [<id>...]}
1013 Subcommand ``force-create-pg`` forces creation of pg <pgid>.
1017 ceph osd force-create-pg <pgid>
1020 Subcommand ``pool`` is used for managing data pools. It uses some additional
1023 Subcommand ``create`` creates pool.
1027 ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
1028 {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>}
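For example, creating a replicated pool with 64 placement groups (the pool name and PG count are illustrative)::

    ceph osd pool create mypool 64 64 replicated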
1030 Subcommand ``delete`` deletes pool.
1034 ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}
1036 Subcommand ``get`` gets pool parameter <var>.
1040 ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed
1042 Only for tiered pools::
1044 ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
1045 target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
1046 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
1047 min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n
1049 Only for erasure coded pools::
1051 ceph osd pool get <poolname> erasure_code_profile
1053 Use ``all`` to get all pool parameters that apply to the pool's type::
1055 ceph osd pool get <poolname> all
1057 Subcommand ``get-quota`` obtains object or byte limits for pool.
1061 ceph osd pool get-quota <poolname>
1063 Subcommand ``ls`` lists pools
1067 ceph osd pool ls {detail}
1069 Subcommand ``mksnap`` makes snapshot <snap> in <pool>.
1073 ceph osd pool mksnap <poolname> <snap>
1075 Subcommand ``rename`` renames <srcpool> to <destpool>.
1079 ceph osd pool rename <poolname> <poolname>
1081 Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.
1085 ceph osd pool rmsnap <poolname> <snap>
1087 Subcommand ``set`` sets pool parameter <var> to <val>.
1091 ceph osd pool set <poolname> size|min_size|pg_num|
1092 pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
1093 hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
1094 target_max_bytes|target_max_objects|cache_target_dirty_ratio|
1095 cache_target_dirty_high_ratio|
1096 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
1097 min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
1098 hit_set_search_last_n
1099 <val> {--yes-i-really-mean-it}
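For instance, setting the replica count on a pool (the pool name is illustrative)::

    ceph osd pool set mypool size 3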
1101 Subcommand ``set-quota`` sets object or byte limit on pool.
1105 ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
1107 Subcommand ``stats`` obtains stats from all pools, or from the specified pool.
1111 ceph osd pool stats {<name>}
1113 Subcommand ``application`` is used for adding an annotation to the given
1114 pool. By default, the possible applications are object, block, and file
1115 storage (corresponding app-names are "rgw", "rbd", and "cephfs"). However,
1116 there might be other applications as well. Depending on the application,
1117 some processing may or may not be conducted.
1119 Subcommand ``disable`` disables the given application on the given pool.
1123 ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}
1125 Subcommand ``enable`` adds an annotation to the given pool for the mentioned
1130 ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}
1132 Subcommand ``get`` displays the value for the given key that is associated
1133 with the given application of the given pool. Not passing the optional
1134 arguments would display all key-value pairs for all applications for all
1139 ceph osd pool application get {<pool-name>} {<app>} {<key>}
1141 Subcommand ``rm`` removes the key-value pair for the given key in the given
1142 application of the given pool.
1146 ceph osd pool application rm <pool-name> <app> <key>
1148 Subcommand ``set`` associates or updates, if it already exists, a key-value
1149 pair with the given application for the given pool.
1153 ceph osd pool application set <pool-name> <app> <key> <value>
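As an illustration, enabling the ``rbd`` application on a pool and then listing its application metadata (the pool name is illustrative)::

    ceph osd pool application enable mypool rbd
    ceph osd pool application get mypool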
1155 Subcommand ``primary-affinity`` adjusts osd primary-affinity from 0.0 <= <weight>
1160 ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
1162 Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
1167 ceph osd primary-temp <pgid> <id>
1169 Subcommand ``repair`` initiates repair on a specified osd.
1173 ceph osd repair <who>
1175 Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.
1179 ceph osd reweight <int[0-]> <float[0.0-1.0]>
1181 Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
1182 [overload-percentage-for-consideration, default 120].
1186 ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
1189 Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
1190 [overload-percentage-for-consideration, default 120].
1194 ceph osd reweight-by-utilization {<int[100-]>}
1197 Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.
1202 ceph osd rm <ids> [<ids>...]
1204 Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
1205 entity's keys and all of its dm-crypt and daemon-private config key
1208 This command will not remove the OSD from crush, nor will it remove the
1209 OSD from the OSD map. Instead, once the command successfully completes,
1210 the OSD will show up as *destroyed*.
1212 In order to mark an OSD as destroyed, the OSD must first be marked as
1217 ceph osd destroy <id> {--yes-i-really-mean-it}
1220 Subcommand ``purge`` performs a combination of ``osd destroy``,
1221 ``osd rm`` and ``osd crush remove``.
1225 ceph osd purge <id> {--yes-i-really-mean-it}
1227 Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
1228 destroy an OSD without reducing overall data redundancy or durability.
1229 It will return a success code if it is definitely safe, or an error
1230 code and informative message if it is not or if no conclusion can be
1231 drawn at the current time.
1235 ceph osd safe-to-destroy <id> [<ids>...]
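For example, a hedged check-then-remove sequence for a hypothetical OSD ``7``, using ``purge`` to also drop it from the CRUSH and OSD maps::

    ceph osd safe-to-destroy 7
    ceph osd purge 7 --yes-i-really-mean-it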
1237 Subcommand ``scrub`` initiates scrub on specified osd.
1241 ceph osd scrub <who>
1243 Subcommand ``set`` sets cluster-wide <flag> by updating OSD map.
1244 The ``full`` flag has not been honored since the Mimic release, and
1245 ``ceph osd set full`` is not supported in the Octopus release.
1249 ceph osd set pause|noup|nodown|noout|noin|nobackfill|
1250 norebalance|norecover|noscrub|nodeep-scrub|notieragent
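A common illustrative use is suppressing data migration while an OSD host is briefly taken down for maintenance::

    ceph osd set noout
    # ... perform the maintenance, then ...
    ceph osd unset noout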
1252 Subcommand ``setcrushmap`` sets crush map from input file.
1256 ceph osd setcrushmap
1258 Subcommand ``setmaxosd`` sets new maximum osd value.
1262 ceph osd setmaxosd <int[0-]>
1264 Subcommand ``set-require-min-compat-client`` requires the cluster to remain backward
1265 compatible with the specified client version. This subcommand prevents you from
1266 making any changes (e.g., crush tunables, or using new features) that
1267 would violate the current setting. Note that this subcommand will fail if
1268 any connected daemon or client is not compatible with the features offered by
1269 the given <version>. To see the features and releases of all clients connected
1270 to the cluster, please see `ceph features`_.
1274 ceph osd set-require-min-compat-client <version>
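For example (the release name is illustrative)::

    ceph osd set-require-min-compat-client luminous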
1276 Subcommand ``stat`` prints summary of OSD map.
1282 Subcommand ``tier`` is used for managing tiers. It uses some additional
1285 Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
1290 ceph osd tier add <poolname> <poolname> {--force-nonempty}
1292 Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
1293 to existing pool <pool> (the first one).
1297 ceph osd tier add-cache <poolname> <poolname> <int[0-]>
1299 Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.
1303 ceph osd tier cache-mode <poolname> writeback|readproxy|readonly|none
1305 Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
1306 <pool> (the first one).
1310 ceph osd tier remove <poolname> <poolname>
1312 Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.
1316 ceph osd tier remove-overlay <poolname>
1318 Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
1323 ceph osd tier set-overlay <poolname> <poolname>
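Putting the tier subcommands together, a hypothetical write-back cache tier setup for a base pool might look like this (the pool names are illustrative)::

    ceph osd tier add basepool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay basepool cachepool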
1325 Subcommand ``tree`` prints OSD tree.
1329 ceph osd tree {<int[0-]>}
1331 Subcommand ``unpause`` unpauses osd.
1337 Subcommand ``unset`` unsets cluster-wide <flag> by updating OSD map.
1341 ceph osd unset pause|noup|nodown|noout|noin|nobackfill|
1342 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1348 It is used for managing the placement groups in OSDs. It uses some
1349 additional subcommands.
1351 Subcommand ``debug`` shows debug info about pgs.
1355 ceph pg debug unfound_objects_exist|degraded_pgs_exist
1357 Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.
1361 ceph pg deep-scrub <pgid>
1363 Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid
1368 ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1370 Subcommand ``dump_json`` shows human-readable version of pg map in json only.
1374 ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1376 Subcommand ``dump_pools_json`` shows pg pools info in json only.
1380 ceph pg dump_pools_json
1382 Subcommand ``dump_stuck`` shows information about stuck pgs.
1386 ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
1389 Subcommand ``getmap`` gets binary pg map to -o/stdout.
1395 Subcommand ``ls`` lists pg with specific pool, osd, state
1399 ceph pg ls {<int>} {<pg-state> [<pg-state>...]}
1401 Subcommand ``ls-by-osd`` lists pg on osd [osd]
1405 ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
1406 {<pg-state> [<pg-state>...]}
1408 Subcommand ``ls-by-pool`` lists pg with pool = [poolname]
1412 ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}
1414 Subcommand ``ls-by-primary`` lists pg with primary = [osd]
1418 ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
1419 {<pg-state> [<pg-state>...]}
1421 Subcommand ``map`` shows mapping of pg to osds.
1427 Subcommand ``repair`` starts repair on <pgid>.
1431 ceph pg repair <pgid>
1433 Subcommand ``scrub`` starts scrub on <pgid>.
1437 ceph pg scrub <pgid>
1439 Subcommand ``stat`` shows placement group status.
1449 Cause a specific MON to enter or exit quorum.
1453 ceph tell mon.<id> quorum enter|exit
1458 Reports status of monitor quorum.
1468 Reports the full status of the cluster, with optional title tag strings.
1472 ceph report {<tags> [<tags>...]}
1478 Shows cluster status.
1488 Sends a command to a specific daemon.
1492 ceph tell <name (type.id)> <command> [options...]
1495 List all available commands.
1499 ceph tell <name (type.id)> help
1504 Show the mon daemon version.
1513 .. option:: -i infile
1515 will specify an input file to be passed along as a payload with the
1516 command to the monitor cluster. This is only used for specific
1519 .. option:: -o outfile
1521 will write any payload returned by the monitor cluster with its
1522 reply to outfile. Only specific monitor commands (e.g. osd getmap)
1525 .. option:: --setuser user
1527 will apply the appropriate user ownership to the file specified by
1530 .. option:: --setgroup group
1532 will apply the appropriate group ownership to the file specified by
1535 .. option:: -c ceph.conf, --conf=ceph.conf
1537 Use ceph.conf configuration file instead of the default
1538 ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.
1540 .. option:: --id CLIENT_ID, --user CLIENT_ID
1542 Client id for authentication.
1544 .. option:: --name CLIENT_NAME, -n CLIENT_NAME
1546 Client name for authentication.
1548 .. option:: --cluster CLUSTER
1550 Name of the Ceph cluster.
1552 .. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
1554 Submit admin-socket commands via admin sockets in /var/run/ceph.
1556 .. option:: --admin-socket ADMIN_SOCKET_NOPE
1558 You probably mean --admin-daemon
1560 .. option:: -s, --status
1562 Show cluster status.
1564 .. option:: -w, --watch
1566 Watch live cluster changes on the default 'cluster' channel
1568 .. option:: -W, --watch-channel
1570 Watch live cluster changes on any channel (cluster, audit, cephadm, or * for all)
1572 .. option:: --watch-debug
1576 .. option:: --watch-info
1580 .. option:: --watch-sec
1582 Watch security events.
1584 .. option:: --watch-warn
1586 Watch warning events.
1588 .. option:: --watch-error
1592 .. option:: --version, -v
1596 .. option:: --verbose
1600 .. option:: --concise
1604 .. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format
1608 .. option:: --connect-timeout CLUSTER_TIMEOUT
1610 Set a timeout for connecting to the cluster.
1612 .. option:: --no-increasing
1614 ``--no-increasing`` is off by default, so increasing an OSD's weight is allowed
1615 when using the ``reweight-by-utilization`` or ``test-reweight-by-utilization`` commands.
1616 If this option is used with these commands, it prevents the OSD weight from being
1617 increased even if the OSD is underutilized.
1621 Block until completion (scrub and deep-scrub only).
1626 :program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
1627 the Ceph documentation at http://ceph.com/docs for more information.
1633 :doc:`ceph-mon <ceph-mon>`\(8),
1634 :doc:`ceph-osd <ceph-osd>`\(8),
1635 :doc:`ceph-mds <ceph-mds>`\(8)