| **ceph** **compact**
-| **ceph** **config-key** [ *del* | *exists* | *get* | *list* | *dump* | *put* ] ...
+| **ceph** **config-key** [ *rm* | *exists* | *get* | *ls* | *dump* | *set* ] ...
| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...
| **ceph** **health** *{detail}*
-| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...
+| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *get_release_rate* \| *set_release_rate* \| *stats* ] ...
| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]
| **ceph** **log** *<logtext>* [ *<logtext>*... ]
-| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...
+| **ceph** **mds** [ *compat* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *repaired* ] ...
| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...
| **ceph** **mon_status**
-| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...
+| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *ls* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...
| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...
| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...
+| **ceph** **osd** **pool** **application** [ *disable* \| *enable* \| *get* \| *rm* \| *set* ] ...
+
| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...
-| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...
+| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *stat* ] ...
| **ceph** **quorum** [ *enter* \| *exit* ]
| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
-| **ceph** **tell** *<name (type.id)> <args> [<args>...]*
+| **ceph** **tell** *<name (type.id)> <command> [options...]*
| **ceph** **version**
Manages configuration keys. It uses some additional subcommands.
-Subcommand ``del`` deletes configuration key.
+Subcommand ``rm`` deletes configuration key.
Usage::
- ceph config-key del <key>
+ ceph config-key rm <key>
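+
+For example, to store a value, read it back, and then remove it (the key
+``mykey`` and value ``bar`` here are only illustrations)::
+
+	ceph config-key set mykey bar
+	ceph config-key get mykey
+	ceph config-key rm mykey
+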
Subcommand ``exists`` checks for a configuration key's existence.
ceph config-key get <key>
-Subcommand ``list`` lists configuration keys.
+Subcommand ``ls`` lists configuration keys.
Usage::
Usage::
- ceph heap dump|start_profiler|stop_profiler|release|stats
+ ceph heap dump|start_profiler|stop_profiler|stats
+
+Subcommand ``release`` makes TCMalloc release no-longer-used memory back to the kernel at once.
+
+Usage::
+
+ ceph heap release
+
+Subcommand ``(get|set)_release_rate`` gets or sets the TCMalloc memory release rate. TCMalloc
+releases no-longer-used memory back to the kernel gradually; the rate controls how quickly this
+happens. Increase this setting to make TCMalloc return unused memory more frequently. ``0`` means
+never return memory to the system, and ``1`` means wait for 1000 pages after releasing a page to
+the system. The default is ``1.0``.
+
+Usage::
+
+ ceph heap get_release_rate|set_release_rate {<val>}
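+
+For example, to query the current release rate and then make TCMalloc return unused memory
+more aggressively (the value ``2.0`` here is only an illustration)::
+
+	ceph heap get_release_rate
+	ceph heap set_release_rate 2.0
+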
injectargs
----------
ceph mds compat show
-Subcommand ``deactivate`` stops mds.
-
-Usage::
-
- ceph mds deactivate <who>
-
Subcommand ``fail`` forces an mds to the failed state.
Usage::
- ceph mds fail <who>
+ ceph mds fail <role|gid>
Subcommand ``rm`` removes inactive mds.
ceph mds stat
-Subcommand ``tell`` sends command to particular mds.
+Subcommand ``repaired`` marks a damaged MDS rank as no longer damaged.
Usage::
- ceph mds tell <who> <args> [<args>...]
+ ceph mds repaired <role>
mon
---
ceph mgr count-metadata <field>
+.. _ceph-admin-osd:
osd
---
ceph osd crush remove <name> {<ancestor>}
-Subcommand ``rename-bucket`` renames buchket <srcname> to <stname>
+Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>
Usage::
Usage::
- ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
- pgp_num|crush_rule|auid|write_fadvise_dontneed
+ ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed
Only for tiered pools::
Usage::
- ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
+ ceph osd pool set <poolname> size|min_size|pg_num|
pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
target_max_bytes|target_max_objects|cache_target_dirty_ratio|
cache_target_dirty_high_ratio|
- cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
+ cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
hit_set_search_last_n
<val> {--yes-i-really-mean-it}
ceph osd pool stats {<name>}
+Subcommand ``application`` is used for adding an annotation to the given
+pool. By default, the possible applications are object, block, and file
+storage (the corresponding app-names are "rgw", "rbd", and "cephfs"), but
+other applications are possible as well. Depending on the application, some
+additional processing may be performed.
+
+Subcommand ``disable`` disables the given application on the given pool.
+
+Usage::
+
+ ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}
+
+Subcommand ``enable`` adds an annotation to the given pool for the mentioned
+application.
+
+Usage::
+
+ ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}
+
+Subcommand ``get`` displays the value for the given key that is associated
+with the given application of the given pool. If the optional arguments are
+omitted, it displays all key-value pairs for all applications of all pools.
+
+Usage::
+
+ ceph osd pool application get {<pool-name>} {<app>} {<key>}
+
+Subcommand ``rm`` removes the key-value pair for the given key in the given
+application of the given pool.
+
+Usage::
+
+ ceph osd pool application rm <pool-name> <app> <key>
+
+Subcommand ``set`` associates a key-value pair with the given application for
+the given pool, or updates the value if the key already exists.
+
+Usage::
+
+ ceph osd pool application set <pool-name> <app> <key> <value>
+
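+For example, to tag a pool for use by RBD and record a key-value pair for that
+application (the pool name ``mypool`` and the pair ``foo``/``bar`` here are
+only illustrations)::
+
+	ceph osd pool application enable mypool rbd
+	ceph osd pool application set mypool rbd foo bar
+	ceph osd pool application get mypool rbd foo
+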
Subcommand ``primary-affinity`` adjusts the osd primary-affinity to a value in
the range 0.0 <= <weight> <= 1.0
Usage::
- ceph pg ls {<int>} {active|clean|down|replay|splitting|
- scrubbing|degraded|inconsistent|peering|repair|
- recovery|backfill_wait|incomplete|stale| remapped|
- deep_scrub|backfill|backfill_toofull|recovery_wait|
- undersized [active|clean|down|replay|splitting|
- scrubbing|degraded|inconsistent|peering|repair|
- recovery|backfill_wait|incomplete|stale|remapped|
- deep_scrub|backfill|backfill_toofull|recovery_wait|
- undersized...]}
+ ceph pg ls {<int>} {<pg-state> [<pg-state>...]}
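+
+For example, to list the placement groups of pool 1 that are both active and
+clean (the pool id ``1`` here is only an illustration)::
+
+	ceph pg ls 1 active clean
+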
Subcommand ``ls-by-osd`` lists pg on osd [osd]
Usage::
ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
- {active|clean|down|replay|splitting|
- scrubbing|degraded|inconsistent|peering|repair|
- recovery|backfill_wait|incomplete|stale| remapped|
- deep_scrub|backfill|backfill_toofull|recovery_wait|
- undersized [active|clean|down|replay|splitting|
- scrubbing|degraded|inconsistent|peering|repair|
- recovery|backfill_wait|incomplete|stale|remapped|
- deep_scrub|backfill|backfill_toofull|recovery_wait|
- undersized...]}
+ {<pg-state> [<pg-state>...]}
Subcommand ``ls-by-pool`` lists pg with pool = [poolname]
Usage::
- ceph pg ls-by-pool <poolstr> {<int>} {active|
- clean|down|replay|splitting|
- scrubbing|degraded|inconsistent|peering|repair|
- recovery|backfill_wait|incomplete|stale| remapped|
- deep_scrub|backfill|backfill_toofull|recovery_wait|
- undersized [active|clean|down|replay|splitting|
- scrubbing|degraded|inconsistent|peering|repair|
- recovery|backfill_wait|incomplete|stale|remapped|
- deep_scrub|backfill|backfill_toofull|recovery_wait|
- undersized...]}
+ ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}
Subcommand ``ls-by-primary`` lists pg with primary = [osd]
Usage::
ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
- {active|clean|down|replay|splitting|
- scrubbing|degraded|inconsistent|peering|repair|
- recovery|backfill_wait|incomplete|stale| remapped|
- deep_scrub|backfill|backfill_toofull|recovery_wait|
- undersized [active|clean|down|replay|splitting|
- scrubbing|degraded|inconsistent|peering|repair|
- recovery|backfill_wait|incomplete|stale|remapped|
- deep_scrub|backfill|backfill_toofull|recovery_wait|
- undersized...]}
+ {<pg-state> [<pg-state>...]}
Subcommand ``map`` shows mapping of pg to osds.
ceph pg scrub <pgid>
-Subcommand ``set_full_ratio`` sets ratio at which pgs are considered full.
-
-Usage::
-
- ceph pg set_full_ratio <float[0.0-1.0]>
-
-Subcommand ``set_backfillfull_ratio`` sets ratio at which pgs are considered too full to backfill.
-
-Usage::
-
- ceph pg set_backfillfull_ratio <float[0.0-1.0]>
-
-Subcommand ``set_nearfull_ratio`` sets ratio at which pgs are considered nearly
-full.
-
-Usage::
-
- ceph pg set_nearfull_ratio <float[0.0-1.0]>
-
Subcommand ``stat`` shows placement group status.
Usage::
Usage::
- ceph tell <name (type.id)> <args> [<args>...]
+ ceph tell <name (type.id)> <command> [options...]
List all available commands.
If this option is used with these commands, it will help not to increase the osd weight
even if the osd is underutilized.
+.. option:: --block
+
+ block until completion (scrub and deep-scrub only)
Availability
============