.. index:: control, commands

Monitor commands are issued using the ``ceph`` utility::

    ceph [-m monhost] {command}

The command is usually (though not always) of the form::

    ceph {subsystem} {command}
Execute the following to display the current cluster status. ::
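
    ceph -s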
Execute the following to display a running summary of cluster status
and major events. ::
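
    ceph -w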
Execute the following to show the monitor quorum, including which monitors are
participating and which one is the leader. ::
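
    ceph mon stat
    ceph quorum_status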
Execute the following to query the status of a single monitor, including whether
or not it is in the quorum. ::

    ceph tell mon.[id] mon_status
where the value of ``[id]`` can be determined, e.g., from ``ceph -s``.
Authentication Subsystem
========================
To add a keyring for an OSD, execute the following::

    ceph auth add {osd} {--in-file|-i} {path-to-osd-keyring}
To list the cluster's keys and their capabilities, execute the following::
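
    ceph auth ls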
Placement Group Subsystem
=========================
To display the statistics for all placement groups (PGs), execute the following::

    ceph pg dump [--format {format}]
The valid formats are ``plain`` (default), ``json``, ``json-pretty``, ``xml``, and ``xml-pretty``.
When implementing monitoring and other tools, it is best to use ``json`` format.
JSON parsing is more deterministic than the human-oriented ``plain``, and the layout is much
less variable from release to release. The ``jq`` utility can be invaluable when extracting
data from JSON output.
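
For example, to list each PG with its current state (a minimal sketch; the exact JSON
layout varies by release, and recent releases nest the per-PG records under ``pg_map``)::

    ceph pg dump --format json | jq -r '.pg_map.pg_stats[] | "\(.pgid) \(.state)"'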
To display the statistics for all placement groups stuck in a specified state,
execute the following::

    ceph pg dump_stuck inactive|unclean|stale|undersized|degraded [--format {format}] [-t|--threshold {seconds}]
``--format`` may be ``plain`` (default), ``json``, ``json-pretty``, ``xml``, or ``xml-pretty``.

``--threshold`` defines how many seconds "stuck" is (default: 300).
**Inactive** Placement groups cannot process reads or writes because they are waiting for an OSD
with the most up-to-date data to come back.

**Unclean** Placement groups contain objects that are not replicated the desired number
of times. They should be recovering.

**Stale** Placement groups are in an unknown state - the OSDs that host them have not
reported to the monitor cluster in a while (configured by
``mon_osd_report_timeout``).
Revert "lost" objects to their prior state (a previous version), or delete them if
they were just created and have no previous version to roll back to. ::

    ceph pg {pgid} mark_unfound_lost revert|delete
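
OSD Subsystem
=============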
Query OSD subsystem status. ::
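
    ceph osd stat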
Write a copy of the most recent OSD map to a file. See
:ref:`osdmaptool <osdmaptool>`. ::

    ceph osd getmap -o file
Write a copy of the CRUSH map from the most recent OSD map to a file. ::

    ceph osd getcrushmap -o file
The foregoing is functionally equivalent to ::

    ceph osd getmap -o /tmp/osdmap
    osdmaptool /tmp/osdmap --export-crush file
Dump the OSD map. Valid formats for ``--format`` are ``plain``, ``json``, ``json-pretty``,
``xml``, and ``xml-pretty``. If no ``--format`` option is given, the OSD map is
dumped as plain text. As above, JSON format is best for tools, scripting, and other automation. ::

    ceph osd dump [--format {format}]
Dump the OSD map as a tree with one line per OSD containing weight
and state. ::

    ceph osd tree [--format {format}]
Find out where a specific object is or would be stored in the system::

    ceph osd map <pool-name> <object-name>
Add or move a new item (OSD) with the given id/name/weight at the specified
location. ::

    ceph osd crush set {id} {weight} [{loc1} [{loc2} ...]]
Remove an existing item (OSD) from the CRUSH map. ::

    ceph osd crush remove {name}

Remove an existing bucket from the CRUSH map. ::

    ceph osd crush remove {bucket-name}
Move an existing bucket from one position in the hierarchy to another. ::

    ceph osd crush move {id} {loc1} [{loc2} ...]
Set the weight of the item given by ``{name}`` to ``{weight}``. ::

    ceph osd crush reweight {name} {weight}
Mark an OSD as ``lost``. This may result in permanent data loss. Use with caution. ::

    ceph osd lost {id} [--yes-i-really-mean-it]
Create a new OSD. If no UUID is given, it will be set automatically when the OSD
starts up. ::

    ceph osd create [{uuid}]
Remove the given OSD(s). ::

    ceph osd rm [{id}...]
Query the current ``max_osd`` parameter in the OSD map. ::
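
    ceph osd getmaxosd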
Import the given CRUSH map. ::

    ceph osd setcrushmap -i file
Set the ``max_osd`` parameter in the OSD map. This defaults to 10000;
most admins will never need to adjust it. ::
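
    ceph osd setmaxosd {count}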
Mark OSD ``{osd-num}`` down. ::

    ceph osd down {osd-num}

Mark OSD ``{osd-num}`` out of the distribution (i.e. allocated no data). ::

    ceph osd out {osd-num}

Mark ``{osd-num}`` in the distribution (i.e. allocated data). ::

    ceph osd in {osd-num}
Set or clear the pause flags in the OSD map. If set, no I/O requests
will be sent to any OSD. Clearing the flags via unpause results in
resending pending requests. ::
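
    ceph osd pause
    ceph osd unpause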
Set the override weight (reweight) of ``{osd-num}`` to ``{weight}``. Two OSDs with the
same weight will receive roughly the same number of I/O requests and
store approximately the same amount of data. ``ceph osd reweight``
sets an override weight on the OSD. This value is in the range 0 to 1,
and forces CRUSH to re-place (1-weight) of the data that would
otherwise live on this drive. It does not change the weights assigned
to the buckets above the OSD in the CRUSH map, and is a corrective
measure in case the normal CRUSH distribution is not working out quite
right. For instance, if one of your OSDs is at 90% and the others are
at 50%, you could reduce this weight to compensate. ::
    ceph osd reweight {osd-num} {weight}
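
For example, to shift roughly 15% of the data away from a hypothetical over-full ``osd.45``::

    ceph osd reweight 45 0.85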
Balance OSD fullness by reducing the override weight of OSDs which are
overly utilized. Note that these override (aka ``reweight``) values
default to 1.00000 and are relative only to each other; they are not absolute.
It is crucial to distinguish them from CRUSH weights, which reflect the
absolute capacity of a bucket in TiB. By default this command adjusts the
override weight of OSDs whose utilization deviates from the average by 20% or more,
but if you include a ``threshold``, that percentage will be used instead. ::
    ceph osd reweight-by-utilization [threshold [max_change [max_osds]]] [--no-increasing]
To limit the step by which any OSD's reweight will be changed, specify
``max_change``, which defaults to 0.05. To limit the number of OSDs that will
be adjusted, specify ``max_osds`` as well; the default is 4. Increasing these
parameters can speed up leveling of OSD utilization, at the potential cost of
greater impact on client operations due to more data moving at once.
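
For example, the following invocation (with hypothetical values) treats OSDs more than
10% from the average as candidates, limits each step to 0.05, and adjusts at most 8 OSDs
per run::

    ceph osd reweight-by-utilization 110 0.05 8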
To determine which and how many PGs and OSDs would be affected by a given invocation,
you can do a dry run that makes no changes. ::

    ceph osd test-reweight-by-utilization [threshold [max_change max_osds]] [--no-increasing]
Adding ``--no-increasing`` to either command prevents increasing any
override weights that are currently < 1.00000. This can be useful when
you are balancing in a hurry to remedy ``full`` or ``nearfull`` OSDs or
when some OSDs are being evacuated or slowly brought into service.
Deployments utilizing Nautilus (or later revisions of Luminous and Mimic)
that have no pre-Luminous clients may instead wish to enable the
``balancer`` module for ``ceph-mgr``.
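
A minimal sketch of enabling the balancer (on recent releases the module may already be
active, and ``upmap`` mode requires that all clients be Luminous or newer)::

    ceph mgr module enable balancer
    ceph balancer mode upmap
    ceph balancer on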
Add/remove an IP address to/from the blocklist. When adding an address,
you can specify how long it should be blocklisted in seconds; otherwise,
it will default to 1 hour. A blocklisted address is prevented from
connecting to any OSD. Blocklisting is most often used to prevent a
lagging metadata server from making bad changes to data on the OSDs.

These commands are mainly useful for failure testing, as
blocklists are normally maintained automatically and shouldn't need
manual intervention. ::
    ceph osd blocklist add ADDRESS[:source_port] [TIME]
    ceph osd blocklist rm ADDRESS[:source_port]
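
For example, to blocklist a hypothetical client address for ten minutes, list the current
entries, and then remove it::

    ceph osd blocklist add 192.0.2.7 600
    ceph osd blocklist ls
    ceph osd blocklist rm 192.0.2.7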
Creates/deletes a snapshot of a pool. ::

    ceph osd pool mksnap {pool-name} {snap-name}
    ceph osd pool rmsnap {pool-name} {snap-name}
Creates/deletes/renames a storage pool. ::

    ceph osd pool create {pool-name} [pg_num [pgp_num]]
    ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
    ceph osd pool rename {old-name} {new-name}
Changes a pool setting. ::

    ceph osd pool set {pool-name} {field} {value}
Valid fields include:

* ``size``: Sets the number of copies of data in the pool.
* ``pg_num``: The number of placement groups.
* ``pgp_num``: The effective number of placement groups to use when calculating placement.
* ``crush_rule``: The rule to use for mapping object placement in the cluster.
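
For example, assuming a hypothetical replicated pool named ``mypool``, the following sets
its replica count to 3::

    ceph osd pool set mypool size 3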
Get the value of a pool setting. ::

    ceph osd pool get {pool-name} {field}
Valid fields include:

* ``pg_num``: The number of placement groups.
* ``pgp_num``: The effective number of placement groups to use when calculating placement.
Sends a scrub command to OSD ``{osd-num}``. To send the command to all OSDs, use ``*``. ::

    ceph osd scrub {osd-num}
Sends a repair command to OSD ``{osd-num}``. To send the command to all OSDs, use ``*``. ::
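
    ceph osd repair {osd-num}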
Runs a simple throughput benchmark against OSD.N, writing ``TOTAL_DATA_BYTES``
in write requests of ``BYTES_PER_WRITE`` each. By default, the test
writes 1 GB in total in 4-MB increments.
The benchmark is non-destructive and will not overwrite existing live
OSD data, but might temporarily affect the performance of clients
concurrently accessing the OSD. ::

    ceph tell osd.N bench [TOTAL_DATA_BYTES] [BYTES_PER_WRITE]
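
For example, to benchmark a hypothetical ``osd.0`` with the defaults made explicit
(1 GB total in 4 MB writes)::

    ceph tell osd.0 bench 1073741824 4194304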
To clear an OSD's caches between benchmark runs, use the ``cache drop`` command ::

    ceph tell osd.N cache drop

To get the cache statistics of an OSD, use the ``cache status`` command ::

    ceph tell osd.N cache status
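
MDS Subsystem
=============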
Change configuration parameters on a running mds. ::

    ceph tell mds.{mds-id} config set {setting} {value}
For example, the following enables debug messages::

    ceph tell mds.0 config set debug_ms 1

Display the status of all metadata servers. ::
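
    ceph mds stat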
Mark the active MDS as failed, triggering failover to a standby if present. ::
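
    ceph mds fail 0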
.. todo:: ``ceph mds`` subcommands missing docs: set, dump, getmap, stop, setmap
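
Mon Subsystem
=============

Show monitor stats. ::

    ceph mon stat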
    e2: 3 mons at {a=127.0.0.1:40000/0,b=127.0.0.1:40001/0,c=127.0.0.1:40002/0}, election epoch 6, quorum 0,1,2 a,b,c
The ``quorum`` list at the end lists monitor nodes that are part of the current quorum.

This is also available more directly::

    ceph quorum_status -f json-pretty
.. code-block:: javascript

    "quorum_leader_name": "a",
    "fsid": "ba807e74-b64f-4b72-b43f-597dfe60ddbc",
    "modified": "2016-12-26 14:42:09.288066",
    "created": "2016-12-26 14:42:03.573585",
    "addr": "127.0.0.1:40000\/0",
    "public_addr": "127.0.0.1:40000\/0"
    "addr": "127.0.0.1:40001\/0",
    "public_addr": "127.0.0.1:40001\/0"
    "addr": "127.0.0.1:40002\/0",
    "public_addr": "127.0.0.1:40002\/0"
The above will block until a quorum is reached.

For a status of just a single monitor::

    ceph tell mon.[name] mon_status
where the value of ``[name]`` can be taken from ``ceph quorum_status``. Sample output::
426 "required_con": "9025616074522624",
430 "quorum_con": "1152921504336314367",
435 "outside_quorum": [],
436 "extra_probe_peers": [],
440 "fsid": "ba807e74-b64f-4b72-b43f-597dfe60ddbc",
441 "modified": "2016-12-26 14:42:09.288066",
442 "created": "2016-12-26 14:42:03.573585",
453 "addr": "127.0.0.1:40000\/0",
454 "public_addr": "127.0.0.1:40000\/0"
459 "addr": "127.0.0.1:40001\/0",
460 "public_addr": "127.0.0.1:40001\/0"
465 "addr": "127.0.0.1:40002\/0",
466 "public_addr": "127.0.0.1:40002\/0"
A dump of the monitor state::
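
    ceph mon dump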
    dumped monmap epoch 2
    fsid ba807e74-b64f-4b72-b43f-597dfe60ddbc
    last_changed 2016-12-26 14:42:09.288066
    created 2016-12-26 14:42:03.573585
    0: 127.0.0.1:40000/0 mon.a
    1: 127.0.0.1:40001/0 mon.b
    2: 127.0.0.1:40002/0 mon.c