.. index:: control, commands

==================
 Control Commands
==================


Monitor Commands
================

Monitor commands are issued using the ``ceph`` utility::

    ceph [-m monhost] {command}

The command is usually (though not always) of the form::

    ceph {subsystem} {command}


System Commands
===============

Execute the following to display the current status of the cluster. ::

    ceph -s
    ceph status

Execute the following to display a running summary of the status of the cluster,
and major events. ::

    ceph -w

Execute the following to show the monitor quorum, including which monitors are
participating and which one is the leader. ::

    ceph quorum_status

Execute the following to query the status of a single monitor, including whether
or not it is in the quorum. ::

    ceph [-m monhost] mon_status


Authentication Subsystem
========================

To add a keyring for an OSD, execute the following::

    ceph auth add {osd} {--in-file|-i} {path-to-osd-keyring}

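For example, to register the keyring of a hypothetical ``osd.0`` from its default
keyring location (both the id and the path are illustrative)::

    ceph auth add osd.0 -i /var/lib/ceph/osd/ceph-0/keyring
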
To list the cluster's keys and their capabilities, execute the following::

    ceph auth ls


Placement Group Subsystem
=========================

To display the statistics for all placement groups, execute the following::

    ceph pg dump [--format {format}]

The valid formats are ``plain`` (default) and ``json``.

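For example, to capture the full statistics dump as JSON for later inspection
(the output path is illustrative)::

    ceph pg dump --format json > /tmp/pg_dump.json
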
To display the statistics for all placement groups stuck in a specified state,
execute the following::

    ceph pg dump_stuck inactive|unclean|stale|undersized|degraded [--format {format}] [-t|--threshold {seconds}]


``--format`` may be ``plain`` (default) or ``json``.

``--threshold`` defines how many seconds "stuck" is (default: 300).

**Inactive** Placement groups cannot process reads or writes because they are waiting for an OSD
with the most up-to-date data to come back up.

**Unclean** Placement groups contain objects that are not replicated the desired number
of times. They should be recovering.

**Stale** Placement groups are in an unknown state - the OSDs that host them have not
reported to the monitor cluster in a while (configured by
``mon_osd_report_timeout``).

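For example, to list placement groups that have been stuck ``inactive`` for more
than ten minutes and report them as JSON (the threshold value is illustrative)::

    ceph pg dump_stuck inactive --format json --threshold 600
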
85Delete "lost" objects or revert them to their prior state, either a previous version
86or delete them if they were just created. ::
87
88 ceph pg {pgid} mark_unfound_lost revert|delete
89
90
91OSD Subsystem
92=============
93
94Query OSD subsystem status. ::
95
96 ceph osd stat
97
98Write a copy of the most recent OSD map to a file. See
99`osdmaptool`_. ::
100
101 ceph osd getmap -o file
102
103.. _osdmaptool: ../../man/8/osdmaptool
104
105Write a copy of the crush map from the most recent OSD map to
106file. ::
107
108 ceph osd getcrushmap -o file
109
110The foregoing functionally equivalent to ::
111
112 ceph osd getmap -o /tmp/osdmap
113 osdmaptool /tmp/osdmap --export-crush file
114
115Dump the OSD map. Valid formats for ``-f`` are ``plain`` and ``json``. If no
116``--format`` option is given, the OSD map is dumped as plain text. ::
117
118 ceph osd dump [--format {format}]
119
120Dump the OSD map as a tree with one line per OSD containing weight
121and state. ::
122
123 ceph osd tree [--format {format}]
124
125Find out where a specific object is or would be stored in the system::
126
127 ceph osd map <pool-name> <object-name>
128
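For example, to see where an object named ``myobject`` would be placed in a pool
named ``mypool`` (both names are hypothetical)::

    ceph osd map mypool myobject
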
Add or move a new item (OSD) with the given id/name/weight at the specified
location. ::

    ceph osd crush set {id} {weight} [{loc1} [{loc2} ...]]

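The location arguments are ``key=value`` pairs describing CRUSH buckets. For
example, to place a hypothetical ``osd.0`` with a weight of 1.0 under a host
bucket named ``node1`` in the ``default`` root (the bucket names are illustrative)::

    ceph osd crush set osd.0 1.0 root=default host=node1
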
Remove an existing item (OSD) from the CRUSH map. ::

    ceph osd crush remove {name}

Remove an existing bucket from the CRUSH map. ::

    ceph osd crush remove {bucket-name}

Move an existing bucket from one position in the hierarchy to another. ::

    ceph osd crush move {id} {loc1} [{loc2} ...]

Set the weight of the item given by ``{name}`` to ``{weight}``. ::

    ceph osd crush reweight {name} {weight}

Mark an OSD as lost. This may result in permanent data loss. Use with caution. ::

    ceph osd lost {id} [--yes-i-really-mean-it]

Create a new OSD. If no UUID is given, it will be set automatically when the OSD
starts up. ::

    ceph osd create [{uuid}]

Remove the given OSD(s). ::

    ceph osd rm [{id}...]

Query the current ``max_osd`` parameter in the OSD map. ::

    ceph osd getmaxosd

Import the given CRUSH map. ::

    ceph osd setcrushmap -i file

Set the ``max_osd`` parameter in the OSD map. This is necessary when
expanding the storage cluster. ::

    ceph osd setmaxosd {count}

Mark OSD ``{osd-num}`` down. ::

    ceph osd down {osd-num}

Mark OSD ``{osd-num}`` out of the distribution (i.e. allocated no data). ::

    ceph osd out {osd-num}

Mark ``{osd-num}`` in the distribution (i.e. allocated data). ::

    ceph osd in {osd-num}

Set or clear the pause flags in the OSD map. If set, no IO requests
will be sent to any OSD. Clearing the flags via unpause results in
resending pending requests. ::

    ceph osd pause
    ceph osd unpause

Set the override weight (reweight) of ``{osd-num}`` to ``{weight}``. Two
OSDs with the same weight will receive roughly the same number of I/O
requests and store approximately the same amount of data. ``ceph osd
reweight`` sets an override weight on the OSD. This value is in the
range 0 to 1, and forces CRUSH to re-place (1-weight) of the data that
would otherwise live on this drive. It does not change the weights
assigned to the buckets above the OSD in the CRUSH map, and is a
corrective measure in case the normal CRUSH distribution is not working
out quite right. For instance, if one of your OSDs is at 90% and the
others are at 50%, you could reduce this weight to try to compensate
for it. ::

    ceph osd reweight {osd-num} {weight}

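For example, assuming a hypothetical over-full ``osd.7``, the following tells
CRUSH to move roughly 20% of its data elsewhere::

    ceph osd reweight 7 0.8
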
Reweights all the OSDs by reducing the weight of OSDs which are
heavily overused. By default it will adjust the weights downward on
OSDs which have more than 120% of the average utilization, but if you
include ``threshold`` it will use that percentage instead. ::

    ceph osd reweight-by-utilization [threshold]

Describes what ``reweight-by-utilization`` would do without actually
changing any weights. ::

    ceph osd test-reweight-by-utilization

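For example, to preview and then apply a reweight using a 110% threshold (the
value is illustrative)::

    ceph osd test-reweight-by-utilization 110
    ceph osd reweight-by-utilization 110
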
Adds/removes the address to/from the blacklist. When adding an address,
you can specify how long it should be blacklisted in seconds; otherwise,
it will default to 1 hour. A blacklisted address is prevented from
connecting to any OSD. Blacklisting is most often used to prevent a
lagging metadata server from making bad changes to data on the OSDs.

These commands are mostly only useful for failure testing, as
blacklists are normally maintained automatically and shouldn't need
manual intervention. ::

    ceph osd blacklist add ADDRESS[:source_port] [TIME]
    ceph osd blacklist rm ADDRESS[:source_port]

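For example, to blacklist a hypothetical client address for ten minutes and then
remove the entry again (the address and nonce are illustrative)::

    ceph osd blacklist add 192.168.0.10:0/3214 600
    ceph osd blacklist rm 192.168.0.10:0/3214
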
Creates/deletes a snapshot of a pool. ::

    ceph osd pool mksnap {pool-name} {snap-name}
    ceph osd pool rmsnap {pool-name} {snap-name}

Creates/deletes/renames a storage pool. ::

    ceph osd pool create {pool-name} pg_num [pgp_num]
    ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
    ceph osd pool rename {old-name} {new-name}

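For example, to create a hypothetical pool named ``mypool`` with 128 placement
groups and later delete it (note that the pool name must be given twice to
confirm deletion)::

    ceph osd pool create mypool 128 128
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
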
Changes a pool setting. ::

    ceph osd pool set {pool-name} {field} {value}

Valid fields are:

    * ``size``: Sets the number of copies of data in the pool.
    * ``pg_num``: The number of placement groups.
    * ``pgp_num``: The effective number of placement groups to use when calculating placement.
    * ``crush_ruleset``: The rule number to use for mapping placement.

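For example, to keep three copies of every object in the hypothetical pool
``mypool``::

    ceph osd pool set mypool size 3
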
Get the value of a pool setting. ::

    ceph osd pool get {pool-name} {field}

Valid fields are:

    * ``pg_num``: The number of placement groups.
    * ``pgp_num``: The effective number of placement groups used when calculating placement.
    * ``lpg_num``: The number of local placement groups.
    * ``lpgp_num``: The number used for placing the local placement groups.


Sends a scrub command to OSD ``{osd-num}``. To send the command to all OSDs, use ``*``. ::

    ceph osd scrub {osd-num}

Sends a repair command to ``osd.N``. To send the command to all OSDs, use ``*``. ::

    ceph osd repair N

Runs a simple throughput benchmark against ``osd.N``, writing ``TOTAL_DATA_BYTES``
in write requests of ``BYTES_PER_WRITE`` each. By default, the test
writes 1 GB in total in 4-MB increments.
The benchmark is non-destructive and will not overwrite existing live
OSD data, but might temporarily affect the performance of clients
concurrently accessing the OSD. ::

    ceph tell osd.N bench [TOTAL_DATA_BYTES] [BYTES_PER_WRITE]

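For example, to have a hypothetical ``osd.0`` write 100 MB in 4 MB requests
(104857600 bytes in total, 4194304 bytes per write)::

    ceph tell osd.0 bench 104857600 4194304
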


MDS Subsystem
=============

Change configuration parameters on a running MDS. ::

    ceph tell mds.{mds-id} injectargs --{switch} {value} [--{switch} {value}]

For example, this enables debug messages::

    ceph tell mds.0 injectargs --debug_ms 1 --debug_mds 10

Displays the status of all metadata servers. ::

    ceph mds stat

Marks the active MDS as failed, triggering failover to a standby if present. ::

    ceph mds fail 0

.. todo:: ``ceph mds`` subcommands missing docs: set, dump, getmap, stop, setmap


Mon Subsystem
=============

Show monitor stats::

    ceph mon stat

    e2: 3 mons at {a=127.0.0.1:40000/0,b=127.0.0.1:40001/0,c=127.0.0.1:40002/0}, election epoch 6, quorum 0,1,2 a,b,c


The ``quorum`` list at the end lists monitor nodes that are part of the current quorum.

This is also available more directly::

    ceph quorum_status -f json-pretty

.. code-block:: javascript

  {
    "election_epoch": 6,
    "quorum": [
      0,
      1,
      2
    ],
    "quorum_names": [
      "a",
      "b",
      "c"
    ],
    "quorum_leader_name": "a",
    "monmap": {
      "epoch": 2,
      "fsid": "ba807e74-b64f-4b72-b43f-597dfe60ddbc",
      "modified": "2016-12-26 14:42:09.288066",
      "created": "2016-12-26 14:42:03.573585",
      "features": {
        "persistent": [
          "kraken"
        ],
        "optional": []
      },
      "mons": [
        {
          "rank": 0,
          "name": "a",
          "addr": "127.0.0.1:40000\/0",
          "public_addr": "127.0.0.1:40000\/0"
        },
        {
          "rank": 1,
          "name": "b",
          "addr": "127.0.0.1:40001\/0",
          "public_addr": "127.0.0.1:40001\/0"
        },
        {
          "rank": 2,
          "name": "c",
          "addr": "127.0.0.1:40002\/0",
          "public_addr": "127.0.0.1:40002\/0"
        }
      ]
    }
  }


The above will block until a quorum is reached.

For a status of just the monitor you connect to (use ``-m HOST:PORT``
to select)::

    ceph mon_status -f json-pretty


.. code-block:: javascript

  {
    "name": "b",
    "rank": 1,
    "state": "peon",
    "election_epoch": 6,
    "quorum": [
      0,
      1,
      2
    ],
    "features": {
      "required_con": "9025616074522624",
      "required_mon": [
        "kraken"
      ],
      "quorum_con": "1152921504336314367",
      "quorum_mon": [
        "kraken"
      ]
    },
    "outside_quorum": [],
    "extra_probe_peers": [],
    "sync_provider": [],
    "monmap": {
      "epoch": 2,
      "fsid": "ba807e74-b64f-4b72-b43f-597dfe60ddbc",
      "modified": "2016-12-26 14:42:09.288066",
      "created": "2016-12-26 14:42:03.573585",
      "features": {
        "persistent": [
          "kraken"
        ],
        "optional": []
      },
      "mons": [
        {
          "rank": 0,
          "name": "a",
          "addr": "127.0.0.1:40000\/0",
          "public_addr": "127.0.0.1:40000\/0"
        },
        {
          "rank": 1,
          "name": "b",
          "addr": "127.0.0.1:40001\/0",
          "public_addr": "127.0.0.1:40001\/0"
        },
        {
          "rank": 2,
          "name": "c",
          "addr": "127.0.0.1:40002\/0",
          "public_addr": "127.0.0.1:40002\/0"
        }
      ]
    }
  }

A dump of the monitor state::

    ceph mon dump

    dumped monmap epoch 2
    epoch 2
    fsid ba807e74-b64f-4b72-b43f-597dfe60ddbc
    last_changed 2016-12-26 14:42:09.288066
    created 2016-12-26 14:42:03.573585
    0: 127.0.0.1:40000/0 mon.a
    1: 127.0.0.1:40001/0 mon.b
    2: 127.0.0.1:40002/0 mon.c