.. index:: control, commands

==================
 Control Commands
==================


Monitor Commands
================

Monitor commands are issued using the ceph utility::

    ceph [-m monhost] {command}

The command is usually (though not always) of the form::

    ceph {subsystem} {command}


System Commands
===============

Execute the following to display the current status of the cluster. ::

    ceph -s
    ceph status

Execute the following to display a running summary of the status of the cluster,
and major events. ::

    ceph -w

Execute the following to show the monitor quorum, including which monitors are
participating and which one is the leader. ::

    ceph quorum_status

Execute the following to query the status of a single monitor, including whether
or not it is in the quorum. ::

    ceph [-m monhost] mon_status


Authentication Subsystem
========================

To add a keyring for an OSD, execute the following::

    ceph auth add {osd} {--in-file|-i} {path-to-osd-keyring}

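For example, to add the keyring for ``osd.0`` (the keyring path below is only
illustrative; use the actual location of the OSD's keyring)::

    ceph auth add osd.0 -i /var/lib/ceph/osd/ceph-0/keyring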

To list the cluster's keys and their capabilities, execute the following::

    ceph auth ls


Placement Group Subsystem
=========================

To display the statistics for all placement groups, execute the following::

    ceph pg dump [--format {format}]

The valid formats are ``plain`` (default) and ``json``.

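For example, to dump all placement group statistics in JSON format::

    ceph pg dump --format json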

To display the statistics for all placement groups stuck in a specified state,
execute the following::

    ceph pg dump_stuck inactive|unclean|stale|undersized|degraded [--format {format}] [-t|--threshold {seconds}]


``--format`` may be ``plain`` (default) or ``json``

``--threshold`` defines how many seconds "stuck" is (default: 300)

**Inactive** Placement groups cannot process reads or writes because they are waiting for an OSD
with the most up-to-date data to come back.

**Unclean** Placement groups contain objects that are not replicated the desired number
of times. They should be recovering.

**Stale** Placement groups are in an unknown state - the OSDs that host them have not
reported to the monitor cluster in a while (configured by
``mon_osd_report_timeout``).

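For example, the following hypothetical invocation lists placement groups that
have been stuck in the ``stale`` state for at least ten minutes (600 seconds),
formatted as JSON::

    ceph pg dump_stuck stale --format json --threshold 600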

Revert "lost" objects to their prior state: either roll them back to a previous
version, or delete them if they were just created. ::

    ceph pg {pgid} mark_unfound_lost revert|delete

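For example, assuming a hypothetical placement group ``2.5`` whose unfound
objects should be rolled back::

    ceph pg 2.5 mark_unfound_lost revert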

OSD Subsystem
=============

Query OSD subsystem status. ::

    ceph osd stat

Write a copy of the most recent OSD map to a file. See
`osdmaptool`_. ::

    ceph osd getmap -o file

.. _osdmaptool: ../../man/8/osdmaptool

Write a copy of the crush map from the most recent OSD map to a
file. ::

    ceph osd getcrushmap -o file

The foregoing is functionally equivalent to ::

    ceph osd getmap -o /tmp/osdmap
    osdmaptool /tmp/osdmap --export-crush file

Dump the OSD map. Valid formats for ``-f`` are ``plain`` and ``json``. If no
``--format`` option is given, the OSD map is dumped as plain text. ::

    ceph osd dump [--format {format}]

Dump the OSD map as a tree with one line per OSD containing weight
and state. ::

    ceph osd tree [--format {format}]

Find out where a specific object is or would be stored in the system::

    ceph osd map <pool-name> <object-name>

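For example, with a hypothetical pool ``mypool`` and object ``myobject``::

    ceph osd map mypool myobject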

Add or move a new item (OSD) with the given id/name/weight at the specified
location. ::

    ceph osd crush set {id} {weight} [{loc1} [{loc2} ...]]

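The location is given as one or more ``bucket-type=name`` pairs describing where
the OSD sits in the CRUSH hierarchy. A hypothetical example that places
``osd.0`` with weight 1.0 on host ``node1`` under the default root::

    ceph osd crush set osd.0 1.0 root=default host=node1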

Remove an existing item (OSD) from the CRUSH map. ::

    ceph osd crush remove {name}

Remove an existing bucket from the CRUSH map. ::

    ceph osd crush remove {bucket-name}

Move an existing bucket from one position in the hierarchy to another. ::

    ceph osd crush move {id} {loc1} [{loc2} ...]

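For example, to move a hypothetical host bucket ``node1`` into rack ``rack2``
under the default root::

    ceph osd crush move node1 root=default rack=rack2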

Set the weight of the item given by ``{name}`` to ``{weight}``. ::

    ceph osd crush reweight {name} {weight}

Mark an OSD as lost. This may result in permanent data loss. Use with caution. ::

    ceph osd lost {id} [--yes-i-really-mean-it]

Create a new OSD. If no UUID is given, it will be set automatically when the OSD
starts up. ::

    ceph osd create [{uuid}]

Remove the given OSD(s). ::

    ceph osd rm [{id}...]

Query the current ``max_osd`` parameter in the OSD map. ::

    ceph osd getmaxosd

Import the given crush map. ::

    ceph osd setcrushmap -i file

Set the ``max_osd`` parameter in the OSD map. This is necessary when
expanding the storage cluster. ::

    ceph osd setmaxosd {n}

Mark OSD ``{osd-num}`` down. ::

    ceph osd down {osd-num}

Mark OSD ``{osd-num}`` out of the distribution (i.e. allocated no data). ::

    ceph osd out {osd-num}

Mark ``{osd-num}`` in the distribution (i.e. allocated data). ::

    ceph osd in {osd-num}

Set or clear the pause flags in the OSD map. If set, no IO requests
will be sent to any OSD. Clearing the flags via unpause results in
resending pending requests. ::

    ceph osd pause
    ceph osd unpause

Set the weight of ``{osd-num}`` to ``{weight}``. Two OSDs with the
same weight will receive roughly the same number of I/O requests and
store approximately the same amount of data. ``ceph osd reweight``
sets an override weight on the OSD. This value is in the range 0 to 1,
and forces CRUSH to re-place (1-weight) of the data that would
otherwise live on this drive. It does not change the weights assigned
to the buckets above the OSD in the crush map, and is a corrective
measure in case the normal CRUSH distribution is not working out quite
right. For instance, if one of your OSDs is at 90% and the others are
at 50%, you could reduce this weight to try and compensate for it. ::

    ceph osd reweight {osd-num} {weight}

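For the 90%/50% scenario above, a hypothetical override weight of 0.8 on the
overloaded OSD (here ``osd.12``) asks CRUSH to re-place roughly 20% of its
data elsewhere::

    ceph osd reweight 12 0.8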

Reweight all the OSDs by reducing the weight of OSDs that are
heavily overused. By default it adjusts the weights downward on OSDs
whose utilization is 120% or more of the average; if you supply a
threshold, that percentage is used instead. ::

    ceph osd reweight-by-utilization [threshold]

Report what ``reweight-by-utilization`` would do without actually
changing any weights. ::

    ceph osd test-reweight-by-utilization

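For example, to preview and then apply a reweight using a hypothetical
threshold of 110% of average utilization::

    ceph osd test-reweight-by-utilization 110
    ceph osd reweight-by-utilization 110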

Add/remove an address to/from the blacklist. When adding an address,
you can specify how long it should be blacklisted in seconds; otherwise,
it will default to 1 hour. A blacklisted address is prevented from
connecting to any OSD. Blacklisting is most often used to prevent a
lagging metadata server from making bad changes to data on the OSDs.

These commands are mostly only useful for failure testing, as
blacklists are normally maintained automatically and shouldn't need
manual intervention. ::

    ceph osd blacklist add ADDRESS[:source_port] [TIME]
    ceph osd blacklist rm ADDRESS[:source_port]

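For example, to blacklist a hypothetical client address for one hour (3600
seconds) and later remove the entry::

    ceph osd blacklist add 192.168.0.100 3600
    ceph osd blacklist rm 192.168.0.100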

Creates/deletes a snapshot of a pool. ::

    ceph osd pool mksnap {pool-name} {snap-name}
    ceph osd pool rmsnap {pool-name} {snap-name}

Creates/deletes/renames a storage pool. ::

    ceph osd pool create {pool-name} pg_num [pgp_num]
    ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
    ceph osd pool rename {old-name} {new-name}

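For example, to create a hypothetical pool ``mypool`` with 128 placement
groups, rename it, and then delete it (the pool name is given twice, together
with the confirmation flag)::

    ceph osd pool create mypool 128
    ceph osd pool rename mypool newpool
    ceph osd pool delete newpool newpool --yes-i-really-really-mean-it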

Changes a pool setting. ::

    ceph osd pool set {pool-name} {field} {value}

Valid fields are:

    * ``size``: Sets the number of copies of data in the pool.
    * ``pg_num``: The number of placement groups in the pool.
    * ``pgp_num``: The effective number of placement groups to use when calculating placement.
    * ``crush_rule``: The CRUSH rule to use for mapping placement.

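For example, to keep three copies of each object in a hypothetical pool
``mypool``::

    ceph osd pool set mypool size 3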

Get the value of a pool setting. ::

    ceph osd pool get {pool-name} {field}

Valid fields are:

    * ``pg_num``: The number of placement groups in the pool.
    * ``pgp_num``: The effective number of placement groups used when calculating placement.
    * ``lpg_num``: The number of local placement groups.
    * ``lpgp_num``: The number used for placing the local placement groups.


Sends a scrub command to OSD ``{osd-num}``. To send the command to all OSDs, use ``*``. ::

    ceph osd scrub {osd-num}

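For example, to scrub ``osd.0``, or all OSDs (quote the asterisk so the shell
does not expand it)::

    ceph osd scrub 0
    ceph osd scrub '*'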

Sends a repair command to OSD.N. To send the command to all OSDs, use ``*``. ::

    ceph osd repair N

Runs a simple throughput benchmark against OSD.N, writing ``TOTAL_DATA_BYTES``
in write requests of ``BYTES_PER_WRITE`` each. By default, the test
writes 1 GB in total in 4-MB increments.
The benchmark is non-destructive and will not overwrite existing live
OSD data, but might temporarily affect the performance of clients
concurrently accessing the OSD. ::

    ceph tell osd.N bench [TOTAL_DATA_BYTES] [BYTES_PER_WRITE]

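For example, the following is equivalent to the defaults, writing 1 GB
(1073741824 bytes) to a hypothetical ``osd.0`` in 4 MB (4194304 bytes) writes::

    ceph tell osd.0 bench 1073741824 4194304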

MDS Subsystem
=============

Change configuration parameters on a running mds. ::

    ceph tell mds.{mds-id} injectargs --{switch} {value} [--{switch} {value}]

For example, to enable debug messages::

    ceph tell mds.0 injectargs --debug_ms 1 --debug_mds 10

Display the status of all metadata servers. ::

    ceph mds stat

Mark the active MDS as failed, triggering failover to a standby if present. ::

    ceph mds fail 0

.. todo:: ``ceph mds`` subcommands missing docs: set, dump, getmap, stop, setmap


Mon Subsystem
=============

Show monitor stats::

    ceph mon stat

    e2: 3 mons at {a=127.0.0.1:40000/0,b=127.0.0.1:40001/0,c=127.0.0.1:40002/0}, election epoch 6, quorum 0,1,2 a,b,c


The ``quorum`` list at the end lists monitor nodes that are part of the current quorum.

This is also available more directly::

    ceph quorum_status -f json-pretty

.. code-block:: javascript

    {
      "election_epoch": 6,
      "quorum": [
        0,
        1,
        2
      ],
      "quorum_names": [
        "a",
        "b",
        "c"
      ],
      "quorum_leader_name": "a",
      "monmap": {
        "epoch": 2,
        "fsid": "ba807e74-b64f-4b72-b43f-597dfe60ddbc",
        "modified": "2016-12-26 14:42:09.288066",
        "created": "2016-12-26 14:42:03.573585",
        "features": {
          "persistent": [
            "kraken"
          ],
          "optional": []
        },
        "mons": [
          {
            "rank": 0,
            "name": "a",
            "addr": "127.0.0.1:40000\/0",
            "public_addr": "127.0.0.1:40000\/0"
          },
          {
            "rank": 1,
            "name": "b",
            "addr": "127.0.0.1:40001\/0",
            "public_addr": "127.0.0.1:40001\/0"
          },
          {
            "rank": 2,
            "name": "c",
            "addr": "127.0.0.1:40002\/0",
            "public_addr": "127.0.0.1:40002\/0"
          }
        ]
      }
    }


The above will block until a quorum is reached.

For a status of just the monitor you connect to (use ``-m HOST:PORT``
to select)::

    ceph mon_status -f json-pretty


.. code-block:: javascript

    {
      "name": "b",
      "rank": 1,
      "state": "peon",
      "election_epoch": 6,
      "quorum": [
        0,
        1,
        2
      ],
      "features": {
        "required_con": "9025616074522624",
        "required_mon": [
          "kraken"
        ],
        "quorum_con": "1152921504336314367",
        "quorum_mon": [
          "kraken"
        ]
      },
      "outside_quorum": [],
      "extra_probe_peers": [],
      "sync_provider": [],
      "monmap": {
        "epoch": 2,
        "fsid": "ba807e74-b64f-4b72-b43f-597dfe60ddbc",
        "modified": "2016-12-26 14:42:09.288066",
        "created": "2016-12-26 14:42:03.573585",
        "features": {
          "persistent": [
            "kraken"
          ],
          "optional": []
        },
        "mons": [
          {
            "rank": 0,
            "name": "a",
            "addr": "127.0.0.1:40000\/0",
            "public_addr": "127.0.0.1:40000\/0"
          },
          {
            "rank": 1,
            "name": "b",
            "addr": "127.0.0.1:40001\/0",
            "public_addr": "127.0.0.1:40001\/0"
          },
          {
            "rank": 2,
            "name": "c",
            "addr": "127.0.0.1:40002\/0",
            "public_addr": "127.0.0.1:40002\/0"
          }
        ]
      }
    }

A dump of the monitor state::

    ceph mon dump

    dumped monmap epoch 2
    epoch 2
    fsid ba807e74-b64f-4b72-b43f-597dfe60ddbc
    last_changed 2016-12-26 14:42:09.288066
    created 2016-12-26 14:42:03.573585
    0: 127.0.0.1:40000/0 mon.a
    1: 127.0.0.1:40001/0 mon.b
    2: 127.0.0.1:40002/0 mon.c