1 :orphan:
2
3 ==================================
4 ceph -- ceph administration tool
5 ==================================
6
7 .. program:: ceph
8
9 Synopsis
10 ========
11
12 | **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...
13
14 | **ceph** **compact**
15
16 | **ceph** **config** [ *dump* | *ls* | *help* | *get* | *show* | *show-with-defaults* | *set* | *rm* | *log* | *reset* | *assimilate-conf* | *generate-minimal-conf* ] ...
17
18 | **ceph** **config-key** [ *rm* | *exists* | *get* | *ls* | *dump* | *set* ] ...
19
20 | **ceph** **daemon** *<name>* \| *<path>* *<command>* ...
21
22 | **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]
23
24 | **ceph** **df** *{detail}*
25
26 | **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...
27
28 | **ceph** **fsid**
29
30 | **ceph** **health** *{detail}*
31
32 | **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]
33
34 | **ceph** **log** *<logtext>* [ *<logtext>*... ]
35
36 | **ceph** **mds** [ *compat* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *repaired* ] ...
37
38 | **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...
39
40 | **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *ls* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...
41
42 | **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...
43
44 | **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...
45
46 | **ceph** **osd** **pool** **application** [ *disable* \| *enable* \| *get* \| *rm* \| *set* ] ...
47
48 | **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...
49
50 | **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *stat* ] ...
51
52 | **ceph** **quorum_status**
53
54 | **ceph** **report** { *<tags>* [ *<tags>...* ] }
55
56 | **ceph** **status**
57
58 | **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
59
60 | **ceph** **tell** *<name (type.id)> <command> [options...]*
61
62 | **ceph** **version**
63
64 Description
65 ===========
66
67 :program:`ceph` is a control utility which is used for manual deployment and maintenance
68 of a Ceph cluster. It provides a diverse set of commands that allow deployment of
69 monitors, OSDs, placement groups and MDS daemons, as well as overall maintenance and
70 administration of the cluster.
71
72 Commands
73 ========
74
75 auth
76 ----
77
78 Manage authentication keys. It is used for adding, removing, exporting
79 or updating authentication keys for a particular entity such as a monitor or
80 OSD. It uses some additional subcommands.
81
82 Subcommand ``add`` adds authentication info for a particular entity from an input
83 file, or generates a random key if no input is given, along with any caps specified in the command.
84
85 Usage::
86
87 ceph auth add <entity> {<caps> [<caps>...]}
88
89 Subcommand ``caps`` updates caps for **name** from caps specified in the command.
90
91 Usage::
92
93 ceph auth caps <entity> <caps> [<caps>...]
94
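For example, to change the capabilities of an existing (hypothetical) ``client.foo``
key so that it only has read access to the monitors and to a pool named ``mypool``,
one might run::

    ceph auth caps client.foo mon 'allow r' osd 'allow r pool=mypool'
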
95 Subcommand ``del`` deletes all caps for ``name``.
96
97 Usage::
98
99 ceph auth del <entity>
100
101 Subcommand ``export`` writes keyring for requested entity, or master keyring if
102 none given.
103
104 Usage::
105
106 ceph auth export {<entity>}
107
108 Subcommand ``get`` writes keyring file with requested key.
109
110 Usage::
111
112 ceph auth get <entity>
113
114 Subcommand ``get-key`` displays requested key.
115
116 Usage::
117
118 ceph auth get-key <entity>
119
120 Subcommand ``get-or-create`` adds authentication info for a particular entity
121 from an input file, or generates a random key if no input is given, along with any
122 caps specified in the command.
123
124 Usage::
125
126 ceph auth get-or-create <entity> {<caps> [<caps>...]}
127
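For example, to create (or fetch, if it already exists) a key for an illustrative
``client.foo`` user with read access to the monitors and read/write access to a pool
named ``mypool``, writing the resulting keyring to a file, one might run::

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=mypool' -o /etc/ceph/ceph.client.foo.keyring
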
128 Subcommand ``get-or-create-key`` gets or adds key for ``name`` from system/caps
129 pairs specified in the command. If key already exists, any given caps must match
130 the existing caps for that key.
131
132 Usage::
133
134 ceph auth get-or-create-key <entity> {<caps> [<caps>...]}
135
136 Subcommand ``import`` reads keyring from input file.
137
138 Usage::
139
140 ceph auth import
141
142 Subcommand ``ls`` lists authentication state.
143
144 Usage::
145
146 ceph auth ls
147
148 Subcommand ``print-key`` displays requested key.
149
150 Usage::
151
152 ceph auth print-key <entity>
153
154 Subcommand ``print_key`` displays requested key.
155
156 Usage::
157
158 ceph auth print_key <entity>
159
160
161 compact
162 -------
163
164 Causes compaction of the monitor's leveldb storage.
165
166 Usage::
167
168 ceph compact
169
170
171 config
172 ------
173
174 Configure the cluster. By default, Ceph daemons and clients retrieve their
175 configuration options from the monitor when they start, and are updated if any of
176 the tracked options is changed at run time. It uses the following additional
177 subcommands.
178
179 Subcommand ``dump`` to dump all options for the cluster
180
181 Usage::
182
183 ceph config dump
184
185 Subcommand ``ls`` to list all option names for the cluster
186
187 Usage::
188
189 ceph config ls
190
191 Subcommand ``help`` to describe the specified configuration option
192
193 Usage::
194
195 ceph config help <option>
196
197 Subcommand ``get`` to dump the option(s) for the specified entity.
198
199 Usage::
200
201 ceph config get <who> {<option>}
202
203 Subcommand ``show`` to display the running configuration of the specified
204 entity. Please note, unlike ``get``, which only shows the options managed
205 by the monitor, ``show`` displays all the configuration values actively in use.
206 These options are pulled from several sources, for instance, the compiled-in
207 default value, the monitor's configuration database, and the ``ceph.conf`` file on
208 the host. Options can even be overridden at runtime. So, there is a chance
209 that the configuration options in the output of ``show`` could be different
210 from those in the output of ``get``.
211
212 Usage::
213
214 ceph config show {<who>}
215
216 Subcommand ``show-with-defaults`` to display the running configuration along with the compiled-in defaults of the specified entity
217
218 Usage::
219
220 ceph config show-with-defaults {<who>}
221
222 Subcommand ``set`` to set an option for one or more specified entities
223
224 Usage::
225
226 ceph config set <who> <option> <value> {--force}
227
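For example, to raise the debug level of all OSD daemons (the option and value
here are only illustrative), one might run::

    ceph config set osd debug_osd 20
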
228 Subcommand ``rm`` to clear an option for one or more entities
229
230 Usage::
231
232 ceph config rm <who> <option>
233
234 Subcommand ``log`` to show recent history of config changes. If the `count` option
235 is omitted it defaults to 10.
236
237 Usage::
238
239 ceph config log {<count>}
240
241 Subcommand ``reset`` to revert configuration to the specified historical version
242
243 Usage::
244
245 ceph config reset <version>
246
247
248 Subcommand ``assimilate-conf`` to assimilate options from stdin, and return a
249 new, minimal conf file
250
251 Usage::
252
253 ceph config assimilate-conf -i <input-config-path> > <output-config-path>
254 ceph config assimilate-conf < <input-config-path>
255
256 Subcommand ``generate-minimal-conf`` to generate a minimal ``ceph.conf`` file,
257 which can be used for bootstrapping a daemon or a client.
258
259 Usage::
260
261 ceph config generate-minimal-conf > <minimal-config-path>
262
263
264 config-key
265 ----------
266
267 Manage configuration keys. Config-key is a general purpose key/value service
268 offered by the monitors. This service is mainly used by Ceph tools and daemons
269 for persisting various settings. Among other things, ceph-mgr modules use it for
270 storing their options. It uses some additional subcommands.
271
272 Subcommand ``rm`` deletes configuration key.
273
274 Usage::
275
276 ceph config-key rm <key>
277
278 Subcommand ``exists`` checks for a configuration key's existence.
279
280 Usage::
281
282 ceph config-key exists <key>
283
284 Subcommand ``get`` gets the configuration key.
285
286 Usage::
287
288 ceph config-key get <key>
289
290 Subcommand ``ls`` lists configuration keys.
291
292 Usage::
293
294 ceph config-key ls
295
296 Subcommand ``dump`` dumps configuration keys and values.
297
298 Usage::
299
300 ceph config-key dump
301
302 Subcommand ``set`` sets a configuration key and value.
303
304 Usage::
305
306 ceph config-key set <key> {<val>}
307
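For example, to store and then read back an arbitrary (illustrative) key/value
pair, one might run::

    ceph config-key set example/key somevalue
    ceph config-key get example/key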
308
309 daemon
310 ------
311
312 Submit admin-socket commands.
313
314 Usage::
315
316 ceph daemon {daemon_name|socket_path} {command} ...
317
318 Example::
319
320 ceph daemon osd.0 help
321
322
323 daemonperf
324 ----------
325
326 Watch performance counters from a Ceph daemon.
327
328 Usage::
329
330 ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
331
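For example, to sample the performance counters of a hypothetical ``osd.0`` every
2 seconds, 10 times, one might run::

    ceph daemonperf osd.0 2 10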
332
333 df
334 --
335
336 Show cluster's free space status.
337
338 Usage::
339
340 ceph df {detail}
341
342 .. _ceph features:
343
344 features
345 --------
346
347 Show the releases and features of all daemons and clients connected
348 to the cluster, along with the number of them in each bucket, grouped by the
349 corresponding features/releases. Each release of Ceph supports a different set
350 of features, expressed by the features bitmask. New cluster features require
351 that clients support the feature, or else they are not allowed to connect to
352 the cluster. As new features or capabilities are enabled after an
353 upgrade, older clients are prevented from connecting.
354
355 Usage::
356
357 ceph features
358
359 fs
360 --
361
362 Manage CephFS file systems. It uses some additional subcommands.
363
364 Subcommand ``ls`` to list file systems
365
366 Usage::
367
368 ceph fs ls
369
370 Subcommand ``new`` to make a new file system using named pools <metadata> and <data>
371
372 Usage::
373
374 ceph fs new <fs_name> <metadata> <data>
375
376 Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map
377
378 Usage::
379
380 ceph fs reset <fs_name> {--yes-i-really-mean-it}
381
382 Subcommand ``rm`` to disable the named file system
383
384 Usage::
385
386 ceph fs rm <fs_name> {--yes-i-really-mean-it}
387
388
389 fsid
390 ----
391
392 Show cluster's FSID/UUID.
393
394 Usage::
395
396 ceph fsid
397
398
399 health
400 ------
401
402 Show cluster's health.
403
404 Usage::
405
406 ceph health {detail}
407
408
409 heap
410 ----
411
412 Show heap usage info (available only if compiled with tcmalloc)
413
414 Usage::
415
416 ceph tell <name (type.id)> heap dump|start_profiler|stop_profiler|stats
417
418 Subcommand ``release`` makes TCMalloc release no-longer-used memory back to the kernel at once.
419
420 Usage::
421
422 ceph tell <name (type.id)> heap release
423
424 Subcommand ``(get|set)_release_rate`` gets or sets the TCMalloc memory release rate. TCMalloc releases
425 no-longer-used memory back to the kernel gradually; the rate controls how quickly this happens.
426 Increase this setting to make TCMalloc return unused memory more frequently. 0 means never return
427 memory to the system, 1 means wait for 1000 pages after releasing a page to the system. The default is ``1.0``.
428
429 Usage::
430
431 ceph tell <name (type.id)> heap get_release_rate|set_release_rate {<val>}
432
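For example, to check and then raise the release rate of a hypothetical ``osd.0``
(the value is illustrative), one might run::

    ceph tell osd.0 heap get_release_rate
    ceph tell osd.0 heap set_release_rate 2.0
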
433 injectargs
434 ----------
435
436 Inject configuration arguments into the monitor.
437
438 Usage::
439
440 ceph injectargs <injected_args> [<injected_args>...]
441
442
443 log
444 ---
445
446 Log supplied text to the monitor log.
447
448 Usage::
449
450 ceph log <logtext> [<logtext>...]
451
452
453 mds
454 ---
455
456 Manage metadata server configuration and administration. It uses some
457 additional subcommands.
458
459 Subcommand ``compat`` manages compatible features. It uses some additional
460 subcommands.
461
462 Subcommand ``rm_compat`` removes compatible feature.
463
464 Usage::
465
466 ceph mds compat rm_compat <int[0-]>
467
468 Subcommand ``rm_incompat`` removes incompatible feature.
469
470 Usage::
471
472 ceph mds compat rm_incompat <int[0-]>
473
474 Subcommand ``show`` shows mds compatibility settings.
475
476 Usage::
477
478 ceph mds compat show
479
480 Subcommand ``fail`` forces an MDS into the failed state.
481
482 Usage::
483
484 ceph mds fail <role|gid>
485
486 Subcommand ``rm`` removes inactive mds.
487
488 Usage::
489
490 ceph mds rm <int[0-]> <name (type.id)>
491
492 Subcommand ``rmfailed`` removes failed mds.
493
494 Usage::
495
496 ceph mds rmfailed <int[0-]>
497
498 Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.
499
500 Usage::
501
502 ceph mds set_state <int[0-]> <int[0-20]>
503
504 Subcommand ``stat`` shows MDS status.
505
506 Usage::
507
508 ceph mds stat
509
510 Subcommand ``repaired`` marks a damaged MDS rank as no longer damaged.
511
512 Usage::
513
514 ceph mds repaired <role>
515
516 mon
517 ---
518
519 Manage monitor configuration and administration. It uses some additional
520 subcommands.
521
522 Subcommand ``add`` adds new monitor named <name> at <addr>.
523
524 Usage::
525
526 ceph mon add <name> <IPaddr[:port]>
527
528 Subcommand ``dump`` dumps formatted monmap (optionally from epoch)
529
530 Usage::
531
532 ceph mon dump {<int[0-]>}
533
534 Subcommand ``getmap`` gets monmap.
535
536 Usage::
537
538 ceph mon getmap {<int[0-]>}
539
540 Subcommand ``remove`` removes monitor named <name>.
541
542 Usage::
543
544 ceph mon remove <name>
545
546 Subcommand ``stat`` summarizes monitor status.
547
548 Usage::
549
550 ceph mon stat
551
552 mgr
553 ---
554
555 Ceph manager daemon configuration and management.
556
557 Subcommand ``dump`` dumps the latest MgrMap, which describes the active
558 and standby manager daemons.
559
560 Usage::
561
562 ceph mgr dump
563
564 Subcommand ``fail`` will mark a manager daemon as failed, removing it
565 from the manager map. If it is the active manager daemon a standby
566 will take its place.
567
568 Usage::
569
570 ceph mgr fail <name>
571
572 Subcommand ``module ls`` will list currently enabled manager modules (plugins).
573
574 Usage::
575
576 ceph mgr module ls
577
578 Subcommand ``module enable`` will enable a manager module. Available modules are included in MgrMap and visible via ``mgr dump``.
579
580 Usage::
581
582 ceph mgr module enable <module>
583
584 Subcommand ``module disable`` will disable an active manager module.
585
586 Usage::
587
588 ceph mgr module disable <module>
589
590 Subcommand ``metadata`` will report metadata about all manager daemons or, if the name is specified, a single manager daemon.
591
592 Usage::
593
594 ceph mgr metadata [name]
595
596 Subcommand ``versions`` will report a count of running daemon versions.
597
598 Usage::
599
600 ceph mgr versions
601
602 Subcommand ``count-metadata`` will report a count of any daemon metadata field.
603
604 Usage::
605
606 ceph mgr count-metadata <field>
607
608 .. _ceph-admin-osd:
609
610 osd
611 ---
612
613 Manage OSD configuration and administration. It uses some additional
614 subcommands.
615
616 Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
617 subcommands.
618
619 Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire> seconds
620 from now).
621
622 Usage::
623
624 ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
625
626 Subcommand ``ls`` shows blacklisted clients.
627
628 Usage::
629
630 ceph osd blacklist ls
631
632 Subcommand ``rm`` removes <addr> from the blacklist.
633
634 Usage::
635
636 ceph osd blacklist rm <EntityAddr>
637
638 Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers
639
640 Usage::
641
642 ceph osd blocked-by
643
644 Subcommand ``create`` creates new osd (with optional UUID and ID).
645
646 This command is DEPRECATED as of the Luminous release, and will be removed in
647 a future release.
648
649 Subcommand ``new`` should instead be used.
650
651 Usage::
652
653 ceph osd create {<uuid>} {<id>}
654
655 Subcommand ``new`` can be used to create a new OSD or to recreate a previously
656 destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
657 and the command expects a JSON file containing the base64 cephx key for auth
658 entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
659 lockbox access and a dm-crypt key. Specifying a dm-crypt key requires specifying
660 the accompanying lockbox cephx key.
661
662 Usage::
663
664 ceph osd new {<uuid>} {<id>} -i {<params.json>}
665
666 The parameters JSON file is optional but, if provided, is expected to follow
667 one of the following formats::
668
669 {
670 "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
671 "crush_device_class": "myclass"
672 }
673
674 Or::
675
676 {
677 "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
678 "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
679 "dmcrypt_key": "<dm-crypt key>",
680 "crush_device_class": "myclass"
681 }
682
683 Or::
684
685 {
686 "crush_device_class": "myclass"
687 }
688
689 The "crush_device_class" property is optional. If specified, it will set the
690 initial CRUSH device class for the new OSD.
691
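For example, to recreate a previously destroyed OSD as id 3 with a fresh uuid and
a parameters file like the ones above (the id and file name are illustrative),
one might run::

    ceph osd new $(uuidgen) 3 -i params.json   # illustrative id and file name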
692
693 Subcommand ``crush`` is used for CRUSH management. It uses some additional
694 subcommands.
695
696 Subcommand ``add`` adds or updates crushmap position and weight for <name> with
697 <weight> and location <args>.
698
699 Usage::
700
701 ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
702
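For example, to add a hypothetical ``osd.5`` with weight 1.0 under an existing
host bucket named ``node1`` (both names are illustrative), one might run::

    ceph osd crush add osd.5 1.0 host=node1
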
703 Subcommand ``add-bucket`` adds no-parent (probably root) crush bucket <name> of
704 type <type>.
705
706 Usage::
707
708 ceph osd crush add-bucket <name> <type>
709
710 Subcommand ``create-or-move`` creates entry or moves existing entry for <name>
711 <weight> at/to location <args>.
712
713 Usage::
714
715 ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
716 [<args>...]
717
718 Subcommand ``dump`` dumps crush map.
719
720 Usage::
721
722 ceph osd crush dump
723
724 Subcommand ``get-tunable`` gets the crush tunable straw_calc_version.
725
726 Usage::
727
728 ceph osd crush get-tunable straw_calc_version
729
730 Subcommand ``link`` links existing entry for <name> under location <args>.
731
732 Usage::
733
734 ceph osd crush link <name> <args> [<args>...]
735
736 Subcommand ``move`` moves existing entry for <name> to location <args>.
737
738 Usage::
739
740 ceph osd crush move <name> <args> [<args>...]
741
742 Subcommand ``remove`` removes <name> from crush map (everywhere, or just at
743 <ancestor>).
744
745 Usage::
746
747 ceph osd crush remove <name> {<ancestor>}
748
749 Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>
750
751 Usage::
752
753 ceph osd crush rename-bucket <srcname> <dstname>
754
755 Subcommand ``reweight`` changes <name>'s weight to <weight> in crush map.
756
757 Usage::
758
759 ceph osd crush reweight <name> <float[0.0-]>
760
761 Subcommand ``reweight-all`` recalculates the weights for the tree to
762 ensure they sum correctly.
763
764 Usage::
765
766 ceph osd crush reweight-all
767
768 Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
769 to <weight> in crush map
770
771 Usage::
772
773 ceph osd crush reweight-subtree <name> <weight>
774
775 Subcommand ``rm`` removes <name> from crush map (everywhere, or just at
776 <ancestor>).
777
778 Usage::
779
780 ceph osd crush rm <name> {<ancestor>}
781
782 Subcommand ``rule`` is used for creating crush rules. It uses some additional
783 subcommands.
784
785 Subcommand ``create-erasure`` creates crush rule <name> for erasure coded pool
786 created with <profile> (default default).
787
788 Usage::
789
790 ceph osd crush rule create-erasure <name> {<profile>}
791
792 Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
793 replicate across buckets of type <type>, using a choose mode of <firstn|indep>
794 (default firstn; indep best for erasure pools).
795
796 Usage::
797
798 ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
799
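For example, to create a rule named ``myrule`` (an illustrative name) that starts
at the conventional ``default`` root and replicates across hosts, one might run::

    ceph osd crush rule create-simple myrule default host firstn
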
800 Subcommand ``dump`` dumps crush rule <name> (default all).
801
802 Usage::
803
804 ceph osd crush rule dump {<name>}
805
806 Subcommand ``ls`` lists crush rules.
807
808 Usage::
809
810 ceph osd crush rule ls
811
812 Subcommand ``rm`` removes crush rule <name>.
813
814 Usage::
815
816 ceph osd crush rule rm <name>
817
818 Subcommand ``set``, used alone, sets the crush map from the input file.
819
820 Usage::
821
822 ceph osd crush set
823
824 Subcommand ``set`` with osdname/osd.id updates the crushmap position and weight
825 for <name> to <weight> with location <args>.
826
827 Usage::
828
829 ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
830
831 Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
832 tunable that can be set is straw_calc_version.
833
834 Usage::
835
836 ceph osd crush set-tunable straw_calc_version <value>
837
838 Subcommand ``show-tunables`` shows current crush tunables.
839
840 Usage::
841
842 ceph osd crush show-tunables
843
844 Subcommand ``tree`` shows the crush buckets and items in a tree view.
845
846 Usage::
847
848 ceph osd crush tree
849
850 Subcommand ``tunables`` sets crush tunables values to <profile>.
851
852 Usage::
853
854 ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default
855
856 Subcommand ``unlink`` unlinks <name> from crush map (everywhere, or just at
857 <ancestor>).
858
859 Usage::
860
861 ceph osd crush unlink <name> {<ancestor>}
862
863 Subcommand ``df`` shows OSD utilization
864
865 Usage::
866
867 ceph osd df {plain|tree}
868
869 Subcommand ``deep-scrub`` initiates deep scrub on specified osd.
870
871 Usage::
872
873 ceph osd deep-scrub <who>
874
875 Subcommand ``down`` sets osd(s) <id> [<id>...] down.
876
877 Usage::
878
879 ceph osd down <ids> [<ids>...]
880
881 Subcommand ``dump`` prints summary of OSD map.
882
883 Usage::
884
885 ceph osd dump {<int[0-]>}
886
887 Subcommand ``erasure-code-profile`` is used for managing the erasure code
888 profiles. It uses some additional subcommands.
889
890 Subcommand ``get`` gets erasure code profile <name>.
891
892 Usage::
893
894 ceph osd erasure-code-profile get <name>
895
896 Subcommand ``ls`` lists all erasure code profiles.
897
898 Usage::
899
900 ceph osd erasure-code-profile ls
901
902 Subcommand ``rm`` removes erasure code profile <name>.
903
904 Usage::
905
906 ceph osd erasure-code-profile rm <name>
907
908 Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
909 pairs. Add ``--force`` at the end to override an existing profile (this is risky).
910
911 Usage::
912
913 ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
914
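For example, to create an illustrative profile named ``myprofile`` with 4 data
chunks, 2 coding chunks and a host failure domain, one might run::

    ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host
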
915 Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.
916
917 Usage::
918
919 ceph osd find <int[0-]>
920
921 Subcommand ``getcrushmap`` gets CRUSH map.
922
923 Usage::
924
925 ceph osd getcrushmap {<int[0-]>}
926
927 Subcommand ``getmap`` gets OSD map.
928
929 Usage::
930
931 ceph osd getmap {<int[0-]>}
932
933 Subcommand ``getmaxosd`` shows largest OSD id.
934
935 Usage::
936
937 ceph osd getmaxosd
938
939 Subcommand ``in`` sets osd(s) <id> [<id>...] in.
940
941 Usage::
942
943 ceph osd in <ids> [<ids>...]
944
945 Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
946 MORE REPLICAS EXIST, BE CAREFUL.
947
948 Usage::
949
950 ceph osd lost <int[0-]> {--yes-i-really-mean-it}
951
952 Subcommand ``ls`` shows all OSD ids.
953
954 Usage::
955
956 ceph osd ls {<int[0-]>}
957
958 Subcommand ``lspools`` lists pools.
959
960 Usage::
961
962 ceph osd lspools {<int>}
963
964 Subcommand ``map`` finds pg for <object> in <pool>.
965
966 Usage::
967
968 ceph osd map <poolname> <objectname>
969
970 Subcommand ``metadata`` fetches metadata for osd <id>.
971
972 Usage::
973
974 ceph osd metadata {int[0-]} (default all)
975
976 Subcommand ``out`` sets osd(s) <id> [<id>...] out.
977
978 Usage::
979
980 ceph osd out <ids> [<ids>...]
981
982 Subcommand ``ok-to-stop`` checks whether the list of OSD(s) can be
983 stopped without immediately making data unavailable. That is, all
984 data should remain readable and writeable, although data redundancy
985 may be reduced as some PGs may end up in a degraded (but active)
986 state. It will return a success code if it is okay to stop the
987 OSD(s), or an error code and informative message if it is not or if no
988 conclusion can be drawn at the current time.
989
990 Usage::
991
992 ceph osd ok-to-stop <id> [<ids>...]
993
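For example, to check whether two hypothetical OSDs could be taken down together
for maintenance, one might run::

    ceph osd ok-to-stop 1 2
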
994 Subcommand ``pause`` pauses osd.
995
996 Usage::
997
998 ceph osd pause
999
1000 Subcommand ``perf`` prints dump of OSD perf summary stats.
1001
1002 Usage::
1003
1004 ceph osd perf
1005
1006 Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
1007 only).
1008
1009 Usage::
1010
1011 ceph osd pg-temp <pgid> {<id> [<id>...]}
1012
1013 Subcommand ``force-create-pg`` forces creation of pg <pgid>.
1014
1015 Usage::
1016
1017 ceph osd force-create-pg <pgid>
1018
1019
1020 Subcommand ``pool`` is used for managing data pools. It uses some additional
1021 subcommands.
1022
1023 Subcommand ``create`` creates pool.
1024
1025 Usage::
1026
1027 ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
1028 {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>}
1029
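For example, to create an illustrative replicated pool named ``mypool`` with 64
placement groups, one might run::

    ceph osd pool create mypool 64 64 replicated
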
1030 Subcommand ``delete`` deletes pool.
1031
1032 Usage::
1033
1034 ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}
1035
1036 Subcommand ``get`` gets pool parameter <var>.
1037
1038 Usage::
1039
1040 ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed
1041
1042 Only for tiered pools::
1043
1044 ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
1045 target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
1046 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
1047 min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n
1048
1049 Only for erasure coded pools::
1050
1051 ceph osd pool get <poolname> erasure_code_profile
1052
1053 Use ``all`` to get all pool parameters that apply to the pool's type::
1054
1055 ceph osd pool get <poolname> all
1056
1057 Subcommand ``get-quota`` obtains object or byte limits for pool.
1058
1059 Usage::
1060
1061 ceph osd pool get-quota <poolname>
1062
1063 Subcommand ``ls`` lists pools.
1064
1065 Usage::
1066
1067 ceph osd pool ls {detail}
1068
1069 Subcommand ``mksnap`` makes snapshot <snap> in <pool>.
1070
1071 Usage::
1072
1073 ceph osd pool mksnap <poolname> <snap>
1074
1075 Subcommand ``rename`` renames <srcpool> to <destpool>.
1076
1077 Usage::
1078
1079 ceph osd pool rename <poolname> <poolname>
1080
1081 Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.
1082
1083 Usage::
1084
1085 ceph osd pool rmsnap <poolname> <snap>
1086
1087 Subcommand ``set`` sets pool parameter <var> to <val>.
1088
1089 Usage::
1090
1091 ceph osd pool set <poolname> size|min_size|pg_num|
1092 pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
1093 hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
1094 target_max_bytes|target_max_objects|cache_target_dirty_ratio|
1095 cache_target_dirty_high_ratio|
1096 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
1097 min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
1098 hit_set_search_last_n
1099 <val> {--yes-i-really-mean-it}
1100
1101 Subcommand ``set-quota`` sets object or byte limit on pool.
1102
1103 Usage::
1104
1105 ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
1106
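For example, to cap an illustrative pool named ``mypool`` at 10000 objects, one
might run::

    ceph osd pool set-quota mypool max_objects 10000

Setting the quota value to 0 removes it.
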
1107 Subcommand ``stats`` obtains stats from all pools, or from the specified pool.
1108
1109 Usage::
1110
1111 ceph osd pool stats {<name>}
1112
1113 Subcommand ``application`` is used for adding an annotation to the given
1114 pool. By default, the possible applications are object, block, and file
1115 storage (corresponding app-names are "rgw", "rbd", and "cephfs"). However,
1116 there might be other applications as well. Based on the application, there
1117 may or may not be some processing conducted.
1118
1119 Subcommand ``disable`` disables the given application on the given pool.
1120
1121 Usage::
1122
1123 ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}
1124
1125 Subcommand ``enable`` adds an annotation to the given pool for the mentioned
1126 application.
1127
1128 Usage::
1129
1130 ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}
1131
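For example, to tag an illustrative pool named ``mypool`` for use by RBD, one
might run::

    ceph osd pool application enable mypool rbd
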
1132 Subcommand ``get`` displays the value for the given key that is associated
1133 with the given application of the given pool. Not passing the optional
1134 arguments would display all key-value pairs for all applications for all
1135 pools.
1136
1137 Usage::
1138
1139 ceph osd pool application get {<pool-name>} {<app>} {<key>}
1140
1141 Subcommand ``rm`` removes the key-value pair for the given key in the given
1142 application of the given pool.
1143
1144 Usage::
1145
1146 ceph osd pool application rm <pool-name> <app> <key>
1147
1148 Subcommand ``set`` associates or updates, if it already exists, a key-value
1149 pair with the given application for the given pool.
1150
1151 Usage::
1152
1153 ceph osd pool application set <pool-name> <app> <key> <value>
1154
1155 Subcommand ``primary-affinity`` adjusts the osd primary-affinity, in the range
1156 0.0 <= <weight> <= 1.0.
1157
1158 Usage::
1159
1160 ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
1161
1162 Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
1163 only).
1164
1165 Usage::
1166
1167 ceph osd primary-temp <pgid> <id>
1168
1169 Subcommand ``repair`` initiates repair on a specified osd.
1170
1171 Usage::
1172
1173 ceph osd repair <who>
1174
1175 Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.
1176
1177 Usage::
1178
1179 ceph osd reweight <int[0-]> <float[0.0-1.0]>
1180
1181 Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
1182 [overload-percentage-for-consideration, default 120].
1183
1184 Usage::
1185
1186 ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
1187 {--no-increasing}
1188
1189 Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
1190 [overload-percentage-for-consideration, default 120].
1191
1192 Usage::
1193
1194 ceph osd reweight-by-utilization {<int[100-]>}
1195 {--no-increasing}
1196
1197 Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.
1198
1199
1200 Usage::
1201
1202 ceph osd rm <ids> [<ids>...]
1203
1204 Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
1205 entity's keys and all of its dm-crypt and daemon-private config key
1206 entries.
1207
1208 This command will not remove the OSD from crush, nor will it remove the
1209 OSD from the OSD map. Instead, once the command successfully completes,
1210 the OSD will be shown as *destroyed* in the OSD map.
1211
1212 In order to mark an OSD as destroyed, the OSD must first be marked as
1213 **lost**.
1214
1215 Usage::
1216
1217 ceph osd destroy <id> {--yes-i-really-mean-it}
1218
1219
1220 Subcommand ``purge`` performs a combination of ``osd destroy``,
1221 ``osd rm`` and ``osd crush remove``.
1222
1223 Usage::
1224
1225 ceph osd purge <id> {--yes-i-really-mean-it}
1226
1227 Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
1228 destroy an OSD without reducing overall data redundancy or durability.
1229 It will return a success code if it is definitely safe, or an error
1230 code and informative message if it is not or if no conclusion can be
1231 drawn at the current time.
1232
1233 Usage::
1234
1235 ceph osd safe-to-destroy <id> [<ids>...]
1236
1237 Subcommand ``scrub`` initiates scrub on specified osd.
1238
1239 Usage::
1240
1241 ceph osd scrub <who>
1242
1243 Subcommand ``set`` sets cluster-wide <flag> by updating OSD map.
1244 The ``full`` flag is not honored anymore since the Mimic release, and
1245 ``ceph osd set full`` is not supported in the Octopus release.
1246
1247 Usage::
1248
1249 ceph osd set pause|noup|nodown|noout|noin|nobackfill|
1250 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1251
1252 Subcommand ``setcrushmap`` sets crush map from input file.
1253
1254 Usage::
1255
1256 ceph osd setcrushmap
1257
1258 Subcommand ``setmaxosd`` sets new maximum osd value.
1259
1260 Usage::
1261
1262 ceph osd setmaxosd <int[0-]>
1263
1264 Subcommand ``set-require-min-compat-client`` enforces that the cluster remains backward
1265 compatible with the specified client version. This subcommand prevents you from
1266 making any changes (e.g., crush tunables, or using new features) that
1267 would violate the current setting. Please note, this subcommand will fail if
1268 any connected daemon or client is not compatible with the features offered by
1269 the given <version>. To see the features and releases of all clients connected
1270 to the cluster, please see `ceph features`_.
1271
1272 Usage::
1273
1274 ceph osd set-require-min-compat-client <version>
1275
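For example, to require that all clients speak at least the luminous protocol
(e.g. before enabling upmap-based balancing), one might run::

    ceph osd set-require-min-compat-client luminous
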
1276 Subcommand ``stat`` prints summary of OSD map.
1277
1278 Usage::
1279
1280 ceph osd stat
1281
1282 Subcommand ``tier`` is used for managing tiers. It uses some additional
1283 subcommands.
1284
1285 Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
1286 (the first one).
1287
1288 Usage::
1289
1290 ceph osd tier add <poolname> <poolname> {--force-nonempty}
1291
1292 Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
1293 to existing pool <pool> (the first one).
1294
1295 Usage::
1296
1297 ceph osd tier add-cache <poolname> <poolname> <int[0-]>
1298
1299 Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.
1300
1301 Usage::
1302
1303 ceph osd tier cache-mode <poolname> writeback|readproxy|readonly|none
1304
1305 Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
1306 <pool> (the first one).
1307
1308 Usage::
1309
1310 ceph osd tier remove <poolname> <poolname>
1311
1312 Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.
1313
1314 Usage::
1315
1316 ceph osd tier remove-overlay <poolname>
1317
1318 Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
1319 <overlaypool>.
1320
1321 Usage::
1322
1323 ceph osd tier set-overlay <poolname> <poolname>
1324
1325 Subcommand ``tree`` prints OSD tree.
1326
1327 Usage::
1328
1329 ceph osd tree {<int[0-]>}
1330
1331 Subcommand ``unpause`` unpauses osd.
1332
1333 Usage::
1334
1335 ceph osd unpause
1336
1337 Subcommand ``unset`` unsets cluster-wide <flag> by updating OSD map.
1338
1339 Usage::
1340
1341 ceph osd unset pause|noup|nodown|noout|noin|nobackfill|
1342 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1343
1344
1345 pg
1346 --
1347
1348 Manage the placement groups (PGs) in the OSDs. It uses some
1349 additional subcommands.
1350
1351 Subcommand ``debug`` shows debug info about pgs.
1352
1353 Usage::
1354
1355 ceph pg debug unfound_objects_exist|degraded_pgs_exist
1356
1357 Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.
1358
1359 Usage::
1360
1361 ceph pg deep-scrub <pgid>
1362
1363 Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid
1364 with plain).
1365
1366 Usage::
1367
1368 ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1369
1370 Subcommand ``dump_json`` shows human-readable version of pg map in json only.
1371
1372 Usage::
1373
1374 ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1375
1376 Subcommand ``dump_pools_json`` shows pg pools info in json only.
1377
1378 Usage::
1379
1380 ceph pg dump_pools_json
1381
1382 Subcommand ``dump_stuck`` shows information about stuck pgs.
1383
1384 Usage::
1385
1386 ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
1387 {<int>}
1388
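For example, to list PGs that have been stuck in the ``inactive`` state for at
least 300 seconds (the threshold is illustrative), one might run::

    ceph pg dump_stuck inactive 300
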
1389 Subcommand ``getmap`` gets binary pg map to -o/stdout.
1390
1391 Usage::
1392
1393 ceph pg getmap
1394
1395 Subcommand ``ls`` lists PGs matching the specified pool, OSD, and state.
1396
1397 Usage::
1398
1399 ceph pg ls {<int>} {<pg-state> [<pg-state>...]}
1400
1401 Subcommand ``ls-by-osd`` lists pg on osd [osd]
1402
1403 Usage::
1404
1405 ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
1406 {<pg-state> [<pg-state>...]}
1407
1408 Subcommand ``ls-by-pool`` lists pg with pool = [poolname]
1409
1410 Usage::
1411
1412 ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}
1413
1414 Subcommand ``ls-by-primary`` lists pg with primary = [osd]
1415
1416 Usage::
1417
1418 ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
1419 {<pg-state> [<pg-state>...]}
1420
1421 Subcommand ``map`` shows mapping of pg to osds.
1422
1423 Usage::
1424
1425 ceph pg map <pgid>
1426
1427 Subcommand ``repair`` starts repair on <pgid>.
1428
1429 Usage::
1430
1431 ceph pg repair <pgid>
1432
1433 Subcommand ``scrub`` starts scrub on <pgid>.
1434
1435 Usage::
1436
1437 ceph pg scrub <pgid>
1438
1439 Subcommand ``stat`` shows placement group status.
1440
1441 Usage::
1442
1443 ceph pg stat
1444
1445
1446 quorum
1447 ------
1448
1449 Cause a specific MON to enter or exit quorum.
1450
1451 Usage::
1452
1453 ceph tell mon.<id> quorum enter|exit
1454
1455 quorum_status
1456 -------------
1457
1458 Reports status of monitor quorum.
1459
1460 Usage::
1461
1462 ceph quorum_status
1463
1464
1465 report
1466 ------
1467
1468 Reports the full status of the cluster, with optional title tag strings.
1469
1470 Usage::
1471
1472 ceph report {<tags> [<tags>...]}
1473
1474
1475 status
1476 ------
1477
1478 Shows cluster status.
1479
1480 Usage::
1481
1482 ceph status
1483
1484
1485 tell
1486 ----
1487
1488 Sends a command to a specific daemon.
1489
1490 Usage::
1491
1492 ceph tell <name (type.id)> <command> [options...]
1493
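For example, to run the built-in write benchmark on a hypothetical ``osd.0``, one
might run::

    ceph tell osd.0 bench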
1494
1495 List all available commands.
1496
1497 Usage::
1498
1499 ceph tell <name (type.id)> help
1500
1501 version
1502 -------
1503
1504 Show the mon daemon version.
1505
1506 Usage::
1507
1508 ceph version
1509
1510 Options
1511 =======
1512
1513 .. option:: -i infile
1514
1515 will specify an input file to be passed along as a payload with the
1516 command to the monitor cluster. This is only used for specific
1517 monitor commands.
1518
1519 .. option:: -o outfile
1520
1521 will write any payload returned by the monitor cluster with its
1522 reply to outfile. Only specific monitor commands (e.g. osd getmap)
1523 return a payload.
1524
1525 .. option:: --setuser user
1526
1527 will apply the appropriate user ownership to the file specified by
1528 the option '-o'.
1529
1530 .. option:: --setgroup group
1531
1532 will apply the appropriate group ownership to the file specified by
1533 the option '-o'.
1534
1535 .. option:: -c ceph.conf, --conf=ceph.conf
1536
1537 Use ceph.conf configuration file instead of the default
1538 ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.
1539
1540 .. option:: --id CLIENT_ID, --user CLIENT_ID
1541
1542 Client id for authentication.
1543
1544 .. option:: --name CLIENT_NAME, -n CLIENT_NAME
1545
1546 Client name for authentication.
1547
1548 .. option:: --cluster CLUSTER
1549
1550 Name of the Ceph cluster.
1551
1552 .. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
1553
1554 Submit admin-socket commands via admin sockets in /var/run/ceph.
1555
1556 .. option:: --admin-socket ADMIN_SOCKET_NOPE
1557
1558 You probably mean --admin-daemon
1559
1560 .. option:: -s, --status
1561
1562 Show cluster status.
1563
1564 .. option:: -w, --watch
1565
1566 Watch live cluster changes on the default 'cluster' channel
1567
1568 .. option:: -W, --watch-channel
1569
1570 Watch live cluster changes on any channel (cluster, audit, cephadm, or * for all)
1571
1572 .. option:: --watch-debug
1573
1574 Watch debug events.
1575
1576 .. option:: --watch-info
1577
1578 Watch info events.
1579
1580 .. option:: --watch-sec
1581
1582 Watch security events.
1583
1584 .. option:: --watch-warn
1585
1586 Watch warning events.
1587
1588 .. option:: --watch-error
1589
1590 Watch error events.
1591
1592 .. option:: --version, -v
1593
1594 Display version.
1595
1596 .. option:: --verbose
1597
1598 Make verbose.
1599
1600 .. option:: --concise
1601
1602 Make less verbose.
1603
1604 .. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format
1605
1606 Format of output.
1607
1608 .. option:: --connect-timeout CLUSTER_TIMEOUT
1609
1610 Set a timeout for connecting to the cluster.
1611
1612 .. option:: --no-increasing
1613
1614 ``--no-increasing`` is off by default, so increasing the osd weight is allowed
1615 when using the ``reweight-by-utilization`` or ``test-reweight-by-utilization`` commands.
1616 If this option is used with these commands, the osd weight will not be increased
1617 even if the osd is under-utilized.
1618
1619 .. option:: --block
1620
1621 block until completion (scrub and deep-scrub only)
1622
1623 Availability
1624 ============
1625
1626 :program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
1627 the Ceph documentation at http://ceph.com/docs for more information.
1628
1629
1630 See also
1631 ========
1632
1633 :doc:`ceph-mon <ceph-mon>`\(8),
1634 :doc:`ceph-osd <ceph-osd>`\(8),
1635 :doc:`ceph-mds <ceph-mds>`\(8)