:orphan:

==================================
 ceph -- ceph administration tool
==================================

.. program:: ceph

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *ls* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *ls* \| *dump* \| *set* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **scrub**

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility used for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands for deploying monitors,
OSDs, placement groups, and MDS daemons, and for overall maintenance and
administration of the cluster.

Commands
========

auth
----

Manage authentication keys. It is used for adding, removing, exporting
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key if no input is given, along with any
caps specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the
master keyring if none is given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes a keyring file with the requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, along with
any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}
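
For example, to create a hypothetical ``client.foo`` user with read access to
the monitors and read/write access to an assumed pool named ``bar``::

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=bar'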

Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from the
system/caps pairs specified in the command. If the key already exists, any
given caps must match the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads keyring from input file.

Usage::

    ceph auth import

Subcommand ``ls`` lists authentication state.

Usage::

    ceph auth ls

Subcommand ``print-key`` displays the requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key.

Usage::

    ceph auth print_key <entity>


compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact


config-key
----------

Manage configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes a configuration key.

Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the value of a configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``ls`` lists configuration keys.

Usage::

    ceph config-key ls

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``set`` sets a configuration key to a value.

Usage::

    ceph config-key set <key> {<val>}
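
For example, storing an arbitrary value under a hypothetical key name::

    ceph config-key set mykey myvalue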


daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help


daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
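
For example, to sample the counters of a hypothetical ``osd.0`` every two
seconds, five times (run on the host that holds the daemon's admin socket)::

    ceph daemonperf osd.0 2 5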


df
--

Show the cluster's free space status.

Usage::

    ceph df {detail}

.. _ceph features:

features
--------

Show the releases and features of all daemons and clients connected to the
cluster, along with a count of each in buckets grouped by the corresponding
features/releases. Each release of Ceph supports a different set of features,
expressed by the features bitmask. New cluster features require that clients
support the feature, or else they are not allowed to connect to the cluster.
As new features or capabilities are enabled after an upgrade, older clients
are prevented from connecting.

Usage::

    ceph features

fs
--

Manage cephfs filesystems. It uses some additional subcommands.

Subcommand ``ls`` lists filesystems.

Usage::

    ceph fs ls

Subcommand ``new`` creates a new filesystem using the named pools <metadata>
and <data>.

Usage::

    ceph fs new <fs_name> <metadata> <data>
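
For example, assuming metadata and data pools named ``cephfs_metadata`` and
``cephfs_data`` have already been created::

    ceph fs new cephfs cephfs_metadata cephfs_data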

Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named filesystem.

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}


fsid
----

Show the cluster's FSID/UUID.

Usage::

    ceph fsid


health
------

Show the cluster's health.

Usage::

    ceph health {detail}


heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats


injectargs
----------

Inject configuration arguments into the monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]
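
For example, to raise the monitor debug level at runtime (the option and value
are illustrative)::

    ceph injectargs '--debug-mon 10'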


log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]


mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``deactivate`` stops an mds.

Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces an mds to status fail.

Usage::

    ceph mds fail <who>

Subcommand ``rm`` removes an inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets the mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``tell`` sends a command to a particular mds.

Usage::

    ceph mds tell <who> <args> [<args>...]

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds a new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>
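
For example, to add a monitor named ``c`` at a hypothetical address::

    ceph mon add c 10.0.0.3:6789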

Subcommand ``dump`` dumps the formatted monmap (optionally from epoch).

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets the monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes the monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mon_status
----------

Reports the status of the monitors.

Usage::

    ceph mon_status

mgr
---

Ceph manager daemon configuration and management.

Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby manager daemons.

Usage::

    ceph mgr dump

Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon, a standby
will take its place.

Usage::

    ceph mgr fail <name>

Subcommand ``module ls`` will list currently enabled manager modules (plugins).

Usage::

    ceph mgr module ls

Subcommand ``module enable`` will enable a manager module. Available modules are
included in the MgrMap and visible via ``mgr dump``.

Usage::

    ceph mgr module enable <module>
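
For example, to enable the dashboard module::

    ceph mgr module enable dashboard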

Subcommand ``module disable`` will disable an active manager module.

Usage::

    ceph mgr module disable <module>

Subcommand ``metadata`` will report metadata about all manager daemons or, if the
name is specified, a single manager daemon.

Usage::

    ceph mgr metadata [name]

Subcommand ``versions`` will report a count of running daemon versions.

Usage::

    ceph mgr versions

Subcommand ``count-metadata`` will report a count of any daemon metadata field.

Usage::

    ceph mgr count-metadata <field>


osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire>
seconds from now).

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
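
For example, to blacklist a hypothetical client address (the nonce after the
slash is illustrative) for ten minutes::

    ceph osd blacklist add 192.168.0.10:0/3710147553 600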

Subcommand ``ls`` shows blacklisted clients.

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from the blacklist.

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers.

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates a new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand ``new`` should instead be used.

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` reuses a previously destroyed OSD *id*. The new OSD will
have the specified *uuid*, and the command expects a JSON file containing
the base64 cephx key for auth entity *client.osd.<id>*, as well as an optional
base64 cephx key for dm-crypt lockbox access and a dm-crypt key. Specifying
a dm-crypt key requires specifying the accompanying lockbox cephx key.

Usage::

    ceph osd new {<id>} {<uuid>} -i {<secrets.json>}

The secrets JSON file is expected to take the following form::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg=="
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>"
    }


Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for <name>
with <weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
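
For example, to add ``osd.0`` with weight 1.0 under a hypothetical host bucket
``node1``::

    ceph osd crush add osd.0 1.0 host=node1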

Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name>
of type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates an entry or moves the existing entry for
<name> <weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps the crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable straw_calc_version.

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links the existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves the existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map.

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates a crush rule <name> for an erasure coded
pool created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates a crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set``, used alone, sets the crush map from the input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with an osdname/osd.id updates the crushmap position and
weight for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunable values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default
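
For example, to switch the cluster to the optimal tunables of the current
release (note that changing tunables may trigger data movement)::

    ceph osd crush tunables optimal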

Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization.

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates a deep scrub on the specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints a summary of the OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing the erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add ``--force`` at the end to override an existing profile (THIS IS RISKY).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets the CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets the OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows the largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks an osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds the pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>
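
For example, to show where a hypothetical object ``myobject`` in the ``rbd``
pool would be placed (the object does not need to exist)::

    ceph osd map rbd myobject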

Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``ok-to-stop`` checks whether the list of OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced as some PGs may end up in a degraded (but active)
state. It will return a success code if it is okay to stop the
OSD(s), or an error code and informative message if it is not or if no
conclusion can be drawn at the current time.

Usage::

    ceph osd ok-to-stop <id> [<ids>...]

Subcommand ``pause`` pauses the osds.

Usage::

    ceph osd pause

Subcommand ``perf`` prints a dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets the pg_temp mapping pgid:[<id> [<id>...]]
(developers only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``force-create-pg`` forces creation of pg <pgid>.

Usage::

    ceph osd force-create-pg <pgid>


Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates a pool.

Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<ruleset>} {<int>}
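
For example, to create a replicated pool under a hypothetical name ``mypool``
with 128 placement groups::

    ceph osd pool create mypool 128 128 replicated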

Subcommand ``delete`` deletes a pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for a pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools.

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes a snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets object or byte limit on a pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
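
For example, to cap a hypothetical pool ``mypool`` at 10 GiB::

    ceph osd pool set-quota mypool max_bytes 10737418240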

Subcommand ``stats`` obtains stats from all pools, or from a specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``primary-affinity`` adjusts the osd primary-affinity to <weight>,
in the range 0.0 <= <weight> <= 1.0.

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets the primary_temp mapping pgid:<id>|-1
(developers only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights an osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
    {--no-increasing}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-utilization {<int[100-]>}
    {--no-increasing}

Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will show marked as *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}


Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
destroy an OSD without reducing overall data redundancy or durability.
It will return a success code if it is definitely safe, or an error
code and informative message if it is not or if no conclusion can be
drawn at the current time.

Usage::

    ceph osd safe-to-destroy <id> [<ids>...]

Subcommand ``scrub`` initiates a scrub on the specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets the cluster-wide flag <key>.

Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent
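
For example, to prevent OSDs from being automatically marked out during
maintenance::

    ceph osd set noout

The flag can be cleared again with ``ceph osd unset noout``.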

Subcommand ``setcrushmap`` sets the crush map from the input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets a new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``set-require-min-compat-client`` enforces that the cluster remain
backward compatible with the specified client version. This subcommand prevents
you from making any changes (e.g., crush tunables, or using new features) that
would violate the current setting. Please note, this subcommand will fail if
any connected daemon or client is not compatible with the features offered by
the given <version>. To see the features and releases of all clients connected
to the cluster, please see `ceph features`_.

Usage::

    ceph osd set-require-min-compat-client <version>

Subcommand ``stat`` prints a summary of the OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
    readforward|readproxy
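
For example, assuming pools named ``cold-storage`` and ``hot-cache`` already
exist, a writeback cache tier could be attached with::

    ceph osd tier add cold-storage hot-cache
    ceph osd tier cache-mode hot-cache writeback

Client traffic is only redirected to the cache tier once an overlay is
configured with ``set-overlay`` (described below).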

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

Subcommand ``tree`` prints the OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses the osds.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets the cluster-wide flag <key>.

Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent


pg
--

Manage the placement groups in the OSDs. It uses some additional
subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts a deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of the pg map (only 'all'
valid with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_json`` shows a human-readable version of the pg map in json
only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
    {<int>}

Subcommand ``getmap`` gets the binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pgs with a specific pool, osd, or state.

Usage::

    ceph pg ls {<int>} {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-osd`` lists pgs on osd [osd].

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-pool`` lists pgs with pool = [poolname].

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {active|
    clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-primary`` lists pgs with primary = [osd].

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``map`` shows the mapping of a pg to osds.

Usage::

    ceph pg map <pgid>
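
For example, for a hypothetical placement group ``1.6c``::

    ceph pg map 1.6c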

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts a scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets the ratio at which pgs are considered full.

Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets the ratio at which pgs are considered
too full to backfill.

Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets the ratio at which pgs are considered
nearly full.

Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat


quorum
------

Causes a MON to enter or exit quorum.

Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports the status of the monitor quorum.

Usage::

    ceph quorum_status


report
------

Reports the full status of the cluster, with optional title tag strings.

Usage::

    ceph report {<tags> [<tags>...]}


scrub
-----

Scrubs the monitor stores.

Usage::

    ceph scrub


status
------

Shows cluster status.

Usage::

    ceph status


sync force
----------

Forces a sync of, and clears, the monitor store.

Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}


tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <args> [<args>...]
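
For example, assuming an OSD with id 0 exists, its version can be queried with::

    ceph tell osd.0 version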


To list all available commands for a daemon:

Usage::

    ceph tell <name (type.id)> help

version
-------

Show the mon daemon version.

Usage::

    ceph version

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon.

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes.

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so the ``reweight-by-utilization``
   and ``test-reweight-by-utilization`` commands are allowed to increase osd
   weights. When this option is passed to those commands, osd weights are
   never increased, even if an osd is underutilized.


Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)