:orphan:

==================================
 ceph -- ceph administration tool
==================================

.. program:: ceph

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *ls* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *ls* \| *dump* \| *set* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **features**

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mgr** [ *dump* \| *fail* \| *module* \| *metadata* \| *versions* \| *count-metadata* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_backfillfull_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **scrub**

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands for deploying
monitors, OSDs, placement groups, and MDS daemons, and for overall
maintenance and administration of the cluster.

Commands
========

auth
----

Manage authentication keys. Used to add, remove, export, or update
authentication keys for a particular entity such as a monitor or OSD. It
uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key if no input is given, along with any
caps specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the
master keyring if none is given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes a keyring file with the requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, along
with any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}
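
For example, to create a keyring for a hypothetical ``client.foo`` user
restricted to the ``rbd`` pool (entity name, caps and output path are
illustrative)::

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rwx pool=rbd' -o /etc/ceph/ceph.client.foo.keyring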

Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from
system/caps pairs specified in the command. If the key already exists, any
given caps must match the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads keyring from input file.

Usage::

    ceph auth import

Subcommand ``ls`` lists authentication state.

Usage::

    ceph auth ls

Subcommand ``print-key`` displays the requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key.

Usage::

    ceph auth print_key <entity>


compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact


config-key
----------

Manage cluster configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes a configuration key.

Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets a configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``ls`` lists configuration keys.

Usage::

    ceph config-key ls

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``set`` sets a configuration key to a value.

Usage::

    ceph config-key set <key> {<val>}
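
For example, to store a small piece of cluster-wide configuration and read
it back (key name and value are illustrative)::

    ceph config-key set mgr/dashboard/server_port 8080
    ceph config-key get mgr/dashboard/server_port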


daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help


daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
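
For example, to sample osd.0's counters every two seconds, ten times (daemon
name, interval and count are illustrative)::

    ceph daemonperf osd.0 2 10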


df
--

Show cluster's free space status.

Usage::

    ceph df {detail}

.. _ceph features:

features
--------

Show the releases and features of all daemons and clients connected to the
cluster, along with a count of each, grouped by release and feature bitmask.
Each release of Ceph supports a different set of features, expressed by the
features bitmask. New cluster features require client support; clients that
lack a required feature are not allowed to connect. As new features or
capabilities are enabled after an upgrade, older clients may therefore be
prevented from connecting.

Usage::

    ceph features


fs
--

Manage cephfs filesystems. It uses some additional subcommands.

Subcommand ``ls`` lists filesystems.

Usage::

    ceph fs ls

Subcommand ``new`` creates a new filesystem using the named pools <metadata>
and <data>.

Usage::

    ceph fs new <fs_name> <metadata> <data>
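
For example, to create a filesystem backed by two freshly created pools
(pool names and PG counts are illustrative)::

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 64
    ceph fs new cephfs cephfs_metadata cephfs_data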

Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named filesystem.

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}


fsid
----

Show cluster's FSID/UUID.

Usage::

    ceph fsid


health
------

Show cluster's health.

Usage::

    ceph health {detail}


heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats


injectargs
----------

Inject configuration arguments into the monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]
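
For example, to raise the monitor debug level at runtime (option and level
are illustrative)::

    ceph injectargs '--debug-mon 10'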


log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]


mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``deactivate`` stops an mds.

Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces an mds to status fail.

Usage::

    ceph mds fail <who>

Subcommand ``rm`` removes an inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``tell`` sends a command to a particular mds.

Usage::

    ceph mds tell <who> <args> [<args>...]

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds a new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>
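
For example, to add a third monitor (name and address are illustrative)::

    ceph mon add c 10.0.0.3:6789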

Subcommand ``dump`` dumps the formatted monmap (optionally from epoch).

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets the monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes the monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mon_status
----------

Reports status of monitors.

Usage::

    ceph mon_status

mgr
---

Ceph manager daemon configuration and management.

Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby manager daemons.

Usage::

    ceph mgr dump

Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon a standby
will take its place.

Usage::

    ceph mgr fail <name>

Subcommand ``module ls`` will list currently enabled manager modules
(plugins).

Usage::

    ceph mgr module ls

Subcommand ``module enable`` will enable a manager module. Available modules
are included in MgrMap and visible via ``mgr dump``.

Usage::

    ceph mgr module enable <module>
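
For example, to enable the dashboard module (assuming it is present in your
build)::

    ceph mgr module enable dashboard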

Subcommand ``module disable`` will disable an active manager module.

Usage::

    ceph mgr module disable <module>

Subcommand ``metadata`` will report metadata about all manager daemons or, if
the name is specified, a single manager daemon.

Usage::

    ceph mgr metadata [name]

Subcommand ``versions`` will report a count of running daemon versions.

Usage::

    ceph mgr versions

Subcommand ``count-metadata`` will report a count of any daemon metadata
field.

Usage::

    ceph mgr count-metadata <field>


osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire>
seconds from now).

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
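
For example, to blacklist a client address for one hour (address and
duration are illustrative)::

    ceph osd blacklist add 192.168.0.10:0/3710147553 3600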

Subcommand ``ls`` shows blacklisted clients.

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from the blacklist.

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their
peers.

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates a new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand ``new`` should instead be used.

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
and the command expects a JSON file containing the base64 cephx key for auth
entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires
specifying the accompanying lockbox cephx key.

Usage::

    ceph osd new {<uuid>} {<id>} -i {<secrets.json>}

The secrets JSON file is optional, but if provided it is expected to follow
this format::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg=="
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>"
    }
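
For example, to recreate previously destroyed OSD 7 with a fresh UUID (id,
UUID and secrets path are illustrative)::

    ceph osd new $(uuidgen) 7 -i /tmp/osd7-secrets.json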

Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for <name>
with <weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
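
For example, to add osd.5 with weight 1.0 under a given host (weight and
location are illustrative)::

    ceph osd crush add osd.5 1.0 host=node2 root=default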

Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name>
of type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates an entry or moves the existing entry for
<name> <weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps the crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable ``straw_calc_version``.

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from the crush map (everywhere, or just
at <ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map.

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for an erasure coded
pool created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
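
For example, to create a rule that replicates across hosts under the default
root (rule name is illustrative)::

    ceph osd crush rule create-simple by-host default host firstn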

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set``, used alone, sets the crush map from the input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with an osdname/osd.id updates the crushmap position and
weight for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunables values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just
at <ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization.

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates a deep scrub on the specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints a summary of the OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Append ``--force`` to override an existing profile (this is risky).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
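
For example, to create a 4+2 profile with a host failure domain (profile
name and values are illustrative)::

    ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host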

Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets the CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets the OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows the largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks an osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds the pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>

Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``ok-to-stop`` checks whether the list of OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced as some PGs may end up in a degraded (but active)
state. It will return a success code if it is okay to stop the
OSD(s), or an error code and informative message if it is not or if no
conclusion can be drawn at the current time.

Usage::

    ceph osd ok-to-stop <id> [<ids>...]

Subcommand ``pause`` pauses osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``force-create-pg`` forces creation of pg <pgid>.

Usage::

    ceph osd force-create-pg <pgid>


Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates a pool.

Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<rule>} {<int>}
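
For example, to create a replicated pool with 128 placement groups (pool
name and PG count are illustrative)::

    ceph osd pool create mypool 128 128 replicated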

Subcommand ``delete`` deletes a pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_rule|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for a pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools.

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets object or byte limit on a pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
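
For example, to cap a pool at roughly 10 GiB (pool name and limit are
illustrative)::

    ceph osd pool set-quota mypool max_bytes 10737418240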

Subcommand ``stats`` obtains stats from all pools, or from the specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``primary-affinity`` adjusts osd primary-affinity to a value
between 0.0 and 1.0.

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
    {--no-increasing}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-utilization {<int[100-]>}
    {--no-increasing}

Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will show as marked *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}


Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
destroy an OSD without reducing overall data redundancy or durability.
It will return a success code if it is definitely safe, or an error
code and informative message if it is not or if no conclusion can be
drawn at the current time.

Usage::

    ceph osd safe-to-destroy <id> [<ids>...]

Subcommand ``scrub`` initiates a scrub on the specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets the cluster-wide OSD flag <key>.

Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent
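
For example, to prevent OSDs from being marked out during maintenance, and
to clear the flag afterwards::

    ceph osd set noout
    ceph osd unset noout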

Subcommand ``setcrushmap`` sets crush map from input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``set-require-min-compat-client`` enforces the cluster to be
backward compatible with the specified client version. This subcommand
prevents you from making any changes (e.g., crush tunables, or using new
features) that would violate the current setting. Note that this subcommand
will fail if any connected daemon or client is not compatible with the
features offered by the given <version>. To see the features and releases
of all clients connected to the cluster, please see `ceph features`_.

Usage::

    ceph osd set-require-min-compat-client <version>

Subcommand ``stat`` prints a summary of the OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool
<pool> (the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size
<size> to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
    readforward|readproxy

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base
pool <pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

Subcommand ``tree`` prints the OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` clears the cluster-wide OSD flag <key>.

Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent


pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts a deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of the pg map (only 'all'
valid with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_json`` shows human-readable version of the pg map in json
only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
    {<int>}
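
For example, to list pgs that have been stuck inactive for more than 300
seconds (threshold is illustrative)::

    ceph pg dump_stuck inactive 300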

Subcommand ``getmap`` gets binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pgs with the given pool, osd, or state.

Usage::

    ceph pg ls {<int>} {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-osd`` lists pgs on the given osd.

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-pool`` lists pgs in the given pool.

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {active|
    clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``ls-by-primary`` lists pgs whose primary is the given osd.

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
    {active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized [active|clean|down|replay|splitting|
    scrubbing|scrubq|degraded|inconsistent|peering|repair|
    recovery|backfill_wait|incomplete|stale|remapped|
    deep_scrub|backfill|backfill_toofull|recovery_wait|
    undersized...]}

Subcommand ``map`` shows the mapping of a pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts a scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets the ratio at which pgs are considered full.

Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets the ratio at which pgs are
considered too full to backfill.

Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets the ratio at which pgs are considered
nearly full.

Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat


quorum
------

Causes the MON to enter or exit quorum.

Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports status of monitor quorum.

Usage::

    ceph quorum_status


report
------

Reports full status of the cluster, optionally tagged with the given strings.

Usage::

    ceph report {<tags> [<tags>...]}


scrub
-----

Scrubs the monitor stores.

Usage::

    ceph scrub


status
------

Shows cluster status.

Usage::

    ceph status


sync force
----------

Forces a sync of and clears the monitor store.

Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}


tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <args> [<args>...]
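
For example, to raise the debug level of a single OSD at runtime (daemon
name and option are illustrative)::

    ceph tell osd.0 injectargs '--debug-osd 10'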

List all available commands.

Usage::

    ceph tell <name (type.id)> help

version
-------

Show mon daemon version.

Usage::

    ceph version

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon.

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes.

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so the ``reweight-by-utilization``
   and ``test-reweight-by-utilization`` commands may increase OSD weights.
   When this option is used with those commands, OSD weights will never be
   increased, even if an OSD is underutilized.


Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)