:orphan:

==================================
 ceph -- ceph administration tool
==================================

.. program:: ceph

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config** [ *dump* \| *ls* \| *help* \| *get* \| *show* \| *show-with-defaults* \| *set* \| *rm* \| *log* \| *reset* \| *assimilate-conf* \| *generate-minimal-conf* ] ...

| **ceph** **config-key** [ *rm* \| *exists* \| *get* \| *ls* \| *dump* \| *set* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *repaired* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *ls* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **pool** **application** [ *disable* \| *enable* \| *get* \| *rm* \| *set* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *stat* ] ...

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <command> [options...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility used for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands for deploying monitors,
OSDs, and placement groups, for managing the MDS, and for overall maintenance
and administration of the cluster.

Commands
========

auth
----

Manage authentication keys. It is used for adding, removing, exporting
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key if no input is given, together with any
caps specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the
master keyring if none is given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes a keyring file with the requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, together
with any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}

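For example, to create a key for a hypothetical client ``client.foo`` with
read access to the monitors and read/write access to a pool named ``mypool``
(both names are placeholders)::

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=mypool'
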
Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from the
system/caps pairs specified in the command. If the key already exists, any
given caps must match the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads a keyring from the input file.

Usage::

    ceph auth import

Subcommand ``ls`` lists authentication state.

Usage::

    ceph auth ls

Subcommand ``print-key`` displays the requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key.

Usage::

    ceph auth print_key <entity>


compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact


config
------

Configure the cluster. By default, Ceph daemons and clients retrieve their
configuration options from the monitor when they start, and are updated if any
of the tracked options change at run time. It uses the following additional
subcommands.

Subcommand ``dump`` to dump all options for the cluster

Usage::

    ceph config dump

Subcommand ``ls`` to list all option names for the cluster

Usage::

    ceph config ls

Subcommand ``help`` to describe the specified configuration option

Usage::

    ceph config help <option>

Subcommand ``get`` to dump the option(s) for the specified entity.

Usage::

    ceph config get <who> {<option>}

Subcommand ``show`` to display the running configuration of the specified
entity. Note that, unlike ``get``, which only shows the options managed by
the monitor, ``show`` displays all the configuration values actively in use.
These options are pulled from several sources: the compiled-in default values,
the monitor's configuration database, and the ``ceph.conf`` file on the host.
Options can also be overridden at runtime, so the configuration options shown
by ``show`` may differ from those shown by ``get``.

Usage::

    ceph config show {<who>}

Subcommand ``show-with-defaults`` to display the running configuration of the
specified entity along with the compiled-in defaults

Usage::

    ceph config show-with-defaults {<who>}

Subcommand ``set`` to set an option for one or more specified entities

Usage::

    ceph config set <who> <option> <value> {--force}

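For example, to raise the debug level for all OSDs (the option and value here
are illustrative)::

    ceph config set osd debug_osd 10
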
Subcommand ``rm`` to clear an option for one or more entities

Usage::

    ceph config rm <who> <option>

Subcommand ``log`` to show the recent history of config changes. If the
``count`` option is omitted, it defaults to 10.

Usage::

    ceph config log {<count>}

Subcommand ``reset`` to revert configuration to the specified historical version

Usage::

    ceph config reset <version>


Subcommand ``assimilate-conf`` to assimilate options from stdin, and return a
new, minimal conf file

Usage::

    ceph config assimilate-conf -i <input-config-path> > <output-config-path>
    ceph config assimilate-conf < <input-config-path>

Subcommand ``generate-minimal-conf`` to generate a minimal ``ceph.conf`` file,
which can be used for bootstrapping a daemon or a client.

Usage::

    ceph config generate-minimal-conf > <minimal-config-path>


config-key
----------

Manage configuration keys. Config-key is a general purpose key/value service
offered by the monitors. This service is mainly used by Ceph tools and daemons
for persisting various settings; among other things, ceph-mgr modules use it
for storing their options. It uses some additional subcommands.

Subcommand ``rm`` deletes a configuration key.

Usage::

    ceph config-key rm <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the value of a configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``ls`` lists configuration keys.

Usage::

    ceph config-key ls

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``set`` puts a configuration key and value.

Usage::

    ceph config-key set <key> {<val>}

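For example, to store an arbitrary value under a hypothetical key (both the
key name and the value here are placeholders)::

    ceph config-key set example/setting somevalue
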

daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help


daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]


df
--

Show the cluster's free space status.

Usage::

    ceph df {detail}

.. _ceph features:

features
--------

Show the releases and features of all daemons and clients connected to the
cluster, along with counts of them in each bucket, grouped by the
corresponding feature set or release. Each release of Ceph supports a
different set of features, expressed by a features bitmask. New cluster
features require client support, and clients that lack a required feature
are not allowed to connect. As new features or capabilities are enabled
after an upgrade, older clients are prevented from connecting.

Usage::

    ceph features

fs
--

Manage CephFS file systems. It uses some additional subcommands.

Subcommand ``ls`` to list file systems

Usage::

    ceph fs ls

Subcommand ``new`` to make a new file system using named pools <metadata> and <data>

Usage::

    ceph fs new <fs_name> <metadata> <data>

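For example, to create a file system named ``cephfs`` from two existing pools
(the pool names here are placeholders)::

    ceph fs new cephfs cephfs_metadata cephfs_data
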
Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` to disable the named file system

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}


fsid
----

Show the cluster's FSID/UUID.

Usage::

    ceph fsid


health
------

Show the cluster's health.

Usage::

    ceph health {detail}


heap
----

Show heap usage info (available only if Ceph was compiled with tcmalloc)

Usage::

    ceph tell <name (type.id)> heap dump|start_profiler|stop_profiler|stats

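For example, to show heap stats for a hypothetical daemon ``osd.0``::

    ceph tell osd.0 heap stats
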
Subcommand ``release`` makes TCMalloc release no-longer-used memory back to
the kernel at once.

Usage::

    ceph tell <name (type.id)> heap release

Subcommand ``(get|set)_release_rate`` gets or sets the TCMalloc memory release
rate. TCMalloc releases no-longer-used memory back to the kernel gradually;
the rate controls how quickly this happens. Increase this setting to make
TCMalloc return unused memory more frequently: 0 means never return memory to
the system, and 1 means wait for 1000 pages after releasing a page to the
system. The default is ``1.0``.

Usage::

    ceph tell <name (type.id)> heap get_release_rate|set_release_rate {<val>}

injectargs
----------

Inject configuration arguments into the monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]

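For example, to raise the monitor's messenger debug level at runtime (the
option and value are illustrative)::

    ceph injectargs '--debug-ms 1'
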

log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]


mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows MDS compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``fail`` forces an MDS to the failed state.

Usage::

    ceph mds fail <role|gid>

Subcommand ``rm`` removes an inactive MDS.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed MDS.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets the state of the MDS with <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``repaired`` marks a damaged MDS rank as no longer damaged.

Usage::

    ceph mds repaired <role>

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds a new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>

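For example, to add a monitor named ``b`` at a hypothetical address::

    ceph mon add b 192.168.0.2:6789
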
Subcommand ``dump`` dumps the formatted monmap (optionally from a given epoch)

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets the monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes the monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mgr
---

Ceph manager daemon configuration and management.

Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby manager daemons.

Usage::

    ceph mgr dump

Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon, a standby
will take its place.

Usage::

    ceph mgr fail <name>

Subcommand ``module ls`` will list currently enabled manager modules (plugins).

Usage::

    ceph mgr module ls

Subcommand ``module enable`` will enable a manager module. Available modules are
included in MgrMap and visible via ``mgr dump``.

Usage::

    ceph mgr module enable <module>

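For example, to enable the dashboard module::

    ceph mgr module enable dashboard
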
Subcommand ``module disable`` will disable an active manager module.

Usage::

    ceph mgr module disable <module>

Subcommand ``metadata`` will report metadata about all manager daemons or, if the
name is specified, a single manager daemon.

Usage::

    ceph mgr metadata [name]

Subcommand ``versions`` will report a count of running daemon versions.

Usage::

    ceph mgr versions

Subcommand ``count-metadata`` will report a count of any daemon metadata field.

Usage::

    ceph mgr count-metadata <field>

.. _ceph-admin-osd:

osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blacklist (optionally until <expire> seconds
from now)

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}

Subcommand ``ls`` shows blacklisted clients

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from the blacklist

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates a new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand ``new`` should instead be used.

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
and the command expects a JSON file containing the base64 cephx key for auth
entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires
specifying the accompanying lockbox cephx key.

Usage::

    ceph osd new {<uuid>} {<id>} -i {<params.json>}

The parameters JSON file is optional, but if provided it is expected to take
one of the following forms::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "crush_device_class": "myclass"
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>",
        "crush_device_class": "myclass"
    }

Or::

    {
        "crush_device_class": "myclass"
    }

The "crush_device_class" property is optional. If specified, it will set the
initial CRUSH device class for the new OSD.


Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for <name>
with <weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

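For example, to add ``osd.4`` with weight 1.0 under a hypothetical host bucket
``node1``::

    ceph osd crush add osd.4 1.0 host=node1
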
Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name> of
type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates an entry or moves the existing entry for
<name> <weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps the crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable straw_calc_version

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links the existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves the existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for the erasure coded pool
created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep is best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set``, used alone, sets the crush map from the input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with an osdname/osd.id updates the crushmap position and
weight for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunable values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates a deep scrub on the specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints a summary of the OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing the erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add ``--force`` at the end to override an existing profile (this is risky).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets the CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets the OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows the largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks an osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds the pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>

Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``ok-to-stop`` checks whether the list of OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced as some PGs may end up in a degraded (but active)
state. It will return a success code if it is okay to stop the
OSD(s), or an error code and informative message if it is not or if no
conclusion can be drawn at the current time.

Usage::

    ceph osd ok-to-stop <id> [<ids>...]

Subcommand ``pause`` pauses osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints a dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``force-create-pg`` forces creation of pg <pgid>.

Usage::

    ceph osd force-create-pg <pgid>


Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates a pool.

Usage::

    ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>}

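For example, to create a replicated pool named ``mypool`` with 64 placement
groups (the name and PG count here are placeholders)::

    ceph osd pool create mypool 64
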
Subcommand ``delete`` deletes a pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for a pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|pg_num|
    pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets an object or byte limit on a pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>

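For example, to cap a hypothetical pool ``mypool`` at 10000 objects::

    ceph osd pool set-quota mypool max_objects 10000
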
Subcommand ``stats`` obtains stats from all pools, or from the specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``application`` is used for adding an annotation to the given
pool. By default, the possible applications are object, block, and file
storage (the corresponding app-names are "rgw", "rbd", and "cephfs"), but
there might be other applications as well. Depending on the application,
some additional processing may or may not be conducted.

Subcommand ``disable`` disables the given application on the given pool.

Usage::

    ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}

Subcommand ``enable`` adds an annotation to the given pool for the mentioned
application.

Usage::

    ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}

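For example, to tag a hypothetical pool ``mypool`` for use by RBD::

    ceph osd pool application enable mypool rbd
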
Subcommand ``get`` displays the value for the given key that is associated
with the given application of the given pool. Not passing the optional
arguments would display all key-value pairs for all applications for all
pools.

Usage::

    ceph osd pool application get {<pool-name>} {<app>} {<key>}

Subcommand ``rm`` removes the key-value pair for the given key in the given
application of the given pool.

Usage::

    ceph osd pool application rm <pool-name> <app> <key>

Subcommand ``set`` associates or updates, if it already exists, a key-value
pair with the given application for the given pool.

Usage::

    ceph osd pool application set <pool-name> <app> <key> <value>

Subcommand ``primary-affinity`` adjusts the osd primary-affinity in the range
0.0 <= <weight> <= 1.0

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights an osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
    {--no-increasing}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-utilization {<int[100-]>}
    {--no-increasing}

Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will be marked as *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}


Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
destroy an OSD without reducing overall data redundancy or durability.
It will return a success code if it is definitely safe, or an error
code and informative message if it is not or if no conclusion can be
drawn at the current time.

Usage::

    ceph osd safe-to-destroy <id> [<ids>...]

Subcommand ``scrub`` initiates a scrub on the specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets a cluster-wide <flag> by updating the OSD map.
The ``full`` flag is no longer honored as of the Mimic release, and
``ceph osd set full`` is not supported in the Octopus release.

Usage::

    ceph osd set pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent

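For example, to prevent OSDs from being marked out during maintenance::

    ceph osd set noout
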
Subcommand ``setcrushmap`` sets the crush map from the input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets a new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``set-require-min-compat-client`` enforces backward compatibility
of the cluster with the specified client version. This subcommand prevents
you from making any changes (e.g., to crush tunables, or using new features)
that would violate the current setting. Note that this subcommand will fail
if any connected daemon or client is not compatible with the features offered
by the given <version>. To see the features and releases of all clients
connected to the cluster, please see `ceph features`_.

Usage::

    ceph osd set-require-min-compat-client <version>

Subcommand ``stat`` prints a summary of the OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> writeback|readproxy|readonly|none

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

Subcommand ``tree`` prints the OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets a cluster-wide <flag> by updating the OSD map.

Usage::

    ceph osd unset pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent


pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts a deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

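For example, to deep-scrub a hypothetical placement group ``2.1f``::

    ceph pg deep-scrub 2.1f
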
Subcommand ``dump`` shows human-readable versions of the pg map (only 'all' is
valid with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_json`` shows a human-readable version of the pg map in json only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
    {<int>}

Subcommand ``getmap`` gets the binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pgs with a specific pool, osd, or state

Usage::

    ceph pg ls {<int>} {<pg-state> [<pg-state>...]}

Subcommand ``ls-by-osd`` lists pgs on osd [osd]

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
    {<pg-state> [<pg-state>...]}

Subcommand ``ls-by-pool`` lists pgs with pool = [poolname]

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}

Subcommand ``ls-by-primary`` lists pgs with primary = [osd]

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
    {<pg-state> [<pg-state>...]}

Subcommand ``map`` shows the mapping of a pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts a scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat


quorum
------

Cause a specific MON to enter or exit quorum.

Usage::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports the status of the monitor quorum.

Usage::

    ceph quorum_status


report
------

Reports the full status of the cluster, optionally with title tag strings.

Usage::

    ceph report {<tags> [<tags>...]}


status
------

Shows cluster status.

Usage::

    ceph status


tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <command> [options...]


List all available commands.

Usage::

    ceph tell <name (type.id)> help

version
-------

Show the mon daemon version.

Usage::

    ceph version

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: --setuser user

   will apply the appropriate user ownership to the file specified by
   the option '-o'.

.. option:: --setgroup group

   will apply the appropriate group ownership to the file specified by
   the option '-o'.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes on the default 'cluster' channel

.. option:: -W, --watch-channel

   Watch live cluster changes on any channel (cluster, audit, cephadm, or * for all)

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so the ``reweight-by-utilization``
   and ``test-reweight-by-utilization`` commands are allowed to increase osd
   weights. If this option is used with these commands, it prevents osd
   weights from being increased, even when an osd is underutilized.

.. option:: --block

   block until completion (scrub and deep-scrub only)


Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)