:orphan:

==================================
 ceph -- ceph administration tool
==================================

.. program:: ceph

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config** [ *dump* | *ls* | *help* | *get* | *show* | *show-with-defaults* | *set* | *rm* | *log* | *reset* | *assimilate-conf* | *generate-minimal-conf* ] ...

| **ceph** **config-key** [ *rm* | *exists* | *get* | *ls* | *dump* | *set* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* \| *authorize* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *repaired* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *enable_stretch_mode* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **osd** [ *blocklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *ls* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **pool** **application** [ *disable* \| *enable* \| *get* \| *rm* \| *set* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *stat* ] ...

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <command> [options...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility used for manual deployment and
maintenance of a Ceph cluster. It provides a diverse set of commands for
deploying monitors, OSDs, placement groups, and MDS daemons, and for overall
maintenance and administration of the cluster.

Commands
========

auth
----

Manage authentication keys. It is used for adding, removing, exporting
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key if no input is given, together with any
caps specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes keyring for requested entity, or master keyring if
none given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes keyring file with requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, together
with any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}

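For example, to create a hypothetical ``client.backup`` entity with read-only
access to the monitors and read/write access to an illustrative pool named
``backups``::

    ceph auth get-or-create client.backup mon 'allow r' osd 'allow rw pool=backups'
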
Subcommand ``get-or-create-key`` gets or adds key for ``name`` from system/caps
pairs specified in the command. If the key already exists, any given caps must
match the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads keyring from an input file.

Usage::

    ceph auth import

Subcommand ``ls`` lists authentication state.

Usage::

    ceph auth ls

Subcommand ``print-key`` displays requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays requested key.

Usage::

    ceph auth print_key <entity>


compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact

config
------

Configure the cluster. By default, Ceph daemons and clients retrieve their
configuration options from the monitor when they start, and are updated when
any of the tracked options changes at run time. It uses the following
additional subcommands.

Subcommand ``dump`` to dump all options for the cluster

Usage::

    ceph config dump

Subcommand ``ls`` to list all option names for the cluster

Usage::

    ceph config ls

Subcommand ``help`` to describe the specified configuration option

Usage::

    ceph config help <option>

Subcommand ``get`` to dump the option(s) for the specified entity.

Usage::

    ceph config get <who> {<option>}

Subcommand ``show`` to display the running configuration of the specified
entity. Please note, unlike ``get``, which only shows the options managed
by the monitor, ``show`` displays all the configuration options being actively
used. These options are pulled from several sources, for instance, the
compiled-in default value, the monitor's configuration database, and the
``ceph.conf`` file on the host. The options can even be overridden at runtime,
so the configuration options in the output of ``show`` can differ from those
in the output of ``get``.

Usage::

    ceph config show {<who>}

Subcommand ``show-with-defaults`` to display the running configuration along
with the compiled-in defaults of the specified entity

Usage::

    ceph config show-with-defaults {<who>}

Subcommand ``set`` to set an option for one or more specified entities

Usage::

    ceph config set <who> <option> <value> {--force}

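For example, to set an illustrative debug option for all OSD daemons (the
option name and value here are examples only)::

    ceph config set osd debug_osd 5/5
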
Subcommand ``rm`` to clear an option for one or more entities

Usage::

    ceph config rm <who> <option>

Subcommand ``log`` to show recent history of config changes. If the ``count``
option is omitted it defaults to 10.

Usage::

    ceph config log {<count>}

Subcommand ``reset`` to revert configuration to the specified historical version

Usage::

    ceph config reset <version>


Subcommand ``assimilate-conf`` to assimilate options from stdin, and return a
new, minimal conf file

Usage::

    ceph config assimilate-conf -i <input-config-path> > <output-config-path>
    ceph config assimilate-conf < <input-config-path>

Subcommand ``generate-minimal-conf`` to generate a minimal ``ceph.conf`` file,
which can be used for bootstrapping a daemon or a client.

Usage::

    ceph config generate-minimal-conf > <minimal-config-path>


config-key
----------

Manage configuration keys. ``config-key`` is a general purpose key/value
service offered by the monitors. This service is mainly used by Ceph tools
and daemons to persist various settings; among other things, ceph-mgr modules
use it to store their options. It uses some additional subcommands.

Subcommand ``rm`` deletes a configuration key.

Usage::

    ceph config-key rm <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``ls`` lists configuration keys.

Usage::

    ceph config-key ls

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``set`` puts configuration key and value.

Usage::

    ceph config-key set <key> {<val>}

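For example, storing and reading back an illustrative key (the key name is
arbitrary)::

    ceph config-key set example/owner admin
    ceph config-key get example/owner
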

daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help


daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]

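For example, to sample the counters of an illustrative ``osd.0`` every five
seconds, ten times::

    ceph daemonperf osd.0 5 10
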

df
--

Show cluster's free space status.

Usage::

    ceph df {detail}

.. _ceph features:

features
--------

Show the releases and features of all daemons and clients connected to the
cluster, along with counts of each grouped by the corresponding
features/releases. Each release of Ceph supports a different set of features,
expressed by the features bitmask. New cluster features require that clients
support the feature, or else they are not allowed to connect to the cluster.
As new features or capabilities are enabled after an upgrade, older clients
are prevented from connecting.

Usage::

    ceph features

fs
--

Manage CephFS file systems. It uses some additional subcommands.

Subcommand ``ls`` to list file systems

Usage::

    ceph fs ls

Subcommand ``new`` to make a new file system using named pools <metadata> and <data>

Usage::

    ceph fs new <fs_name> <metadata> <data>

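For example, with illustrative metadata and data pool names::

    ceph fs new cephfs cephfs_metadata cephfs_data
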
Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` to disable the named file system

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}

Subcommand ``authorize`` creates a new client that will be authorized for the
given path in ``<fs_name>``. Pass ``/`` to authorize for the entire FS.
``<perms>`` below can be ``r``, ``rw`` or ``rwp``.

Usage::

    ceph fs authorize <fs_name> client.<client_id> <path> <perms> [<path> <perms>...]

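For example, to authorize a hypothetical ``client.foo`` for read/write access
to the whole of an illustrative file system named ``cephfs``::

    ceph fs authorize cephfs client.foo / rw
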
fsid
----

Show cluster's FSID/UUID.

Usage::

    ceph fsid


health
------

Show cluster's health.

Usage::

    ceph health {detail}


heap
----

Show heap usage info (available only if compiled with tcmalloc)

Usage::

    ceph tell <name (type.id)> heap dump|start_profiler|stop_profiler|stats

Subcommand ``release`` makes TCMalloc release no-longer-used memory back to
the kernel at once.

Usage::

    ceph tell <name (type.id)> heap release

Subcommand ``(get|set)_release_rate`` gets or sets the TCMalloc memory release
rate. TCMalloc releases no-longer-used memory back to the kernel gradually;
the rate controls how quickly this happens. Increase this setting to make
TCMalloc return unused memory more frequently. 0 means never return memory
to the system, and 1 means wait for 1000 pages after releasing a page to the
system. The default is ``1.0``.

Usage::

    ceph tell <name (type.id)> heap get_release_rate|set_release_rate {<val>}

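For example, to print heap statistics for an illustrative ``osd.0``::

    ceph tell osd.0 heap stats
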
injectargs
----------

Inject configuration arguments into monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]


log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]


mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``fail`` forces an MDS to the failed state.

Usage::

    ceph mds fail <role|gid>

Subcommand ``rm`` removes an inactive MDS.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed MDS.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``repaired`` marks a damaged MDS rank as no longer damaged.

Usage::

    ceph mds repaired <role>

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>

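For example, with an illustrative monitor name and address::

    ceph mon add c 192.168.0.3:6789
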
Subcommand ``dump`` dumps formatted monmap (optionally from epoch)

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``enable_stretch_mode`` enables stretch mode, changing the peering
rules and failure handling on all pools. For a given PG to successfully peer
and be marked active, ``min_size`` replicas will now need to be active under
all (currently two) CRUSH buckets of type <dividing_bucket>.

<tiebreaker_mon> is the tiebreaker mon to use if a network split happens.

<dividing_bucket> is the bucket type across which to stretch. This will
typically be ``datacenter`` or another CRUSH hierarchy bucket type that
denotes physically or logically distant subdivisions.

<new_crush_rule> will be set as the CRUSH rule for all pools.

Usage::

    ceph mon enable_stretch_mode <tiebreaker_mon> <new_crush_rule> <dividing_bucket>

Subcommand ``remove`` removes monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mgr
---

Ceph manager daemon configuration and management.

Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby manager daemons.

Usage::

    ceph mgr dump

Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon a standby
will take its place.

Usage::

    ceph mgr fail <name>

Subcommand ``module ls`` will list currently enabled manager modules (plugins).

Usage::

    ceph mgr module ls

Subcommand ``module enable`` will enable a manager module. Available modules are
included in MgrMap and visible via ``mgr dump``.

Usage::

    ceph mgr module enable <module>

Subcommand ``module disable`` will disable an active manager module.

Usage::

    ceph mgr module disable <module>

Subcommand ``metadata`` will report metadata about all manager daemons or, if
the name is specified, a single manager daemon.

Usage::

    ceph mgr metadata [name]

Subcommand ``versions`` will report a count of running daemon versions.

Usage::

    ceph mgr versions

Subcommand ``count-metadata`` will report a count of any daemon metadata field.

Usage::

    ceph mgr count-metadata <field>

.. _ceph-admin-osd:

osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blocklist`` manages blocklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blocklist (optionally until <expire>
seconds from now)

Usage::

    ceph osd blocklist add <EntityAddr> {<float[0.0-]>}

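For example, to blocklist an illustrative client address for one hour (the
address, nonce and duration are examples only)::

    ceph osd blocklist add 192.168.0.10:0/3214123 3600
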
Subcommand ``ls`` show blocklisted clients

Usage::

    ceph osd blocklist ls

Subcommand ``rm`` remove <addr> from blocklist

Usage::

    ceph osd blocklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand ``new`` should instead be used.

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
and the command expects a JSON file containing the base64 cephx key for auth
entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires
specifying the accompanying lockbox cephx key.

Usage::

    ceph osd new {<uuid>} {<id>} -i {<params.json>}

The parameters JSON file is optional but if provided, is expected to maintain
a form of the following format::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "crush_device_class": "myclass"
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>",
        "crush_device_class": "myclass"
    }

Or::

    {
        "crush_device_class": "myclass"
    }

The "crush_device_class" property is optional. If specified, it will set the
initial CRUSH device class for the new OSD.


Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates crushmap position and weight for <name> with
<weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

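For example, to place an illustrative ``osd.5`` with weight 1.0 under a
hypothetical host bucket ``node2``::

    ceph osd crush add osd.5 1.0 root=default host=node2
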
Subcommand ``add-bucket`` adds no-parent (probably root) crush bucket <name> of
type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates entry or moves existing entry for <name>
<weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets crush tunable straw_calc_version

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in crush map

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for erasure coded pool
created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set`` used alone, sets crush map from input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with osdname/osd.id updates crushmap position and weight
for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunables values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates deep scrub on specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints summary of OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing the erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add a --force at the end to override an existing profile (IT IS RISKY).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>

Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``ok-to-stop`` checks whether the list of OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced as some PGs may end up in a degraded (but active)
state. It will return a success code if it is okay to stop the
OSD(s), or an error code and informative message if it is not or if no
conclusion can be drawn at the current time. When ``--max <num>`` is
provided, up to <num> OSD IDs will be returned (including the provided
OSDs) that can all be stopped simultaneously. This allows larger sets
of stoppable OSDs to be generated easily by providing a single
starting OSD and a max. Additional OSDs are drawn from adjacent locations
in the CRUSH hierarchy.

Usage::

    ceph osd ok-to-stop <id> [<ids>...] [--max <num>]

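For example, to check whether an illustrative ``osd.3``, plus up to five more
OSDs that could be stopped together with it, may be stopped safely::

    ceph osd ok-to-stop 3 --max 6
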
Subcommand ``pause`` pauses osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``force-create-pg`` forces creation of pg <pgid>.

Usage::

    ceph osd force-create-pg <pgid>


Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates pool.

Usage::

    ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>}

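For example, to create an illustrative replicated pool with 64 placement
groups::

    ceph osd pool create mypool 64 64 replicated
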
Subcommand ``delete`` deletes pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|pg_num|
    pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets object or byte limit on pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>

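For example, to cap an illustrative pool at 10 GiB of data::

    ceph osd pool set-quota mypool max_bytes 10737418240
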
Subcommand ``stats`` obtains stats from all pools, or from specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``application`` is used for adding an annotation to the given
pool. By default, the possible applications are object, block, and file
storage (corresponding app-names are "rgw", "rbd", and "cephfs"). However,
there might be other applications as well. Based on the application, there
may or may not be some processing conducted.

Subcommand ``disable`` disables the given application on the given pool.

Usage::

    ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}

Subcommand ``enable`` adds an annotation to the given pool for the mentioned
application.

Usage::

    ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}

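For example, to tag an illustrative pool for use by RBD::

    ceph osd pool application enable mypool rbd
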
Subcommand ``get`` displays the value for the given key that is associated
with the given application of the given pool. If the optional arguments are
not passed, all key-value pairs for all applications of all pools are
displayed.

Usage::

    ceph osd pool application get {<pool-name>} {<app>} {<key>}

Subcommand ``rm`` removes the key-value pair for the given key in the given
application of the given pool.

Usage::

    ceph osd pool application rm <pool-name> <app> <key>

Subcommand ``set`` associates or updates, if it already exists, a key-value
pair with the given application for the given pool.

Usage::

    ceph osd pool application set <pool-name> <app> <key> <value>

Subcommand ``primary-affinity`` adjusts osd primary-affinity from 0.0 <= <weight>
<= 1.0

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
    {--no-increasing}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization. It only
reweights outlier OSDs whose utilization exceeds the average, e.g. the default
120% limits reweight to those OSDs that are more than 20% over the average.
[overload-threshold, default 120 [max_weight_change, default 0.05 [max_osds_to_adjust, default 4]]]

Usage::

    ceph osd reweight-by-utilization {<int[100-]> {<float[0.0-]> {<int[0-]>}}}
    {--no-increasing}

Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will be shown as *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}


Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
destroy an OSD without reducing overall data redundancy or durability.
It will return a success code if it is definitely safe, or an error
code and informative message if it is not or if no conclusion can be
drawn at the current time.

Usage::

    ceph osd safe-to-destroy <id> [<ids>...]

Subcommand ``scrub`` initiates scrub on specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets cluster-wide <flag> by updating OSD map.
The ``full`` flag is not honored anymore since the Mimic release, and
``ceph osd set full`` is not supported in the Octopus release.

Usage::

    ceph osd set pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent

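For example, a common maintenance pattern is to prevent OSDs from being marked
out while hosts are rebooted::

    ceph osd set noout
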
Subcommand ``setcrushmap`` sets crush map from input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``set-require-min-compat-client`` enforces the cluster to be
backward compatible with the specified client version. This subcommand
prevents you from making any changes (e.g., crush tunables, or using new
features) that would violate the current setting. Please note, this subcommand
will fail if any connected daemon or client is not compatible with the
features offered by the given <version>. To see the features and releases of
all clients connected to the cluster, please see `ceph features`_.

Usage::

    ceph osd set-require-min-compat-client <version>

Subcommand ``stat`` prints summary of OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> writeback|proxy|readproxy|readonly|none

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

Subcommand ``tree`` prints OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets cluster-wide <flag> by updating OSD map.

Usage::

    ceph osd unset pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent


pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid
with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_json`` shows human-readable version of pg map in json only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
    {<int>}

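For example, to show PGs that have been stuck inactive for at least 300
seconds::

    ceph pg dump_stuck inactive 300
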
Subcommand ``getmap`` gets binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pg with specific pool, osd, state

Usage::

    ceph pg ls {<int>} {<pg-state> [<pg-state>...]}

Subcommand ``ls-by-osd`` lists pg on osd [osd]

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
    {<pg-state> [<pg-state>...]}

Subcommand ``ls-by-pool`` lists pg with pool = [poolname]

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}

Subcommand ``ls-by-primary`` lists pg with primary = [osd]

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
    {<pg-state> [<pg-state>...]}

Subcommand ``map`` shows mapping of pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat


quorum
------

Cause a specific MON to enter or exit quorum.

Usage::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports status of monitor quorum.

Usage::

    ceph quorum_status


report
------

Reports full status of the cluster, with optional title tag strings.

Usage::

    ceph report {<tags> [<tags>...]}


status
------

Shows cluster status.

Usage::

    ceph status


tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <command> [options...]

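For example, to ask an illustrative ``osd.0`` for its version::

    ceph tell osd.0 version
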
List all available commands.

Usage::

    ceph tell <name (type.id)> help

version
-------

Show mon daemon version

Usage::

    ceph version

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: --setuser user

   will apply the appropriate user ownership to the file specified by
   the option '-o'.

.. option:: --setgroup group

   will apply the appropriate group ownership to the file specified by
   the option '-o'.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes on the default 'cluster' channel

.. option:: -W, --watch-channel

   Watch live cluster changes on any channel (cluster, audit, cephadm, or * for all)

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format

   Format of output. Note: yaml is only valid for orch commands.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so increasing the osd weight is
   allowed when using the ``reweight-by-utilization`` or
   ``test-reweight-by-utilization`` commands. If this option is used with
   these commands, the osd weight will not be increased even if the osd is
   underutilized.

.. option:: --block

   block until completion (scrub and deep-scrub only)

Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at https://docs.ceph.com for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)