:orphan:

==================================
 ceph -- ceph administration tool
==================================

.. program:: ceph

Synopsis
========

| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config** [ *dump* | *ls* | *help* | *get* | *show* | *show-with-defaults* | *set* | *rm* | *log* | *reset* | *assimilate-conf* | *generate-minimal-conf* ] ...

| **ceph** **config-key** [ *rm* | *exists* | *get* | *ls* | *dump* | *set* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* \| *authorize* ] ...

| **ceph** **fsid**

| **ceph** **health** *{detail}*

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *repaired* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **osd** [ *blocklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *ls* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **pool** **application** [ *disable* \| *enable* \| *get* \| *rm* \| *set* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *stat* ] ...

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **status**

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <command> [options...]*

| **ceph** **version**

Description
===========

:program:`ceph` is a control utility for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands for deploying
monitors, OSDs, and placement groups, and for the overall maintenance and
administration of the cluster.

Commands
========

auth
----

Manage authentication keys. It is used for adding, removing, exporting
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an
input file, or generates a random key if no input is given, along with any
caps specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}

Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]
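
For instance, a minimal sketch that grants an existing client read access to
the monitors and read/write access to a single pool (``client.foo`` and
``mypool`` are hypothetical names). Note that ``caps`` replaces the entity's
existing caps rather than appending to them::

    ceph auth caps client.foo mon 'allow r' osd 'allow rw pool=mypool'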

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes the keyring for the requested entity, or the
master keyring if none is given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes a keyring file with the requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays the requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, along
with any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}
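
For instance, a sketch that creates (or fetches) a client key limited to one
pool and writes it to a keyring file (the client name, pool name, and output
path are assumptions)::

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=mypool' \
        -o /etc/ceph/ceph.client.foo.keyring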

Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from the
system/caps pairs specified in the command. If the key already exists, any
given caps must match the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads keyring from input file.

Usage::

    ceph auth import

Subcommand ``ls`` lists authentication state.

Usage::

    ceph auth ls

Subcommand ``print-key`` displays the requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays the requested key.

Usage::

    ceph auth print_key <entity>


compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact

config
------

Configure the cluster. By default, Ceph daemons and clients retrieve their
configuration options from the monitor when they start, and are updated if
any of the tracked options is changed at run time. It uses the following
additional subcommands.

Subcommand ``dump`` to dump all options for the cluster

Usage::

    ceph config dump

Subcommand ``ls`` to list all option names for the cluster

Usage::

    ceph config ls

Subcommand ``help`` to describe the specified configuration option

Usage::

    ceph config help <option>

Subcommand ``get`` to dump the option(s) for the specified entity.

Usage::

    ceph config get <who> {<option>}

Subcommand ``show`` to display the running configuration of the specified
entity. Please note, unlike ``get``, which only shows the options managed
by the monitor, ``show`` displays all the configuration options being
actively used. These options are pulled from several sources, for instance,
the compiled-in default value, the monitor's configuration database, and the
``ceph.conf`` file on the host. The options can even be overridden at
runtime. So there is a chance that the configuration options in the output
of ``show`` could be different from those in the output of ``get``.

Usage::

    ceph config show {<who>}

Subcommand ``show-with-defaults`` to display the running configuration along
with the compiled-in defaults of the specified entity

Usage::

    ceph config show-with-defaults {<who>}

Subcommand ``set`` to set an option for one or more specified entities

Usage::

    ceph config set <who> <option> <value> {--force}
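
For example, a small sketch that sets a tracked option at runtime and reads
it back (``osd.0`` is an assumed daemon name; ``debug_osd`` is a standard
debug option)::

    ceph config set osd.0 debug_osd 10/10
    ceph config get osd.0 debug_osd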

Subcommand ``rm`` to clear an option for one or more entities

Usage::

    ceph config rm <who> <option>

Subcommand ``log`` to show recent history of config changes. If the `count`
option is omitted it defaults to 10.

Usage::

    ceph config log {<count>}

Subcommand ``reset`` to revert configuration to the specified historical version

Usage::

    ceph config reset <version>


Subcommand ``assimilate-conf`` to assimilate options from stdin, and return a
new, minimal conf file

Usage::

    ceph config assimilate-conf -i <input-config-path> > <output-config-path>
    ceph config assimilate-conf < <input-config-path>

Subcommand ``generate-minimal-conf`` to generate a minimal ``ceph.conf`` file,
which can be used for bootstrapping a daemon or a client.

Usage::

    ceph config generate-minimal-conf > <minimal-config-path>

config-key
----------

Manage configuration keys. Config-key is a general purpose key/value service
offered by the monitors. This service is mainly used by Ceph tools and
daemons for persisting various settings. Among other things, ceph-mgr
modules use it for storing their options. It uses some additional
subcommands.

Subcommand ``rm`` deletes the configuration key.

Usage::

    ceph config-key rm <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``ls`` lists configuration keys.

Usage::

    ceph config-key ls

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``set`` puts configuration key and value.

Usage::

    ceph config-key set <key> {<val>}
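
For instance, a sketch that stores and retrieves an arbitrary value (the key
name is purely illustrative)::

    ceph config-key set example/owner ops-team
    ceph config-key get example/owner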

daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help


daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
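
For example, to sample the counters of a hypothetical OSD every two seconds,
twenty times::

    ceph daemonperf osd.0 2 20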

df
--

Show cluster's free space status.

Usage::

    ceph df {detail}

.. _ceph features:

features
--------

Show the releases and features of all daemons and clients connected to the
cluster, along with a count of them in each bucket, grouped by the
corresponding features/releases. Each release of Ceph supports a different
set of features, expressed by the features bitmask. New cluster features
require that clients support the feature, or else they are not allowed to
connect to the cluster. As new features or capabilities are enabled after an
upgrade, older clients are prevented from connecting.

Usage::

    ceph features

fs
--

Manage CephFS file systems. It uses some additional subcommands.

Subcommand ``ls`` to list file systems

Usage::

    ceph fs ls

Subcommand ``new`` to make a new file system using named pools <metadata> and <data>

Usage::

    ceph fs new <fs_name> <metadata> <data>
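
For example, assuming a metadata pool and a data pool have already been
created (the names are illustrative)::

    ceph fs new cephfs cephfs_metadata cephfs_data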

Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` to disable the named file system

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}

Subcommand ``authorize`` creates a new client that will be authorized for the
given path in ``<fs_name>``. Pass ``/`` to authorize for the entire FS.
``<perms>`` below can be ``r``, ``rw`` or ``rwp``.

Usage::

    ceph fs authorize <fs_name> client.<client_id> <path> <perms> [<path> <perms>...]
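
For instance, to create a client that may read the entire file system but
write only below one directory (the client name and paths are assumptions)::

    ceph fs authorize cephfs client.foo / r /bar rw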

fsid
----

Show cluster's FSID/UUID.

Usage::

    ceph fsid


health
------

Show cluster's health.

Usage::

    ceph health {detail}


heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph tell <name (type.id)> heap dump|start_profiler|stop_profiler|stats

Subcommand ``release`` makes TCMalloc release no-longer-used memory back to
the kernel at once.

Usage::

    ceph tell <name (type.id)> heap release

Subcommand ``(get|set)_release_rate`` gets or sets the TCMalloc memory
release rate. TCMalloc releases no-longer-used memory back to the kernel
gradually; the rate controls how quickly this happens. Increase this setting
to make TCMalloc return unused memory more frequently. 0 means never return
memory to the system, 1 means wait for 1000 pages after releasing a page to
the system. It is ``1.0`` by default.

Usage::

    ceph tell <name (type.id)> heap get_release_rate|set_release_rate {<val>}
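
For example, a sketch against a hypothetical OSD daemon::

    ceph tell osd.0 heap stats
    ceph tell osd.0 heap release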

injectargs
----------

Inject configuration arguments into the monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]


log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]
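
For instance, to leave a marker in the cluster log before maintenance::

    ceph log "starting maintenance on rack 12"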

mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes a compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes an incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``fail`` forces an mds to the failed state.

Usage::

    ceph mds fail <role|gid>

Subcommand ``rm`` removes an inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes a failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``repaired`` marks a damaged MDS rank as no longer damaged.

Usage::

    ceph mds repaired <role>

mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds a new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>
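
For example, with a hypothetical monitor name and address::

    ceph mon add c 198.51.100.12:6789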

Subcommand ``dump`` dumps formatted monmap (optionally from epoch)

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes the monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat

mgr
---

Ceph manager daemon configuration and management.

Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby manager daemons.

Usage::

    ceph mgr dump

Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon a standby
will take its place.

Usage::

    ceph mgr fail <name>

Subcommand ``module ls`` will list currently enabled manager modules (plugins).

Usage::

    ceph mgr module ls

Subcommand ``module enable`` will enable a manager module. Available modules
are included in MgrMap and visible via ``mgr dump``.

Usage::

    ceph mgr module enable <module>

Subcommand ``module disable`` will disable an active manager module.

Usage::

    ceph mgr module disable <module>

Subcommand ``metadata`` will report metadata about all manager daemons or, if
the name is specified, a single manager daemon.

Usage::

    ceph mgr metadata [name]

Subcommand ``versions`` will report a count of running daemon versions.

Usage::

    ceph mgr versions

Subcommand ``count-metadata`` will report a count of any daemon metadata field.

Usage::

    ceph mgr count-metadata <field>

.. _ceph-admin-osd:

osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blocklist`` manages blocklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to the blocklist (optionally until <expire>
seconds from now)

Usage::

    ceph osd blocklist add <EntityAddr> {<float[0.0-]>}
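
For example, to blocklist a hypothetical client address for one hour::

    ceph osd blocklist add 198.51.100.7:0/3710147553 3600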

Subcommand ``ls`` shows blocklisted clients

Usage::

    ceph osd blocklist ls

Subcommand ``rm`` removes <addr> from the blocklist

Usage::

    ceph osd blocklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand ``new`` should instead be used.

Usage::

    ceph osd create {<uuid>} {<id>}

Subcommand ``new`` can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
and the command expects a JSON file containing the base64 cephx key for auth
entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires
specifying the accompanying lockbox cephx key.

Usage::

    ceph osd new {<uuid>} {<id>} -i {<params.json>}

The parameters JSON file is optional but if provided, is expected to follow
the format below::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "crush_device_class": "myclass"
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>",
        "crush_device_class": "myclass"
    }

Or::

    {
        "crush_device_class": "myclass"
    }

The "crush_device_class" property is optional. If specified, it will set the
initial CRUSH device class for the new OSD.


Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates the crushmap position and weight for <name>
with <weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
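
For example, a sketch that adds a hypothetical OSD with weight 1.0 under a
given host (the bucket name is an assumption)::

    ceph osd crush add osd.5 1.0 host=node1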

Subcommand ``add-bucket`` adds a no-parent (probably root) crush bucket <name>
of type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates an entry or moves the existing entry for
<name> <weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
        [<args>...]

Subcommand ``dump`` dumps the crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable straw_calc_version

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from the crush map (everywhere, or just
at <ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in the crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the crush map

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from the crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for erasure coded pool
created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}

Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set`` used alone, sets crush map from input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with osdname/osd.id updates the crushmap position and
weight for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunables values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from the crush map (everywhere, or just
at <ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates deep scrub on specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints summary of OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing the erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add ``--force`` at the end to override an existing profile (this is
risky).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds the pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>

Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``ok-to-stop`` checks whether the list of OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced as some PGs may end up in a degraded (but active)
state. It will return a success code if it is okay to stop the
OSD(s), or an error code and informative message if it is not or if no
conclusion can be drawn at the current time. When ``--max <num>`` is
provided, up to <num> OSD IDs will be returned (including the provided
OSDs) that can all be stopped simultaneously. This allows larger sets
of stoppable OSDs to be generated easily by providing a single
starting OSD and a max. Additional OSDs are drawn from adjacent locations
in the CRUSH hierarchy.

Usage::

    ceph osd ok-to-stop <id> [<ids>...] [--max <num>]

Subcommand ``pause`` pauses osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``force-create-pg`` forces creation of pg <pgid>.

Usage::

    ceph osd force-create-pg <pgid>


Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates a pool.

Usage::

    ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
        {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>}
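
For example, a minimal sketch that creates a replicated pool with 64
placement groups (the pool name is illustrative)::

    ceph osd pool create mypool 64 64 replicated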

Subcommand ``delete`` deletes pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|pg_num|
    pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets object or byte limit on pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>

Subcommand ``stats`` obtains stats from all pools, or from the specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``application`` is used for adding an annotation to the given
pool. By default, the possible applications are object, block, and file
storage (corresponding app-names are "rgw", "rbd", and "cephfs"). However,
there might be other applications as well. Based on the application, there
may or may not be some processing conducted.

Subcommand ``disable`` disables the given application on the given pool.

Usage::

    ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}

Subcommand ``enable`` adds an annotation to the given pool for the mentioned
application.

Usage::

    ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}
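
For example, to tag a hypothetical pool for use by RBD::

    ceph osd pool application enable mypool rbd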

Subcommand ``get`` displays the value for the given key that is associated
with the given application of the given pool. Not passing the optional
arguments would display all key-value pairs for all applications for all
pools.

Usage::

    ceph osd pool application get {<pool-name>} {<app>} {<key>}

Subcommand ``rm`` removes the key-value pair for the given key in the given
application of the given pool.

Usage::

    ceph osd pool application rm <pool-name> <app> <key>

Subcommand ``set`` associates or updates, if it already exists, a key-value
pair with the given application for the given pool.

Usage::

    ceph osd pool application set <pool-name> <app> <key> <value>

Subcommand ``primary-affinity`` adjusts the osd primary-affinity to <weight>,
where 0.0 <= <weight> <= 1.0

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
        {--no-increasing}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization. It only
reweights outlier OSDs whose utilization exceeds the average, e.g. the
default 120% limits reweighting to those OSDs that are more than 20% over the
average.
[overload-threshold, default 120 [max_weight_change, default 0.05 [max_osds_to_adjust, default 4]]]

Usage::

    ceph osd reweight-by-utilization {<int[100-]> {<float[0.0-]> {<int[0-]>}}}
        {--no-increasing}

Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will show marked as *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}


Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
destroy an OSD without reducing overall data redundancy or durability.
It will return a success code if it is definitely safe, or an error
code and informative message if it is not or if no conclusion can be
drawn at the current time.

Usage::

    ceph osd safe-to-destroy <id> [<ids>...]

Subcommand ``scrub`` initiates scrub on specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets cluster-wide <flag> by updating the OSD map.
The ``full`` flag is not honored anymore since the Mimic release, and
``ceph osd set full`` is not supported in the Octopus release.

Usage::

    ceph osd set pause|noup|nodown|noout|noin|nobackfill|
        norebalance|norecover|noscrub|nodeep-scrub|notieragent
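
For example, a common maintenance sketch: keep OSDs from being marked out
while a host reboots, then restore normal behavior::

    ceph osd set noout
    # ... reboot the host ...
    ceph osd unset noout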

Subcommand ``setcrushmap`` sets crush map from input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``set-require-min-compat-client`` enforces the cluster to be
backward compatible with the specified client version. This subcommand
prevents you from making any changes (e.g., crush tunables, or using new
features) that would violate the current setting. Please note, this
subcommand will fail if any connected daemon or client is not compatible
with the features offered by the given <version>. To see the features and
releases of all clients connected to the cluster, please see `ceph features`_.

Usage::

    ceph osd set-require-min-compat-client <version>

Subcommand ``stat`` prints summary of OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> writeback|readproxy|readonly|none

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

Subcommand ``tree`` prints OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets cluster-wide <flag> by updating the OSD map.

Usage::

    ceph osd unset pause|noup|nodown|noout|noin|nobackfill|
        norebalance|norecover|noscrub|nodeep-scrub|notieragent

pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>
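
For example, with a hypothetical placement group ID (pool number 2, pg 1f)::

    ceph pg deep-scrub 2.1f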

Subcommand ``dump`` shows human-readable versions of the pg map (only 'all'
valid with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_json`` shows human-readable version of the pg map in json only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
        {<int>}

Subcommand ``getmap`` gets binary pg map to -o/stdout.

Usage::

    ceph pg getmap

Subcommand ``ls`` lists pgs with the specified pool, osd, or state

Usage::

    ceph pg ls {<int>} {<pg-state> [<pg-state>...]}

Subcommand ``ls-by-osd`` lists pgs on osd [osd]

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
        {<pg-state> [<pg-state>...]}

Subcommand ``ls-by-pool`` lists pgs with pool = [poolname]

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}

Subcommand ``ls-by-primary`` lists pgs with primary = [osd]

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
        {<pg-state> [<pg-state>...]}

Subcommand ``map`` shows mapping of pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat

quorum
------

Cause a specific MON to enter or exit quorum.

Usage::

    ceph tell mon.<id> quorum enter|exit

quorum_status
-------------

Reports status of monitor quorum.

Usage::

    ceph quorum_status


report
------

Reports full status of cluster, with optional title tag strings.

Usage::

    ceph report {<tags> [<tags>...]}


status
------

Shows cluster status.

Usage::

    ceph status


tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <command> [options...]
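
For example, to ask a single hypothetical daemon for its version::

    ceph tell osd.0 version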

List all available commands.

Usage::

    ceph tell <name (type.id)> help

version
-------

Show mon daemon version.

Usage::

    ceph version

Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: --setuser user

   will apply the appropriate user ownership to the file specified by
   the option '-o'.

.. option:: --setgroup group

   will apply the appropriate group ownership to the file specified by
   the option '-o'.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes on the default 'cluster' channel

.. option:: -W, --watch-channel

   Watch live cluster changes on any channel (cluster, audit, cephadm, or * for all)

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so increasing the osd weight is
   allowed when using the ``reweight-by-utilization`` or
   ``test-reweight-by-utilization`` commands. When this option is used with
   these commands, it prevents an osd's weight from being increased even if
   the osd is underutilized.

.. option:: --block

   block until completion (scrub and deep-scrub only)

Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at https://docs.ceph.com for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)