.. _rados_pools:

=======
 Pools
=======

Pools are logical partitions that are used to store objects.

Pools provide:

- **Resilience**: It is possible to set the number of OSDs that are allowed to
  fail without any data being lost. If your cluster uses replicated pools, the
  number of OSDs that can fail without data loss is equal to the number of
  replicas.

  For example: a typical configuration stores an object and two replicas
  (copies) of each RADOS object (that is: ``size = 3``), but you can configure
  the number of replicas on a per-pool basis. For `erasure-coded pools
  <../erasure-code>`_, resilience is defined as the number of coding chunks
  (for example, ``m = 2`` in the default **erasure code profile**).

- **Placement Groups**: You can set the number of placement groups (PGs) for
  the pool. In a typical configuration, the target number of PGs is
  approximately one hundred PGs per OSD. This provides reasonable balancing
  without consuming excessive computing resources. When setting up multiple
  pools, be careful to set an appropriate number of PGs for each pool and for
  the cluster as a whole. Each PG belongs to a specific pool: when multiple
  pools use the same OSDs, make sure that the **sum** of PG replicas per OSD is
  in the desired PG-per-OSD target range. To calculate an appropriate number of
  PGs for your pools, use the `pgcalc`_ tool.

- **CRUSH Rules**: When data is stored in a pool, the placement of the object
  and its replicas (or chunks, in the case of erasure-coded pools) in your
  cluster is governed by CRUSH rules. Custom CRUSH rules can be created for a
  pool if the default rule does not fit your use case.

- **Snapshots**: The command ``ceph osd pool mksnap`` creates a snapshot of a
  pool.

Pool Names
==========

Pool names beginning with ``.`` are reserved for use by Ceph's internal
operations. Do not create or manipulate pools with these names.


List Pools
==========

To list your cluster's pools, run the following command:

.. prompt:: bash $

   ceph osd lspools
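
The output lists pool IDs and names. The names below are illustrative (a
newly deployed cluster typically contains only internal pools such as
``.mgr``)::

   1 .mgr
   2 rbd-pool
   3 cephfs-data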

.. _createpool:

Creating a Pool
===============

Before creating a pool, consult `Pool, PG and CRUSH Config Reference`_. Your
Ceph configuration contains a setting (namely, ``osd_pool_default_pg_num``)
that determines the default number of PGs for a new pool. However, this
setting's default value is NOT appropriate for most systems. In most cases,
you should override this default value when creating your pool. For details on
PG numbers, see `setting the number of placement groups`_.

For example, the following configuration-file settings override the defaults
used for new pools:

.. code-block:: ini

   osd_pool_default_pg_num = 128
   osd_pool_default_pgp_num = 128

.. note:: In Luminous and later releases, each pool must be associated with the
   application that will be using the pool. For more information, see
   `Associating a Pool with an Application`_ below.

To create a pool, run one of the following commands:

.. prompt:: bash $

   ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] \
        [crush-rule-name] [expected-num-objects]

or:

.. prompt:: bash $

   ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] erasure \
        [erasure-code-profile] [crush-rule-name] [expected_num_objects] [--autoscale-mode=<on,off,warn>]
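
For example, the following command creates a replicated pool named ``mypool``
(a hypothetical name) with 128 PGs:

.. prompt:: bash $

   ceph osd pool create mypool 128 128 replicated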

For a brief description of the elements of the above commands, consult the
following:

.. describe:: {pool-name}

   The name of the pool. It must be unique.

   :Type: String
   :Required: Yes.

.. describe:: {pg-num}

   The total number of PGs in the pool. For details on calculating an
   appropriate number, see :ref:`placement groups`. The default value ``8`` is
   NOT suitable for most systems.

   :Type: Integer
   :Required: No, but strongly recommended; if not specified, the default value is used.
   :Default: 8

.. describe:: {pgp-num}

   The total number of PGs for placement purposes. This **should be equal to
   the total number of PGs**, except briefly while ``pg_num`` is being
   increased or decreased.

   :Type: Integer
   :Required: No. If no value is specified in the command, then the default value is used (unless a different value has been set in the Ceph configuration).
   :Default: 8

.. describe:: {replicated|erasure}

   The pool type. This can be either **replicated** (to recover from lost OSDs
   by keeping multiple copies of the objects) or **erasure** (to achieve a kind
   of `generalized parity RAID <../erasure-code>`_ capability). Replicated
   pools require more raw storage but can implement all Ceph operations.
   Erasure-coded pools require less raw storage but can perform only some Ceph
   tasks and may provide decreased performance.

   :Type: String
   :Required: No.
   :Default: replicated

.. describe:: [crush-rule-name]

   The name of the CRUSH rule to use for this pool. The specified rule must
   exist; otherwise the command will fail.

   :Type: String
   :Required: No.
   :Default: For **replicated** pools, it is the rule specified by the :confval:`osd_pool_default_crush_rule` configuration variable. This rule must exist. For **erasure** pools, it is the ``erasure-code`` rule if the ``default`` `erasure code profile`_ is used or the ``{pool-name}`` rule if not. This rule will be created implicitly if it doesn't already exist.

.. describe:: [erasure-code-profile=profile]

   For **erasure** pools only. Instructs Ceph to use the specified `erasure
   code profile`_. This profile must be an existing profile as defined by
   running ``osd erasure-code-profile set``.

   :Type: String
   :Required: No.

.. _erasure code profile: ../erasure-code-profile

.. describe:: --autoscale-mode=<on,off,warn>

   - ``on``: the Ceph cluster will automatically adjust the number of PGs in your pool based on actual usage.
   - ``warn``: the Ceph cluster will recommend changes to the number of PGs in your pool based on actual usage, raising a health warning rather than changing the value automatically.
   - ``off``: PG autoscaling is disabled for the pool. Refer to :ref:`placement groups` for more information.

   :Type: String
   :Required: No.
   :Default: The default behavior is determined by the :confval:`osd_pool_default_pg_autoscale_mode` option.

.. describe:: [expected-num-objects]

   The expected number of RADOS objects for this pool. By setting this value
   and assigning a negative value to **filestore merge threshold**, you
   arrange for PG folder splitting to occur at the time of pool creation and
   avoid the latency impact that accompanies runtime folder splitting.

   :Type: Integer
   :Required: No.
   :Default: 0, no splitting at the time of pool creation.

.. _associate-pool-to-application:

Associating a Pool with an Application
======================================

Pools need to be associated with an application before they can be used. Pools
that are intended for use with CephFS and pools that are created automatically
by RGW are associated automatically. Pools that are intended for use with RBD
should be initialized with the ``rbd`` tool (see `Block Device Commands`_ for
more information).

For other cases, you can manually associate a free-form application name to a
pool by running the following command:

.. prompt:: bash $

   ceph osd pool application enable {pool-name} {application-name}

.. note:: CephFS uses the application name ``cephfs``, RBD uses the
   application name ``rbd``, and RGW uses the application name ``rgw``.
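
For example, the following command marks a hypothetical pool named
``rgw-data`` as ready for use by RGW:

.. prompt:: bash $

   ceph osd pool application enable rgw-data rgw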

Setting Pool Quotas
===================

To set pool quotas for the maximum number of bytes and/or the maximum number of
RADOS objects per pool, run the following command:

.. prompt:: bash $

   ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]

For example:

.. prompt:: bash $

   ceph osd pool set-quota data max_objects 10000

To remove a quota, set its value to ``0``.
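
For example, to remove the object-count quota that was set above:

.. prompt:: bash $

   ceph osd pool set-quota data max_objects 0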


Deleting a Pool
===============

To delete a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]

To remove a pool, you must set the ``mon_allow_pool_delete`` flag to ``true``
in the monitor's configuration. Otherwise, monitors will refuse to remove
pools.
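
For example, a minimal sequence that enables pool deletion and then deletes a
hypothetical pool named ``mypool`` might look like this:

.. prompt:: bash $

   ceph config set mon mon_allow_pool_delete true
   ceph osd pool delete mypool mypool --yes-i-really-really-mean-it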

For more information, see `Monitor Configuration`_.

.. _Monitor Configuration: ../../configuration/mon-config-ref

If there are custom rules for a pool that is no longer needed, consider
deleting those rules. To check which CRUSH rule a given pool is using, run the
following command:

.. prompt:: bash $

   ceph osd pool get {pool-name} crush_rule

For example, if the custom rule is "123", check all pools to see whether they
use the rule by running the following command:

.. prompt:: bash $

   ceph osd dump | grep "^pool" | grep "crush_rule 123"

If no pools use this custom rule, then it is safe to delete the rule from the
cluster.
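
To delete the unused rule, run a command of the following form:

.. prompt:: bash $

   ceph osd crush rule rm {rule-name}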

Similarly, if there are users with permissions restricted to a pool that no
longer exists, consider deleting those users by running commands of the
following forms:

.. prompt:: bash $

   ceph auth ls | grep -C 5 {pool-name}
   ceph auth del {user}


Renaming a Pool
===============

To rename a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool rename {current-pool-name} {new-pool-name}

If you rename a pool for which an authenticated user has per-pool capabilities,
you must update the user's capabilities ("caps") to refer to the new pool name.
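
For example, after renaming a hypothetical pool, a hypothetical user
``client.app`` could be updated with ``ceph auth caps``:

.. prompt:: bash $

   ceph osd pool rename mypool mypool-new
   ceph auth caps client.app mon 'allow r' osd 'allow rwx pool=mypool-new'

Note that ``ceph auth caps`` replaces the user's caps wholesale, so include all
of the user's existing capabilities in the command.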


Showing Pool Statistics
=======================

To show a pool's utilization statistics, run the following command:

.. prompt:: bash $

   rados df

To obtain I/O information for a specific pool or for all pools, run a command
of the following form:

.. prompt:: bash $

   ceph osd pool stats [{pool-name}]


Making a Snapshot of a Pool
===========================

To make a snapshot of a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool mksnap {pool-name} {snap-name}

Removing a Snapshot of a Pool
=============================

To remove a snapshot of a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool rmsnap {pool-name} {snap-name}

.. _setpoolvalues:

Setting Pool Values
===================

To assign values to a pool's configuration keys, run a command of the following
form:

.. prompt:: bash $

   ceph osd pool set {pool-name} {key} {value}
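
For example, the following command sets the replica count of a hypothetical
pool named ``mypool`` to ``3``:

.. prompt:: bash $

   ceph osd pool set mypool size 3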

You may set values for the following keys:

.. _compression_algorithm:

.. describe:: compression_algorithm

   :Description: Sets the inline compression algorithm used in storing data on the underlying BlueStore back end. This key's setting overrides the global setting :confval:`bluestore_compression_algorithm`.
   :Type: String
   :Valid Settings: ``lz4``, ``snappy``, ``zlib``, ``zstd``

.. describe:: compression_mode

   :Description: Sets the policy for the inline compression algorithm used in storing data on the underlying BlueStore back end. This key's setting overrides the global setting :confval:`bluestore_compression_mode`.
   :Type: String
   :Valid Settings: ``none``, ``passive``, ``aggressive``, ``force``
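
For example, the following commands (using a hypothetical pool name) enable
aggressive ``zstd`` compression on a pool:

.. prompt:: bash $

   ceph osd pool set mypool compression_algorithm zstd
   ceph osd pool set mypool compression_mode aggressive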

.. describe:: compression_min_blob_size

   :Description: Sets the minimum size for the compression of chunks: that is, chunks smaller than this are not compressed. This key's setting overrides the following global settings:

                 * :confval:`bluestore_compression_min_blob_size`
                 * :confval:`bluestore_compression_min_blob_size_hdd`
                 * :confval:`bluestore_compression_min_blob_size_ssd`

   :Type: Unsigned Integer

.. describe:: compression_max_blob_size

   :Description: Sets the maximum size for chunks: that is, chunks larger than this are broken into smaller blobs of this size before compression is performed.
   :Type: Unsigned Integer

.. _size:

.. describe:: size

   :Description: Sets the number of replicas for objects in the pool. For further details, see `Setting the Number of RADOS Object Replicas`_. Replicated pools only.
   :Type: Integer

.. _min_size:

.. describe:: min_size

   :Description: Sets the minimum number of replicas required for I/O. For further details, see `Setting the Number of RADOS Object Replicas`_. For erasure-coded pools, this should be set to a value greater than ``k``. If I/O is allowed at the value ``k``, then there is no redundancy and data will be lost in the event of a permanent OSD failure. For more information, see `Erasure Code <../erasure-code>`_.
   :Type: Integer
   :Version: ``0.54`` and above

.. _pg_num:

.. describe:: pg_num

   :Description: Sets the effective number of PGs to use when calculating data placement.
   :Type: Integer
   :Valid Range: ``0`` to ``mon_max_pool_pg_num``. If set to ``0``, the value of ``osd_pool_default_pg_num`` will be used.

.. _pgp_num:

.. describe:: pgp_num

   :Description: Sets the effective number of PGs to use for placement purposes when calculating data placement.
   :Type: Integer
   :Valid Range: Between ``1`` and the current value of ``pg_num``.
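
For example, assuming the autoscaler is not managing the pool, the following
commands (with a hypothetical pool name) raise the PG count and keep
``pgp_num`` in step with ``pg_num``, as recommended above:

.. prompt:: bash $

   ceph osd pool set mypool pg_num 128
   ceph osd pool set mypool pgp_num 128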

.. _crush_rule:

.. describe:: crush_rule

   :Description: Sets the CRUSH rule that Ceph uses to map object placement within the pool.
   :Type: String

.. _allow_ec_overwrites:

.. describe:: allow_ec_overwrites

   :Description: Determines whether writes to an erasure-coded pool are allowed to update only part of a RADOS object. This allows CephFS and RBD to use an EC (erasure-coded) pool for user data (but not for metadata). For more details, see `Erasure Coding with Overwrites`_.
   :Type: Boolean

   .. versionadded:: 12.2.0

.. describe:: hashpspool

   :Description: Sets and unsets the HASHPSPOOL flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _nodelete:

.. describe:: nodelete

   :Description: Sets and unsets the NODELETE flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag
   :Version: ``FIXME``

.. _nopgchange:

.. describe:: nopgchange

   :Description: Sets and unsets the NOPGCHANGE flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag
   :Version: ``FIXME``

.. _nosizechange:

.. describe:: nosizechange

   :Description: Sets and unsets the NOSIZECHANGE flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag
   :Version: ``FIXME``

.. _bulk:

.. describe:: bulk

   :Description: Sets and unsets the bulk flag on a given pool.
   :Type: Boolean
   :Valid Range: ``true``/``1`` sets flag, ``false``/``0`` unsets flag

.. _write_fadvise_dontneed:

.. describe:: write_fadvise_dontneed

   :Description: Sets and unsets the WRITE_FADVISE_DONTNEED flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _noscrub:

.. describe:: noscrub

   :Description: Sets and unsets the NOSCRUB flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _nodeep-scrub:

.. describe:: nodeep-scrub

   :Description: Sets and unsets the NODEEP_SCRUB flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _hit_set_type:

.. describe:: hit_set_type

   :Description: Enables HitSet tracking for cache pools.
                 For additional information, see `Bloom Filter`_.
   :Type: String
   :Valid Settings: ``bloom``, ``explicit_hash``, ``explicit_object``
   :Default: ``bloom``. Other values are for testing.

.. _hit_set_count:

.. describe:: hit_set_count

   :Description: Determines the number of HitSets to store for cache pools. The
                 higher the value, the more RAM is consumed by the ``ceph-osd``
                 daemon.
   :Type: Integer
   :Valid Range: ``1``. The cache-tiering agent does not yet handle values greater than ``1``.

.. _hit_set_period:

.. describe:: hit_set_period

   :Description: Determines the duration of a HitSet period (in seconds) for
                 cache pools. The higher the value, the more RAM is consumed
                 by the ``ceph-osd`` daemon.
   :Type: Integer
   :Example: ``3600`` (3600 seconds: one hour)

.. _hit_set_fpp:

.. describe:: hit_set_fpp

   :Description: Determines the probability of false positives for the
                 ``bloom`` HitSet type. For additional information, see `Bloom
                 Filter`_.
   :Type: Double
   :Valid Range: ``0.0`` - ``1.0``
   :Default: ``0.05``

.. _cache_target_dirty_ratio:

.. describe:: cache_target_dirty_ratio

   :Description: Sets a flush threshold for the percentage of the cache pool
                 containing modified (dirty) objects. When this threshold is
                 reached, the cache-tiering agent will flush these objects to
                 the backing storage pool.
   :Type: Double
   :Default: ``.4``

.. _cache_target_dirty_high_ratio:

.. describe:: cache_target_dirty_high_ratio

   :Description: Sets a flush threshold for the percentage of the cache pool
                 containing modified (dirty) objects. When this threshold is
                 reached, the cache-tiering agent will flush these objects to
                 the backing storage pool at a higher speed (as compared with
                 ``cache_target_dirty_ratio``).
   :Type: Double
   :Default: ``.6``

.. _cache_target_full_ratio:

.. describe:: cache_target_full_ratio

   :Description: Sets an eviction threshold for the percentage of the cache
                 pool containing unmodified (clean) objects. When this
                 threshold is reached, the cache-tiering agent will evict
                 these objects from the cache pool.
   :Type: Double
   :Default: ``.8``

.. _target_max_bytes:

.. describe:: target_max_bytes

   :Description: Ceph will begin flushing or evicting objects when the
                 ``max_bytes`` threshold is triggered.
   :Type: Integer
   :Example: ``1000000000000`` (1 TB)

.. _target_max_objects:

.. describe:: target_max_objects

   :Description: Ceph will begin flushing or evicting objects when the
                 ``max_objects`` threshold is triggered.
   :Type: Integer
   :Example: ``1000000`` (1 million objects)

.. describe:: hit_set_grade_decay_rate

   :Description: Sets the temperature decay rate between two successive
                 HitSets.
   :Type: Integer
   :Valid Range: ``0`` - ``100``
   :Default: ``20``

.. describe:: hit_set_search_last_n

   :Description: Counts at most N appearances in HitSets when calculating
                 temperature.
   :Type: Integer
   :Valid Range: ``0`` - ``hit_set_count``
   :Default: ``1``

.. _cache_min_flush_age:

.. describe:: cache_min_flush_age

   :Description: Sets the time (in seconds) before the cache-tiering agent
                 flushes an object from the cache pool to the storage pool.
   :Type: Integer
   :Example: ``600`` (600 seconds: ten minutes)

.. _cache_min_evict_age:

.. describe:: cache_min_evict_age

   :Description: Sets the time (in seconds) before the cache-tiering agent
                 evicts an object from the cache pool.
   :Type: Integer
   :Example: ``1800`` (1800 seconds: thirty minutes)

.. _fast_read:

.. describe:: fast_read

   :Description: For erasure-coded pools, if this flag is turned ``on``, the
                 read request issues "sub reads" to all shards, and then waits
                 until it receives enough shards to decode before it serves
                 the client. If the *jerasure* or *isa* erasure-code plugins
                 are in use, then after the first *K* replies have returned,
                 the client's request is served immediately using the data
                 decoded from these replies. This approach sacrifices
                 resources in exchange for better performance. This flag is
                 supported only for erasure-coded pools.
   :Type: Boolean
   :Default: ``0``

.. _scrub_min_interval:

.. describe:: scrub_min_interval

   :Description: Sets the minimum interval (in seconds) for successive scrubs of the pool's PGs when the load is low. If the default value of ``0`` is in effect, then the value of ``osd_scrub_min_interval`` from central config is used.

   :Type: Double
   :Default: ``0``

.. _scrub_max_interval:

.. describe:: scrub_max_interval

   :Description: Sets the maximum interval (in seconds) for scrubs of the pool's PGs regardless of cluster load. If the value of ``scrub_max_interval`` is ``0``, then the value of ``osd_scrub_max_interval`` from central config is used.

   :Type: Double
   :Default: ``0``

.. _deep_scrub_interval:

.. describe:: deep_scrub_interval

   :Description: Sets the interval (in seconds) for "deep" scrubs of the pool's PGs. If the value of ``deep_scrub_interval`` is ``0``, then the value of ``osd_deep_scrub_interval`` from central config is used.

   :Type: Double
   :Default: ``0``

.. _recovery_priority:

.. describe:: recovery_priority

   :Description: Setting this value adjusts a pool's computed reservation priority. This value must be in the range ``-10`` to ``10``. Any pool assigned a negative value will be given a lower priority than any new pools, so users are directed to assign negative values to low-priority pools.

   :Type: Integer
   :Default: ``0``

.. _recovery_op_priority:

.. describe:: recovery_op_priority

   :Description: Sets the recovery operation priority for a specific pool's PGs. This overrides the general priority determined by :confval:`osd_recovery_op_priority`.

   :Type: Integer
   :Default: ``0``


Getting Pool Values
===================

To get a value from a pool's key, run a command of the following form:

.. prompt:: bash $

   ceph osd pool get {pool-name} {key}
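
For example, to read back the replica count of a hypothetical pool named
``mypool``:

.. prompt:: bash $

   ceph osd pool get mypool size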

You may get values from the following keys:


``size``

:Description: See size_.

:Type: Integer


``min_size``

:Description: See min_size_.

:Type: Integer
:Version: ``0.54`` and above


``pg_num``

:Description: See pg_num_.

:Type: Integer


``pgp_num``

:Description: See pgp_num_.

:Type: Integer
:Valid Range: Equal to or less than ``pg_num``.


``crush_rule``

:Description: See crush_rule_.


``hit_set_type``

:Description: See hit_set_type_.

:Type: String
:Valid Settings: ``bloom``, ``explicit_hash``, ``explicit_object``


``hit_set_count``

:Description: See hit_set_count_.

:Type: Integer


``hit_set_period``

:Description: See hit_set_period_.

:Type: Integer


``hit_set_fpp``

:Description: See hit_set_fpp_.

:Type: Double


``cache_target_dirty_ratio``

:Description: See cache_target_dirty_ratio_.

:Type: Double


``cache_target_dirty_high_ratio``

:Description: See cache_target_dirty_high_ratio_.

:Type: Double


``cache_target_full_ratio``

:Description: See cache_target_full_ratio_.

:Type: Double


``target_max_bytes``

:Description: See target_max_bytes_.

:Type: Integer


``target_max_objects``

:Description: See target_max_objects_.

:Type: Integer


``cache_min_flush_age``

:Description: See cache_min_flush_age_.

:Type: Integer


``cache_min_evict_age``

:Description: See cache_min_evict_age_.

:Type: Integer


``fast_read``

:Description: See fast_read_.

:Type: Boolean


``scrub_min_interval``

:Description: See scrub_min_interval_.

:Type: Double


``scrub_max_interval``

:Description: See scrub_max_interval_.

:Type: Double


``deep_scrub_interval``

:Description: See deep_scrub_interval_.

:Type: Double


``allow_ec_overwrites``

:Description: See allow_ec_overwrites_.

:Type: Boolean


``recovery_priority``

:Description: See recovery_priority_.

:Type: Integer


``recovery_op_priority``

:Description: See recovery_op_priority_.

:Type: Integer


Setting the Number of RADOS Object Replicas
===========================================

To set the number of data replicas on a replicated pool, run a command of the
following form:

.. prompt:: bash $

   ceph osd pool set {poolname} size {num-replicas}

.. important:: The ``{num-replicas}`` argument includes the primary object
   itself. For example, if you want there to be two replicas of the object in
   addition to the original object (for a total of three instances of the
   object), specify ``3`` by running the following command:

   .. prompt:: bash $

      ceph osd pool set data size 3

You may run the above command for each pool.

.. note:: An object might accept I/Os in degraded mode with fewer than ``pool
   size`` replicas. To set a minimum number of replicas required for I/O, you
   should use the ``min_size`` setting. For example, you might run the
   following command:

   .. prompt:: bash $

      ceph osd pool set data min_size 2

This command ensures that no object in the data pool will receive I/O if it has
fewer than ``min_size`` (in this case, two) replicas.


Getting the Number of Object Replicas
=====================================

To get the number of object replicas, run the following command:

.. prompt:: bash $

   ceph osd dump | grep 'replicated size'

Ceph will list pools and highlight the ``replicated size`` attribute. By
default, Ceph creates two replicas of an object (a total of three copies, for a
size of ``3``).
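
The matching output lines resemble the following (the pool name, ID, and most
field values here are illustrative and abbreviated)::

   pool 1 'data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 26 flags hashpspool stripe_width 0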


.. _pgcalc: https://old.ceph.com/pgcalc/
.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
.. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
.. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
.. _Erasure Coding with Overwrites: ../erasure-code#erasure-coding-with-overwrites
.. _Block Device Commands: ../../../rbd/rados-rbd-cmds/#create-a-block-device-pool