.. _rados_pools:

=======
 Pools
=======
Pools are logical partitions that are used to store objects.

Pools provide:

- **Resilience**: It is possible to set the number of OSDs that are allowed to
  fail without any data being lost. If your cluster uses replicated pools, the
  number of OSDs that can fail without data loss is equal to the number of
  replicas.

  For example: a typical configuration stores an object and two replicas
  (copies) of each RADOS object (that is: ``size = 3``), but you can configure
  the number of replicas on a per-pool basis. For `erasure-coded pools
  <../erasure-code>`_, resilience is defined as the number of coding chunks
  (for example, ``m = 2`` in the default **erasure code profile**).

- **Placement Groups**: You can set the number of placement groups (PGs) for
  the pool. In a typical configuration, the target number of PGs is
  approximately one hundred PGs per OSD. This provides reasonable balancing
  without consuming excessive computing resources. When setting up multiple
  pools, be careful to set an appropriate number of PGs for each pool and for
  the cluster as a whole. Each PG belongs to a specific pool: when multiple
  pools use the same OSDs, make sure that the **sum** of PG replicas per OSD is
  in the desired PG-per-OSD target range. To calculate an appropriate number of
  PGs for your pools, use the `pgcalc`_ tool.

- **CRUSH Rules**: When data is stored in a pool, the placement of the object
  and its replicas (or chunks, in the case of erasure-coded pools) in your
  cluster is governed by CRUSH rules. Custom CRUSH rules can be created for a
  pool if the default rule does not fit your use case.

- **Snapshots**: The command ``ceph osd pool mksnap`` creates a snapshot of a
  pool.

Pool Names
==========

Pool names beginning with ``.`` are reserved for use by Ceph's internal
operations. Do not create or manipulate pools with these names.


List Pools
==========

There are multiple ways to get the list of pools in your cluster.

To list just your cluster's pool names (good for scripting), run the following
command:

.. prompt:: bash $

   ceph osd pool ls

::

   .rgw.root
   default.rgw.log
   default.rgw.control
   default.rgw.meta

To list your cluster's pools with the pool number, run the following command:

.. prompt:: bash $

   ceph osd lspools

::

   1 .rgw.root
   2 default.rgw.log
   3 default.rgw.control
   4 default.rgw.meta

To list your cluster's pools with additional information, run the following
command:

.. prompt:: bash $

   ceph osd pool ls detail

::

   pool 1 '.rgw.root' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 19 flags hashpspool stripe_width 0 application rgw read_balance_score 4.00
   pool 2 'default.rgw.log' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 21 flags hashpspool stripe_width 0 application rgw read_balance_score 4.00
   pool 3 'default.rgw.control' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 23 flags hashpspool stripe_width 0 application rgw read_balance_score 4.00
   pool 4 'default.rgw.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 25 flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw read_balance_score 4.00

To get even more information, run the above command with the ``--format`` (or
``-f``) option together with one of the values ``json``, ``json-pretty``,
``xml``, or ``xml-pretty``.

.. _createpool:

Creating a Pool
===============

Before creating a pool, consult `Pool, PG and CRUSH Config Reference`_. Your
Ceph configuration file contains a setting (namely,
``osd_pool_default_pg_num``) that determines the default number of PGs for
new pools. However, this setting's default value is NOT appropriate for most
systems. In most cases, you should override this default value when creating
your pool. For details on PG numbers, see `setting the number of placement
groups`_.

For example:

::

   osd_pool_default_pg_num = 128
   osd_pool_default_pgp_num = 128

.. note:: In Luminous and later releases, each pool must be associated with the
   application that will be using the pool. For more information, see
   `Associating a Pool with an Application`_ below.

To create a pool, run one of the following commands:

.. prompt:: bash $

   ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] \
        [crush-rule-name] [expected-num-objects]

or:

.. prompt:: bash $

   ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] erasure \
        [erasure-code-profile] [crush-rule-name] [expected-num-objects] [--autoscale-mode=<on,off,warn>]

For a brief description of the elements of the above commands, consult the
following:

.. describe:: {pool-name}

   The name of the pool. It must be unique.

   :Type: String
   :Required: Yes.

.. describe:: {pg-num}

   The total number of PGs in the pool. For details on calculating an
   appropriate number, see :ref:`placement groups`. The default value ``8`` is
   NOT suitable for most systems.

   :Type: Integer
   :Required: Yes.
   :Default: 8

.. describe:: {pgp-num}

   The total number of PGs for placement purposes. This **should be equal to
   the total number of PGs**, except briefly while ``pg_num`` is being
   increased or decreased.

   :Type: Integer
   :Required: Yes. If no value has been specified in the command, then the default value is used (unless a different value has been set in Ceph configuration).
   :Default: 8

.. describe:: {replicated|erasure}

   The pool type. This can be either **replicated** (to recover from lost OSDs
   by keeping multiple copies of the objects) or **erasure** (to achieve a kind
   of `generalized parity RAID <../erasure-code>`_ capability). Replicated
   pools require more raw storage but can implement all Ceph operations.
   Erasure pools require less raw storage but can perform only some Ceph tasks
   and may provide decreased performance.

   :Type: String
   :Required: No.
   :Default: replicated

.. describe:: [crush-rule-name]

   The name of the CRUSH rule to use for this pool. The specified rule must
   exist; otherwise the command will fail.

   :Type: String
   :Required: No.
   :Default: For **replicated** pools, it is the rule specified by the :confval:`osd_pool_default_crush_rule` configuration variable. This rule must exist. For **erasure** pools, it is the ``erasure-code`` rule if the ``default`` `erasure code profile`_ is used or the ``{pool-name}`` rule if not. This rule will be created implicitly if it doesn't already exist.

.. describe:: [erasure-code-profile=profile]

   For **erasure** pools only. Instructs Ceph to use the specified `erasure
   code profile`_. This profile must be an existing profile as defined by
   ``ceph osd erasure-code-profile set``.

   :Type: String
   :Required: No.

.. _erasure code profile: ../erasure-code-profile

.. describe:: --autoscale-mode=<on,off,warn>

   - ``on``: the Ceph cluster will automatically adjust the number of PGs in your pool based on actual usage.
   - ``warn``: the Ceph cluster will only recommend changes to the number of PGs in your pool based on actual usage.
   - ``off``: autoscaling is disabled for the pool. Refer to :ref:`placement groups` for more information.

   :Type: String
   :Required: No.
   :Default: The default behavior is determined by the :confval:`osd_pool_default_pg_autoscale_mode` option.

.. describe:: [expected-num-objects]

   The expected number of RADOS objects for this pool. By setting this value
   and assigning a negative value to **filestore merge threshold**, you
   arrange for PG folder splitting to occur at the time of pool creation and
   avoid the latency impact that accompanies runtime folder splitting.

   :Type: Integer
   :Required: No.
   :Default: 0, no splitting at the time of pool creation.

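For example, the following commands instantiate the templates above to create
a replicated pool and an erasure-coded pool that uses the default erasure code
profile (the pool names and PG counts are illustrative):

.. prompt:: bash $

   ceph osd pool create mypool 128 128 replicated
   ceph osd pool create ecpool 32 32 erasure
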
.. _associate-pool-to-application:

Associating a Pool with an Application
======================================

Pools need to be associated with an application before they can be used. Pools
that are intended for use with CephFS and pools that are created automatically
by RGW are associated automatically. Pools that are intended for use with RBD
should be initialized with the ``rbd`` tool (see `Block Device Commands`_ for
more information).

For other cases, you can manually associate a free-form application name to a
pool by running the following command:

.. prompt:: bash $

   ceph osd pool application enable {pool-name} {application-name}

.. note:: CephFS uses the application name ``cephfs``, RBD uses the
   application name ``rbd``, and RGW uses the application name ``rgw``.

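For example, assuming a pool named ``mypool`` that will be used by RGW, you
can enable the application and then confirm the association (``mypool`` is an
illustrative name):

.. prompt:: bash $

   ceph osd pool application enable mypool rgw
   ceph osd pool application get mypool
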
Setting Pool Quotas
===================

To set pool quotas for the maximum number of bytes and/or the maximum number of
RADOS objects per pool, run the following command:

.. prompt:: bash $

   ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]

For example:

.. prompt:: bash $

   ceph osd pool set-quota data max_objects 10000

To remove a quota, set its value to ``0``.


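For example, to clear both quotas on the ``data`` pool:

.. prompt:: bash $

   ceph osd pool set-quota data max_objects 0 max_bytes 0

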
Deleting a Pool
===============

To delete a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]

To remove a pool, you must set the ``mon_allow_pool_delete`` flag to ``true``
in the monitor's configuration. Otherwise, monitors will refuse to remove
pools.

For more information, see `Monitor Configuration`_.

.. _Monitor Configuration: ../../configuration/mon-config-ref
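
If the flag is not already set, one way to set it at runtime is through the
centralized configuration database and then delete the pool (the pool name
below is illustrative):

.. prompt:: bash $

   ceph config set mon mon_allow_pool_delete true
   ceph osd pool delete mypool mypool --yes-i-really-really-mean-it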

If there are custom rules for a pool that is no longer needed, consider
deleting those rules. To see which CRUSH rule a pool uses, run a command of
the following form:

.. prompt:: bash $

   ceph osd pool get {pool-name} crush_rule

For example, if the custom rule is ``123``, check all pools to see whether they
use the rule by running the following command:

.. prompt:: bash $

   ceph osd dump | grep "^pool" | grep "crush_rule 123"

If no pools use this custom rule, then it is safe to delete the rule from the
cluster.

Similarly, if there are users with permissions restricted to a pool that no
longer exists, consider deleting those users by running commands of the
following forms:

.. prompt:: bash $

   ceph auth ls | grep -C 5 {pool-name}
   ceph auth del {user}


Renaming a Pool
===============

To rename a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool rename {current-pool-name} {new-pool-name}

If you rename a pool for which an authenticated user has per-pool capabilities,
you must update the user's capabilities ("caps") to refer to the new pool name.

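For example, a user's caps can be inspected and then rewritten with commands
of the following forms (the capability string shown is illustrative; ``ceph
auth caps`` replaces all of the user's caps, so include every capability the
user needs):

.. prompt:: bash $

   ceph auth get client.{user}
   ceph auth caps client.{user} mon 'allow r' osd 'allow rw pool={new-pool-name}'
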

Showing Pool Statistics
=======================

To show a pool's utilization statistics, run the following command:

.. prompt:: bash $

   rados df

To obtain I/O information for a specific pool or for all pools, run a command
of the following form:

.. prompt:: bash $

   ceph osd pool stats [{pool-name}]


Making a Snapshot of a Pool
===========================

To make a snapshot of a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool mksnap {pool-name} {snap-name}

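A pool's snapshots can then be listed with the ``rados`` tool:

.. prompt:: bash $

   rados lssnap -p {pool-name}
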
Removing a Snapshot of a Pool
=============================

To remove a snapshot of a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool rmsnap {pool-name} {snap-name}

.. _setpoolvalues:

Setting Pool Values
===================

To assign values to a pool's configuration keys, run a command of the following
form:

.. prompt:: bash $

   ceph osd pool set {pool-name} {key} {value}

You may set values for the following keys:

.. _compression_algorithm:

.. describe:: compression_algorithm

   :Description: Sets the inline compression algorithm used in storing data on the underlying BlueStore back end. This key's setting overrides the global setting :confval:`bluestore_compression_algorithm`.
   :Type: String
   :Valid Settings: ``lz4``, ``snappy``, ``zlib``, ``zstd``

.. describe:: compression_mode

   :Description: Sets the policy for the inline compression algorithm used in storing data on the underlying BlueStore back end. This key's setting overrides the global setting :confval:`bluestore_compression_mode`.
   :Type: String
   :Valid Settings: ``none``, ``passive``, ``aggressive``, ``force``

.. describe:: compression_min_blob_size

   :Description: Sets the minimum size for the compression of chunks: that is, chunks smaller than this are not compressed. This key's setting overrides the following global settings:

      * :confval:`bluestore_compression_min_blob_size`
      * :confval:`bluestore_compression_min_blob_size_hdd`
      * :confval:`bluestore_compression_min_blob_size_ssd`

   :Type: Unsigned Integer

.. describe:: compression_max_blob_size

   :Description: Sets the maximum size for chunks: that is, chunks larger than this are broken into smaller blobs of this size before compression is performed.
   :Type: Unsigned Integer
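
For example, to enable aggressive ``zstd`` compression on a pool, set the two
keys described above (the pool name is illustrative):

.. prompt:: bash $

   ceph osd pool set mypool compression_algorithm zstd
   ceph osd pool set mypool compression_mode aggressive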

.. _size:

.. describe:: size

   :Description: Sets the number of replicas for objects in the pool. For further details, see `Setting the Number of RADOS Object Replicas`_. Replicated pools only.
   :Type: Integer

.. _min_size:

.. describe:: min_size

   :Description: Sets the minimum number of replicas required for I/O. For further details, see `Setting the Number of RADOS Object Replicas`_. For erasure-coded pools, this should be set to a value greater than ``k``. If I/O is allowed with only ``k`` replicas available, then there is no redundancy and data will be lost in the event of a permanent OSD failure. For more information, see `Erasure Code <../erasure-code>`_.
   :Type: Integer
   :Version: ``0.54`` and above

.. _pg_num:

.. describe:: pg_num

   :Description: Sets the effective number of PGs to use when calculating data placement.
   :Type: Integer
   :Valid Range: ``0`` to ``mon_max_pool_pg_num``. If set to ``0``, the value of ``osd_pool_default_pg_num`` will be used.

.. _pgp_num:

.. describe:: pgp_num

   :Description: Sets the effective number of PGs to use for placement purposes. This should be equal to ``pg_num`` except while ``pg_num`` is being changed.
   :Type: Integer
   :Valid Range: Between ``1`` and the current value of ``pg_num``.

.. _crush_rule:

.. describe:: crush_rule

   :Description: Sets the CRUSH rule that Ceph uses to map object placement within the pool.
   :Type: String

.. _allow_ec_overwrites:

.. describe:: allow_ec_overwrites

   :Description: Determines whether writes to an erasure-coded pool are allowed to update only part of a RADOS object. This allows CephFS and RBD to use an EC (erasure-coded) pool for user data (but not for metadata). For more details, see `Erasure Coding with Overwrites`_.
   :Type: Boolean

   .. versionadded:: 12.2.0

.. describe:: hashpspool

   :Description: Sets and unsets the HASHPSPOOL flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _nodelete:

.. describe:: nodelete

   :Description: Sets and unsets the NODELETE flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag
   :Version: Version ``FIXME``

.. _nopgchange:

.. describe:: nopgchange

   :Description: Sets and unsets the NOPGCHANGE flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag
   :Version: Version ``FIXME``

.. _nosizechange:

.. describe:: nosizechange

   :Description: Sets and unsets the NOSIZECHANGE flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag
   :Version: Version ``FIXME``

.. _bulk:

.. describe:: bulk

   :Description: Sets and unsets the bulk flag on a given pool.
   :Type: Boolean
   :Valid Range: ``true``/``1`` sets flag, ``false``/``0`` unsets flag

.. _write_fadvise_dontneed:

.. describe:: write_fadvise_dontneed

   :Description: Sets and unsets the WRITE_FADVISE_DONTNEED flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _noscrub:

.. describe:: noscrub

   :Description: Sets and unsets the NOSCRUB flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _nodeep-scrub:

.. describe:: nodeep-scrub

   :Description: Sets and unsets the NODEEP_SCRUB flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _target_max_bytes:

.. describe:: target_max_bytes

   :Description: Ceph will begin flushing or evicting objects when the
                 ``max_bytes`` threshold is triggered.
   :Type: Integer
   :Example: ``1000000000000`` # 1 TB

.. _target_max_objects:

.. describe:: target_max_objects

   :Description: Ceph will begin flushing or evicting objects when the
                 ``max_objects`` threshold is triggered.
   :Type: Integer
   :Example: ``1000000`` # 1M objects

.. _fast_read:

.. describe:: fast_read

   :Description: For erasure-coded pools, if this flag is turned ``on``, the
                 read request issues "sub reads" to all shards, and then waits
                 until it receives enough shards to decode before it serves
                 the client. If the *jerasure* or *isa* erasure plugins are in
                 use, then after the first *K* replies have returned, the
                 client's request is served immediately using the data decoded
                 from these replies. This approach sacrifices resources in
                 exchange for better performance. This flag is supported only
                 for erasure-coded pools.
   :Type: Boolean
   :Default: ``0``

.. _scrub_min_interval:

.. describe:: scrub_min_interval

   :Description: Sets the minimum interval (in seconds) between successive scrubs of the pool's PGs when the load is low. If the default value of ``0`` is in effect, then the value of ``osd_scrub_min_interval`` from central config is used.

   :Type: Double
   :Default: ``0``

.. _scrub_max_interval:

.. describe:: scrub_max_interval

   :Description: Sets the maximum interval (in seconds) between scrubs of the pool's PGs regardless of cluster load. If the value of ``scrub_max_interval`` is ``0``, then the value ``osd_scrub_max_interval`` from central config is used.

   :Type: Double
   :Default: ``0``

.. _deep_scrub_interval:

.. describe:: deep_scrub_interval

   :Description: Sets the interval (in seconds) for "deep" scrubs of the pool's PGs. If the value of ``deep_scrub_interval`` is ``0``, the value ``osd_deep_scrub_interval`` from central config is used.

   :Type: Double
   :Default: ``0``

.. _recovery_priority:

.. describe:: recovery_priority

   :Description: Setting this value adjusts a pool's computed reservation priority. This value must be in the range ``-10`` to ``10``. Any pool assigned a negative value will be given a lower priority than any new pools, so users are directed to assign negative values to low-priority pools.

   :Type: Integer
   :Default: ``0``


.. _recovery_op_priority:

.. describe:: recovery_op_priority

   :Description: Sets the recovery operation priority for a specific pool's PGs. This overrides the general priority determined by :confval:`osd_recovery_op_priority`.

   :Type: Integer
   :Default: ``0``


Getting Pool Values
===================

To get a value from a pool's key, run a command of the following form:

.. prompt:: bash $

   ceph osd pool get {pool-name} {key}


You may get values from the following keys:


``size``

:Description: See size_.

:Type: Integer


``min_size``

:Description: See min_size_.

:Type: Integer
:Version: ``0.54`` and above


``pg_num``

:Description: See pg_num_.

:Type: Integer


``pgp_num``

:Description: See pgp_num_.

:Type: Integer
:Valid Range: Equal to or less than ``pg_num``.


``crush_rule``

:Description: See crush_rule_.


``target_max_bytes``

:Description: See target_max_bytes_.

:Type: Integer


``target_max_objects``

:Description: See target_max_objects_.

:Type: Integer


``fast_read``

:Description: See fast_read_.

:Type: Boolean


``scrub_min_interval``

:Description: See scrub_min_interval_.

:Type: Double


``scrub_max_interval``

:Description: See scrub_max_interval_.

:Type: Double


``deep_scrub_interval``

:Description: See deep_scrub_interval_.

:Type: Double


``allow_ec_overwrites``

:Description: See allow_ec_overwrites_.

:Type: Boolean


``recovery_priority``

:Description: See recovery_priority_.

:Type: Integer


``recovery_op_priority``

:Description: See recovery_op_priority_.

:Type: Integer


Setting the Number of RADOS Object Replicas
===========================================

To set the number of data replicas on a replicated pool, run a command of the
following form:

.. prompt:: bash $

   ceph osd pool set {poolname} size {num-replicas}

.. important:: The ``{num-replicas}`` argument includes the primary object
   itself. For example, if you want there to be two replicas of the object in
   addition to the original object (for a total of three instances of the
   object), specify ``3`` by running the following command:

.. prompt:: bash $

   ceph osd pool set data size 3

You may run the above command for each pool.

.. note:: An object might accept I/Os in degraded mode with fewer than ``pool
   size`` replicas. To set a minimum number of replicas required for I/O, you
   should use the ``min_size`` setting. For example, you might run the
   following command:

.. prompt:: bash $

   ceph osd pool set data min_size 2

This command ensures that no object in the data pool will receive I/O if it has
fewer than ``min_size`` (in this case, two) replicas.


Getting the Number of Object Replicas
=====================================

To get the number of object replicas, run the following command:

.. prompt:: bash $

   ceph osd dump | grep 'replicated size'

Ceph will list pools and highlight the ``replicated size`` attribute. By
default, Ceph creates two replicas of an object (a total of three copies, for a
size of ``3``).

Managing pools that are flagged with ``--bulk``
===============================================

See :ref:`managing_bulk_flagged_pools`.


.. _pgcalc: https://old.ceph.com/pgcalc/
.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
.. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
.. _Erasure Coding with Overwrites: ../erasure-code#erasure-coding-with-overwrites
.. _Block Device Commands: ../../../rbd/rados-rbd-cmds/#create-a-block-device-pool