setting up multiple pools, be careful to ensure you set a reasonable number of
placement groups for both the pool and the cluster as a whole.
- **CRUSH Rules**: When you store data in a pool, placement of the object
  and its replicas (or chunks for erasure coded pools) in your cluster is
  governed by CRUSH rules. You can create a custom CRUSH rule for your pool
  if the default rule is not appropriate for your use case.

- **Snapshots**: When you create snapshots with ``ceph osd pool mksnap``,
  you effectively take a snapshot of a particular pool.
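For example, the snapshot workflow for a pool might look like this (the pool
and snapshot names are illustrative)::

    ceph osd pool mksnap mypool mypool-snap    # take a snapshot of the pool
    rados lssnap -p mypool                     # list the pool's snapshots
    ceph osd pool rmsnap mypool mypool-snap    # remove the snapshot again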
To organize data into pools, you can list, create, and remove pools.

Ideally, you should override the default value for the number of placement
groups in your Ceph configuration file, as the default is NOT ideal.
For details on placement group numbers refer to
`setting the number of placement groups`_.
.. note:: Starting with Luminous, all pools need to be associated with the
   application that will be using the pool. See `Associate Pool to Application`_
   below for more information.

For example::

    osd pool default pg num = 100
To create a pool, execute::

    ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
         [crush-rule-name] [expected-num-objects]
    ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure \
         [erasure-code-profile] [crush-rule-name] [expected_num_objects]
Where:
:Required: No.
:Default: replicated
``[crush-rule-name]``

:Description: The name of a CRUSH rule to use for this pool. The specified
              rule must exist.
:Type: String
:Required: No.
:Default: For **replicated** pools it is the rule specified by the ``osd
          pool default crush rule`` config variable. This rule must exist.
          For **erasure** pools it is ``erasure-code`` if the ``default``
          `erasure code profile`_ is used or ``{pool-name}`` otherwise. This
          rule will be created implicitly if it doesn't exist already.
``[erasure-code-profile=profile]``
:Required: No.
:Default: 0, no splitting at the pool creation time.
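Putting the parameters together, hypothetical invocations might look like
this (pool names, placement group counts, and the profile name are
illustrative)::

    # A replicated pool with 128 placement groups:
    ceph osd pool create rep-pool 128 128 replicated

    # An erasure-coded pool using the default erasure code profile:
    ceph osd pool create ec-pool 128 128 erasure default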
Associate Pool to Application
=============================

Pools need to be associated with an application before use. Pools that will be
used with CephFS or pools that are automatically created by RGW are
automatically associated. Pools that are intended for use with RBD should be
initialized using the ``rbd`` tool (see `Block Device Commands`_ for more
information).

For other cases, you can manually associate a free-form application name to
a pool::

    ceph osd pool application enable {pool-name} {application-name}

.. note:: CephFS uses the application name ``cephfs``, RBD uses the
   application name ``rbd``, and RGW uses the application name ``rgw``.
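For example, with an illustrative pool and application name::

    ceph osd pool application enable app-pool myapp
    ceph osd pool application get app-pool    # shows the enabled applications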
Set Pool Quotas
===============
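Quotas are set per pool with ``ceph osd pool set-quota``. A hypothetical
example, capping an illustrative pool at 10,000 objects::

    ceph osd pool set-quota data-pool max_objects 10000

    # Setting a quota value to 0 removes it:
    ceph osd pool set-quota data-pool max_objects 0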
See `Monitor Configuration`_ for more information.
.. _Monitor Configuration: ../../configuration/mon-config-ref
If you created your own rules for a pool you created, you should consider
removing them when you no longer need your pool::

    ceph osd pool get {pool-name} crush_rule

If the rule was "123", for example, you can check the other pools like so::

    ceph osd dump | grep "^pool" | grep "crush_rule 123"

If no other pools use that custom rule, then it's safe to delete that
rule from the cluster.
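If the checks above come back empty, the unused rule can then be removed by
name. The rule name here is illustrative; ``ceph osd crush rule ls`` shows
the actual names in your cluster::

    ceph osd crush rule ls
    ceph osd crush rule rm my-custom-rule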
If you created users with permissions strictly for a pool that no longer
exists, you should consider deleting those users too::

    ceph auth ls | grep -C 5 {pool-name}
    ceph auth del {user}
You may set values for the following keys:
.. _compression_algorithm:

``compression_algorithm``

:Description: Sets the inline compression algorithm to use for underlying BlueStore.
              This setting overrides the `global setting <rados/configuration/bluestore-config-ref/#inline-compression>`_ of ``bluestore compression algorithm``.

:Type: String
:Valid Settings: ``lz4``, ``snappy``, ``zlib``, ``zstd``

``compression_mode``

:Description: Sets the policy for the inline compression algorithm for underlying BlueStore.
              This setting overrides the `global setting <rados/configuration/bluestore-config-ref/#inline-compression>`_ of ``bluestore compression mode``.

:Type: String
:Valid Settings: ``none``, ``passive``, ``aggressive``, ``force``

``compression_min_blob_size``

:Description: Chunks smaller than this are never compressed.
              This setting overrides the `global setting <rados/configuration/bluestore-config-ref/#inline-compression>`_ of ``bluestore compression min blob *``.

:Type: Unsigned Integer

``compression_max_blob_size``

:Description: Chunks larger than this are broken up into smaller blobs of at
              most ``compression_max_blob_size`` before being compressed.

:Type: Unsigned Integer
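These keys are set per pool with ``ceph osd pool set``. A hypothetical
example (the pool name and values are illustrative)::

    ceph osd pool set data-pool compression_algorithm zstd
    ceph osd pool set data-pool compression_mode aggressive
    ceph osd pool set data-pool compression_min_blob_size 131072
    ceph osd pool get data-pool compression_mode    # read a value back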
+
.. _size:
``size``
:Type: Integer
:Valid Range: Equal to or less than ``pg_num``.
.. _crush_rule:

``crush_rule``

:Description: The rule to use for mapping object placement in the cluster.

:Type: Integer
.. _allow_ec_overwrites:
:Valid Range: Equal to or less than ``pg_num``.

``crush_rule``

:Description: see crush_rule_
``hit_set_type``
.. _Bloom Filter: http://en.wikipedia.org/wiki/Bloom_filter
.. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
.. _Erasure Coding with Overwrites: ../erasure-code#erasure-coding-with-overwrites
.. _Block Device Commands: ../../../rbd/rados-rbd-cmds/#create-a-block-device-pool