Local Pool Module
=================

The *localpool* module can automatically create RADOS pools that are
localized to a subset of the overall cluster. For example, by default, it will
create a pool for each distinct ``rack`` in the cluster. This can be useful for
deployments where it is desirable to distribute some data locally and other data
globally across the cluster. One use-case is measuring performance and testing
behavior of specific drive, NIC, or chassis models in isolation.

Enabling
--------

The *localpool* module is enabled with::

   ceph mgr module enable localpool

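After enabling, you can confirm that the module is active by checking the
list of enabled manager modules (the exact output format varies by Ceph
release)::

   ceph mgr module ls
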
Configuring
-----------

The *localpool* module understands the following options:

* **subtree** (default: `rack`): which CRUSH subtree type the module
  should create a pool for.
* **failure_domain** (default: `host`): what failure domain we should
  separate data replicas across.
* **pg_num** (default: `128`): number of PGs to create for each pool.
* **num_rep** (default: `3`): number of replicas for each pool.
  (Currently, pools are always replicated.)
* **min_size** (default: none): value to set ``min_size`` to (unchanged from
  Ceph's default if this option is not set).
* **prefix** (default: `by-$subtreetype-`): prefix for the pool name.

These options are set via the config-key interface. For example, to
change the replication level to 2x with only 64 PGs::

   ceph config set mgr mgr/localpool/num_rep 2
   ceph config set mgr mgr/localpool/pg_num 64
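As a sketch of the result: with the default ``subtree`` (`rack`) and
``prefix`` (`by-$subtreetype-`) settings, each rack gets its own pool whose
name combines the prefix with the rack's name. Assuming a CRUSH map that
contains a rack named ``rack1`` (a hypothetical name for illustration), the
corresponding pool would appear in the pool listing as something like::

   $ ceph osd pool ls
   by-rack-rack1
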