Local Pool Module
=================

The *localpool* module can automatically create RADOS pools that are
localized to a subset of the overall cluster. For example, by default, it will
create a pool for each distinct ``rack`` in the cluster. This can be useful for
deployments where it is desirable to distribute some data locally and other data
globally across the cluster. One use-case is measuring performance and testing
behavior of specific drive, NIC, or chassis models in isolation.

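
To see which ``rack`` (or other subtree) buckets exist in the cluster's CRUSH
hierarchy, and hence how many pools the module would create, the CRUSH tree can
be inspected first; this uses a standard Ceph command and is not specific to
this module::

  ceph osd crush tree
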
Enabling
--------

The *localpool* module is enabled with::

  ceph mgr module enable localpool

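Once enabled, the module should appear in the manager's module list, and, as a
rough sketch assuming default settings and a cluster whose CRUSH map contains
racks named ``rack1`` and ``rack2``, pools with names along the lines of
``by-rack-rack1`` and ``by-rack-rack2`` (see the ``prefix`` option below) can
be expected to appear::

  ceph mgr module ls
  ceph osd pool ls
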
Configuring
-----------

The *localpool* module understands the following options:

* **subtree** (default: `rack`): which CRUSH subtree type the module
  should create a pool for.
* **failure_domain** (default: `host`): what failure domain we should
  separate data replicas across.
* **pg_num** (default: `128`): number of PGs to create for each pool.
* **num_rep** (default: `3`): number of replicas for each pool.
  (Currently, pools are always replicated.)
* **min_size** (default: none): value to set the pool's ``min_size`` to
  (left at Ceph's default if this option is not set).
* **prefix** (default: `by-$subtreetype-`): prefix for the pool name.

These options are set via the config-key interface. For example, to
change the replication level to 2x with only 64 PGs, ::

  ceph config set mgr mgr/localpool/num_rep 2
  ceph config set mgr mgr/localpool/pg_num 64
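
The other options listed above can be set the same way. As an illustrative
sketch (the option names are as documented above, but the values are only
examples), pools could be created per ``datacenter`` instead of per ``rack``,
with ``min_size`` also set::

  ceph config set mgr mgr/localpool/subtree datacenter
  ceph config set mgr mgr/localpool/min_size 2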