The *localpool* module can automatically create RADOS pools that are
localized to a subset of the overall cluster. For example, by default, it will
create a pool for each distinct ``rack`` in the cluster. This can be useful for
deployments where it is desirable to distribute some data locally and other
data globally across the cluster. One use case is measuring the performance and
testing the behavior of specific drive, NIC, or chassis models in isolation.
The *localpool* module is enabled with::

  ceph mgr module enable localpool
The *localpool* module understands the following options:
* **subtree** (default: ``rack``): which CRUSH subtree type the module
  should create a pool for.
* **failure_domain** (default: ``host``): which failure domain to
  separate data replicas across.
* **pg_num** (default: ``128``): number of PGs to create for each pool.
* **num_rep** (default: ``3``): number of replicas for each pool.
  (Currently, pools are always replicated.)
* **min_size** (default: none): value to set ``min_size`` to (left at
  Ceph's default if this option is not set).
* **prefix** (default: ``by-$subtreetype-``): prefix for the pool name.
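For example, to have the module build one pool per ``datacenter`` instead of
one per ``rack``, with replicas separated across racks (this assumes the
cluster's CRUSH map defines ``datacenter`` subtrees; the values shown are
illustrative)::

  # create one pool per CRUSH "datacenter" subtree instead of per rack
  ceph config set mgr mgr/localpool/subtree datacenter
  # separate data replicas across racks within each datacenter
  ceph config set mgr mgr/localpool/failure_domain rack
  # name the resulting pools "by-dc-<name>"
  ceph config set mgr mgr/localpool/prefix by-dc-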
These options are set via the config-key interface. For example, to
change the replication level to 2x with only 64 PGs::
  ceph config set mgr mgr/localpool/num_rep 2
  ceph config set mgr mgr/localpool/pg_num 64
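The pools that the module has created can then be inspected; with default
settings their names begin with ``by-rack-``::

  # list all pools; the module's pools appear with the configured prefix
  ceph osd pool ls
  # confirm the replica count of one of the generated pools
  # ("by-rack-<rackname>" here is a placeholder for an actual pool name)
  ceph osd pool get by-rack-<rackname> size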