Local pool plugin
=================

The *localpool* plugin can automatically create RADOS pools that are
localized to a subset of the overall cluster. For example, by default, it
will create a pool for each distinct rack in the cluster. This can be
useful for deployments that want to distribute some data locally as well
as globally across the cluster.

Enabling
--------

The *localpool* module is enabled with::

  ceph mgr module enable localpool

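After enabling the module, you can confirm that it is active by listing
the enabled manager modules::

  ceph mgr module ls

The *localpool* entry should appear in the ``enabled_modules`` list of
the output.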
Configuring
-----------

The *localpool* module understands the following options:

* **subtree** (default: `rack`): which CRUSH subtree type the module
  should create a pool for.
* **failure_domain** (default: `host`): which failure domain data
  replicas should be separated across.
* **pg_num** (default: `128`): number of PGs to create for each pool.
* **num_rep** (default: `3`): number of replicas for each pool.
  (Currently, pools are always replicated.)
* **min_size** (default: none): value to set ``min_size`` to (left at
  Ceph's default if this option is not set).
* **prefix** (default: `by-$subtreetype-`): prefix for the pool name.

These options are set via the config-key interface. For example, to
change the replication level to 2x with only 64 PGs::

  ceph config-key set mgr/localpool/num_rep 2
  ceph config-key set mgr/localpool/pg_num 64
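As a fuller sketch, the same interface can change which CRUSH subtree
the module keys pools off, and the pool-name prefix. For example, to
create one pool per datacenter rather than per rack, with a custom
prefix::

  ceph config-key set mgr/localpool/subtree datacenter
  ceph config-key set mgr/localpool/prefix local-

With the default settings, a cluster containing racks named (for
illustration) ``rack1`` and ``rack2`` would be expected to get pools
named ``by-rack-rack1`` and ``by-rack-rack2``, following the
``by-$subtreetype-`` prefix convention described above.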