Ceph RADOS Block Devices (RBD)
------------------------------

Storage pool type: `rbd`

http://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self-healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementations available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty of
CPU power and RAM, so running storage services and VMs on the same
node is possible.

Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs.

pool::

Ceph pool name.

username::

RBD user ID.

krbd::

Access rbd through the `krbd` kernel module. This is required if you
want to use the storage for containers.

.Configuration Example ('/etc/pve/storage.cfg')
----
rbd: ceph3
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph3
        content images
        username admin
----
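
Alternatively, the same storage can be defined on the command line
with `pvesm`. A minimal sketch, assuming the monitor addresses and
pool from the example above (the option names mirror the properties
listed earlier):

----
pvesm add rbd ceph3 --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --pool ceph3 --content images --username admin
----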

TIP: You can use the 'rbd' utility to do low-level management tasks.
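
For instance, to list the images in a pool and inspect one of them
(run on a host with access to the Ceph cluster; the image name
`vm-100-disk-1` is only an illustration):

----
rbd ls -p ceph3
rbd info ceph3/vm-100-disk-1
----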

Authentication
~~~~~~~~~~~~~~

If you use cephx authentication, you need to copy the keyfile from
your Ceph cluster to the {pve} host.

Create the directory '/etc/pve/priv/ceph' with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.
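
The copied file is a standard Ceph keyring. For a storage named
`ceph3` it ends up at '/etc/pve/priv/ceph/ceph3.keyring' and looks
roughly like this (the key itself is a placeholder for the secret
from your cluster):

----
[client.admin]
        key = <base64 key from your Ceph cluster>
----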

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage, and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================
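
As a rough illustration of how these map to the underlying RBD
operations (normally {pve} drives this for you; the image and
snapshot names below are made up), a clone is created from a
protected snapshot:

----
rbd snap create ceph3/vm-100-disk-1@snap1
rbd snap protect ceph3/vm-100-disk-1@snap1
rbd clone ceph3/vm-100-disk-1@snap1 ceph3/vm-101-disk-1
----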