Ceph RADOS Block Devices (RBD)
------------------------------
include::attributes.txt[]

Storage pool type: `rbd`

http://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementations available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the
same node is possible.

Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs.

pool::

Ceph pool name.

username::

RBD user ID.

krbd::

Access rbd through the krbd kernel module. This is required if you
want to use the storage for containers.

.Configuration Example ('/etc/pve/storage.cfg')
----
rbd: ceph3
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph3
        content images
        username admin
----
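
The `krbd` option matters when the storage should also hold container
volumes. A second, hypothetical storage entry (ID `ceph-ct`, reusing the
same pool and monitors as above) could then look like this:

----
rbd: ceph-ct
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph3
        content rootdir
        krbd
        username admin
----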

TIP: You can use the 'rbd' utility to do low-level management tasks.
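
For example, a few common inspection commands (these require a reachable
Ceph cluster; the pool name `ceph3` and image name `vm-100-disk-1` are
placeholders for your own setup):

----
rbd ls -p ceph3                  # list all images in the pool
rbd info ceph3/vm-100-disk-1     # show size, striping and enabled features
rbd snap ls ceph3/vm-100-disk-1  # list snapshots of an image
----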

Authentication
~~~~~~~~~~~~~~

If you use cephx authentication, you need to copy the keyfile from
Ceph to the {pve} host.

Create the directory '/etc/pve/priv/ceph' with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.
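
The naming rule can be sketched as a small shell fragment; the storage ID
`ceph3` and the source host `cephserver` are assumptions that must match
your own configuration:

----
# Storage ID as defined in /etc/pve/storage.cfg (hypothetical example)
STORAGE_ID=ceph3

# The keyring must land at exactly this path for the storage to find it
DEST=/etc/pve/priv/ceph/${STORAGE_ID}.keyring
echo "$DEST"

# Run as root on the node ("cephserver" is a placeholder hostname):
# scp cephserver:/etc/ceph/ceph.client.admin.keyring "$DEST"
----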

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block level storage, and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

ifdef::wiki[]

See Also
~~~~~~~~

* link:/index.php/Storage[Storage]

endif::wiki[]
98 |