[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

http://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the same
node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.

krbd::

Access rbd through the krbd kernel module. This is required if you want to
use the storage for containers (see the second configuration example below).

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----

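For a cluster where Ceph runs directly on the {pve} nodes, `monhost` and
`username` can be omitted. The following is a minimal sketch (the storage ID
`ceph-local` and the pool name `rbd` are only placeholders) that also enables
`krbd`, so the storage can be used for containers:

.Configuration Example for Ceph running on the {pve} nodes (`/etc/pve/storage.cfg`)
----
rbd: ceph-local
        pool rbd
        content images rootdir
        krbd 1
----
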
TIP: You can use the `rbd` utility to do low-level management tasks.

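For instance, assuming the node can already reach the Ceph cluster, listing
the images in a pool and inspecting one of them could look like this (the pool
and image name are placeholders):

----
# list all images in the pool 'ceph-external'
rbd ls ceph-external

# show size, features and snapshot information for one image
rbd info ceph-external/vm-100-disk-1
----
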
Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyring file from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.

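For example, for the `ceph-external` storage defined above, the keyring would
be stored as

 /etc/pve/priv/ceph/ceph-external.keyring
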
If Ceph is installed locally on the PVE cluster, this is done automatically by
'pveceph' or in the GUI.

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

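As a quick functional test, a volume can be allocated and listed with the
generic `pvesm` tool. A sketch, assuming the `ceph-external` storage from the
example above and VM ID 100 (both placeholders):

----
# allocate a 4 GiB raw volume for VM 100 on the RBD storage
pvesm alloc ceph-external 100 vm-100-disk-1 4G

# list all volumes on that storage
pvesm list ceph-external
----
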
ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]