[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich, block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self-healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementations available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the
same node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd`-specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.

krbd::

Enforce access to RADOS block devices through the krbd kernel module. Optional.

NOTE: Containers will use `krbd` independently of the option value.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----
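
If Ceph runs directly on the {pve} nodes instead, `monhost` and `username` can
be left out, since the local cluster configuration is used. A minimal sketch of
such an entry follows; the storage ID `local-rbd` and the pool name `rbd` are
only placeholders, and the optional `krbd` line is shown just to illustrate the
property:

----
rbd: local-rbd
        pool rbd
        content images rootdir
        krbd 0
----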

TIP: You can use the `rbd` utility to do low-level management tasks.
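
For example, assuming the node can already reach the cluster (via a local
`/etc/ceph/ceph.conf` or explicit command line options), images can be
inspected like this; the pool and image names are just placeholders:

----
# list all images in a pool
rbd ls -p ceph-external

# show size and feature details of a single image
rbd info ceph-external/vm-100-disk-0

# show provisioned versus actually used space
rbd du ceph-external/vm-100-disk-0
----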

Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyfile from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.
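
Once the keyring is in place, you can check that the node reaches the external
cluster, for example by listing the pool with the monitors and credentials from
the storage definition above. This is a sketch using the example values, which
stand in for your own setup:

----
rbd -m 10.1.1.20,10.1.1.21,10.1.1.22 --id admin \
    --keyring /etc/pve/priv/ceph/ceph-external.keyring \
    ls ceph-external
----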

If Ceph is installed locally on the PVE cluster, this is done automatically by
'pveceph' or in the GUI.

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage, and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

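Snapshots and clones of guest disks on this storage correspond to RBD snapshots
and copy-on-write clones. The low-level equivalent looks roughly like the
following sketch; {pve} performs these operations for you, and the pool and
image names are placeholders:

----
# create and protect a snapshot of a disk image
rbd snap create ceph-external/vm-100-disk-0@snap1
rbd snap protect ceph-external/vm-100-disk-0@snap1

# create a thin, copy-on-write clone from the protected snapshot
rbd clone ceph-external/vm-100-disk-0@snap1 ceph-external/vm-101-disk-0
----
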
ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]