:fn-ceph-user-mgmt: footnote:cephusermgmt[Ceph user management {cephdocs-url}/rados/operations/user-management/]
[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement a feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the same
node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
{pve} cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the {pve} cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.

krbd::

Enforce access to rados block devices through the krbd kernel module. Optional.

NOTE: Containers will use `krbd` independent of the option value.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----

TIP: You can use the `rbd` utility to do low-level management tasks.
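
For example, assuming a pool named `ceph-external` and a hypothetical image
name `vm-100-disk-0`, you could list and inspect images like this (these
commands need a reachable Ceph cluster and a valid keyring):

----
# rbd ls ceph-external
# rbd info ceph-external/vm-100-disk-0
# rbd du ceph-external/vm-100-disk-0
----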

Authentication
~~~~~~~~~~~~~~

NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
automatically when adding the storage.

If you use `cephx` authentication, which is enabled by default, you need to
provide the keyring from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file
containing the keyring available. One way is to copy the file from the external
Ceph cluster directly to one of the {pve} nodes. The following example will
copy it to the `/root` directory of the node on which we run it:

----
# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
----

Then use the `pvesm` CLI tool to configure the external RBD storage. Use the
`--keyring` parameter, which needs to be a path to the keyring file that you
copied. For example:

----
# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
----

When configuring an external RBD storage via the GUI, you can copy and paste
the keyring into the appropriate field.

The keyring will be stored at

----
# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
----

TIP: Creating a keyring with only the needed capabilities is recommended when
connecting to an external cluster. For further information on Ceph user
management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/operations/user-management/[Ceph User Management]]
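
On the external cluster, such a restricted keyring could be created with
Ceph's built-in `rbd` capability profiles. The user name `client.pve` is a
placeholder, as is the pool name:

----
# ceph auth get-or-create client.pve mon 'profile rbd' osd 'profile rbd pool=<pool>'
----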

Ceph client configuration (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connecting to an external Ceph storage doesn't always allow setting
client-specific options in the config DB on the external cluster. You can add a
`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
the storage.

The ceph.conf needs to have the same name as the storage.

----
# /etc/pve/priv/ceph/<STORAGE_ID>.conf
----

See the RBD configuration reference footnote:[RBD configuration reference
{cephdocs-url}/rbd/rbd-config-ref/] for possible settings.
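
For example, a `<STORAGE_ID>.conf` tuning the client-side RBD cache could look
like this. The option names come from the RBD configuration reference; the
values are only illustrative:

----
[client]
rbd_cache = true
rbd_cache_size = 67108864
----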

NOTE: Do not change these settings lightly. {PVE} merges the
<STORAGE_ID>.conf with the storage configuration.


Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block level storage, and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]