[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

http://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty of
CPU power and RAM, so running storage services and VMs on the same
node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd`-specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.

krbd::

Access rbd through the krbd kernel module. Optional.

NOTE: Irrespective of the krbd option, containers always use krbd. You
can choose the access method used for VMs by setting the krbd option.
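
For example, a storage definition along these lines would make VM disks
use the kernel module as well, a minimal sketch in which the storage and
pool names are placeholders:

----
rbd: ceph-krbd
        pool ceph-krbd
        content images
        krbd 1
----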

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----

TIP: You can use the `rbd` utility to do low-level management tasks.
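
For instance, assuming the `ceph-external` pool from the example above
and a disk image named `vm-100-disk-1` (both placeholders), you could
inspect it like this:

----
# list all images in the pool
rbd ls ceph-external

# show size and feature details of one image
rbd info ceph-external/vm-100-disk-1

# list the snapshots of that image
rbd snap ls ceph-external/vm-100-disk-1
----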

Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyfile from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.
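
As a concrete sketch, for the `ceph-external` storage defined above the
copied file would be named as follows (`cephserver` remains a
placeholder for one of your Ceph hosts):

 scp cephserver:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-external.keyring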

If Ceph is installed locally on the PVE cluster, this is done automatically by
'pveceph' or in the GUI.

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend provides block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================
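
As a usage sketch, volumes on such a storage can be managed with the
generic `pvesm` tool; the storage name, VMID and image name below are
placeholders:

----
# allocate a 4 GiB raw image for VM 100
pvesm alloc ceph-external 100 vm-100-disk-1 4G

# list the volumes on that storage
pvesm list ceph-external

# remove the volume again
pvesm free ceph-external:vm-100-disk-1
----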

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]