[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich, block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the
same node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.

krbd::

Enforce access to rados block devices through the krbd kernel module. Optional.

NOTE: Containers will use `krbd` independently of the option value.
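
For illustration, a minimal sketch of a storage definition that forces access
through the kernel module (the storage name `ceph-krbd` and the pool name `rbd`
are placeholders, not taken from this document):

----
rbd: ceph-krbd
        pool rbd
        content images
        krbd 1
----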

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----

TIP: You can use the `rbd` utility to do low-level management tasks.

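For example, a minimal sketch using two common `rbd` subcommands (the pool name
`ceph-external` matches the example above; the image name `vm-100-disk-1` is
hypothetical):

----
# list all images on the pool
rbd ls -p ceph-external

# show size, features and striping of a single image
rbd info ceph-external/vm-100-disk-1
----
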
Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyfile from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.

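For example, for the `ceph-external` storage defined above, the command would
be:

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-external.keyring
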
If Ceph is installed locally on the PVE cluster, this is done automatically by
'pveceph' or in the GUI.

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

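As a sketch of these capabilities with the low-level `rbd` utility (the pool
name `ceph-external` comes from the example above; the image and snapshot
names are hypothetical):

----
# create a snapshot of an image
rbd snap create ceph-external/vm-100-disk-1@mysnap

# protect the snapshot, then create a thin clone from it
rbd snap protect ceph-external/vm-100-disk-1@mysnap
rbd clone ceph-external/vm-100-disk-1@mysnap ceph-external/vm-100-clone
----

Within {pve} itself, snapshots and clones are normally triggered through the
GUI or the guest management tools rather than invoked manually like this.
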
ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]