[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the
same node is possible.
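
For such a hyper-converged setup, the `pveceph` tool can install and
configure Ceph directly on the {pve} nodes. A minimal sketch, assuming a
dedicated `10.10.10.0/24` network for Ceph traffic (subcommand names may
differ slightly between {pve} releases):

----
# install the Ceph packages on this node
pveceph install

# write an initial Ceph configuration using the given cluster network
pveceph init --network 10.10.10.0/24

# create a monitor on this node
pveceph createmon
----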

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.

krbd::

Enforce access to rados block devices through the krbd kernel module. Optional.

NOTE: Containers will use `krbd` independent of the option value.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----

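
For comparison, a storage entry for a hyper-converged cluster, where Ceph
runs on the {pve} nodes themselves, usually needs neither `monhost` nor
`username`. The storage ID and pool name below are only examples:

.Configuration Example for a local Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-vm
        pool ceph-vm
        content images
        krbd 0
----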

TIP: You can use the `rbd` utility to do low-level management tasks.

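
For example, the following `rbd` commands list the images in a pool and
show the metadata of a single image. Pool and image names are placeholders:

----
# list all RBD images in the given pool
rbd ls --pool ceph-external

# show size, striping and feature information of one image
rbd info ceph-external/vm-100-disk-1
----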

Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyfile from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.
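
For reference, such a keyring is a small INI-style text file. The key below
is only a placeholder for the base64-encoded secret:

----
[client.admin]
        key = <base64-encoded secret>
----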

If Ceph is installed locally on the PVE cluster, this is done automatically by
'pveceph' or in the GUI.

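
Recent {pve} versions can also add an external RBD storage, including its
keyring, in one step on the command line. This assumes your `pvesm` version
already supports the `keyring` parameter; the values are only examples:

----
pvesm add rbd ceph-external --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
    --pool ceph-external --content images --keyring /root/rbd.keyring
----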

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

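
As a quick functional test, you can allocate, list and remove a volume on
the storage manually. The storage ID `ceph-external` and VMID `100` are
only examples:

----
# allocate a 4 GiB raw volume for VM 100
pvesm alloc ceph-external 100 vm-100-disk-1 4G

# list all volumes on that storage
pvesm list ceph-external

# remove the test volume again
pvesm free ceph-external:vm-100-disk-1
----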

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]
