[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

http://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self-healing
* no single point of failure
* scalable to the exabyte level
* kernel and user-space implementations available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the
same node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd`-specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.

krbd::

Enforce access to RADOS block devices through the `krbd` kernel module. Optional.

NOTE: Containers will use `krbd` independently of the option value.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----

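If you prefer the command line, a storage like the example above can also be
added with `pvesm`. A minimal sketch, where the option names mirror the
properties listed in this section:

----
pvesm add rbd ceph-external --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
        --pool ceph-external --content images --username admin
----
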
TIP: You can use the `rbd` utility to do low-level management tasks.

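For instance, to list all images in the pool from the example above and show
details for one of them (the image name `vm-100-disk-1` is only an
illustration):

----
rbd ls ceph-external
rbd info ceph-external/vm-100-disk-1
----
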
Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyfile from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.

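For the `ceph-external` storage from the configuration example above, the
keyring would thus end up at:

 /etc/pve/priv/ceph/ceph-external.keyring
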
If Ceph is installed locally on the PVE cluster, this is done automatically by
'pveceph' or in the GUI.

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

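As a quick check that the backend works end to end, you can allocate a test
volume and list the storage contents with `pvesm` (the VM ID `100` and the
volume name are placeholders):

----
pvesm alloc ceph-external 100 vm-100-disk-1 4G
pvesm list ceph-external
----
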
ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]