[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement a feature rich block level
storage, and you get the following advantages:

* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on same node
is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.

krbd::

Enforce access to rados block devices through the `krbd` kernel module. Optional.

NOTE: Containers will use `krbd` regardless of the option value.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----
TIP: You can use the `rbd` utility to do low-level management tasks.

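A few common low-level tasks, assuming the `ceph-external` storage defined
above; the image name `vm-100-disk-1` is only an illustration:

----
# list all images in the pool
rbd ls -p ceph-external

# show size, format and features of a single image
rbd info -p ceph-external vm-100-disk-1

# list snapshots of that image
rbd snap ls -p ceph-external vm-100-disk-1
----

These commands talk to the cluster directly and bypass {pve}, so use them with
care on images that are in use.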
Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyfile from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

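Then copy the keyring into that directory; a sketch, where the Ceph host and
`<STORAGE_ID>` are placeholders you need to fill in:

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring
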
The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.

If Ceph is installed locally on the PVE cluster, this is done automatically by
`pveceph` or in the GUI.

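For a Ceph cluster running on the {pve} nodes themselves, a storage definition
needs neither `monhost` nor `username`; a minimal sketch, where the storage and
pool names are examples:

----
rbd: ceph-local
        pool rbd
        content images rootdir
----
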
Storage Features
~~~~~~~~~~~~~~~~

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]