:fn-ceph-user-mgmt: footnote:cephusermgmt[Ceph user management {cephdocs-url}/rados/operations/user-management/]
[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
Storage pool type: `rbd`

https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement a feature rich block level
storage, and you get the following advantages:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
{pve} cluster.
pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the {pve} cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.
krbd::

Enforce access to rados block devices through the krbd kernel module. Optional.

NOTE: Containers will use `krbd` independent of the option value.
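
The options above come together in an entry in `/etc/pve/storage.cfg`. The
following is a minimal sketch of such an entry for an external cluster; the
storage name, monitor addresses, pool, and user are placeholder values:

----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool rbd
        content images
        username admin
----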

Authentication
~~~~~~~~~~~~~~

NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
automatically when adding the storage.

If you use `cephx` authentication, which is enabled by default, you need to
provide the keyring from the external Ceph cluster.
To configure the storage via the CLI, you first need to make the file
containing the keyring available. One way is to copy the file from the external
Ceph cluster directly to one of the {pve} nodes. The following example will
copy it to the `/root` directory of the node on which we run it:

----
# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
----

Then use the `pvesm` CLI tool to configure the external RBD storage. Use the
`--keyring` parameter, which needs to be a path to the keyring file that you
copied. For example:

----
# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
----

When configuring an external RBD storage via the GUI, you can copy and paste
the keyring into the appropriate field.

The keyring will be stored at

----
# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
----

TIP: Creating a keyring with only the needed capabilities is recommended when
connecting to an external cluster. For further information on Ceph user
management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/operations/user-management/[Ceph User Management]]
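
For example, assuming a pool named `rbd` and a user named `pve` (both
placeholder names), a keyring limited to RBD access on that pool could be
created on the external cluster with a command along these lines:

----
# ceph auth get-or-create client.pve mon 'profile rbd' osd 'profile rbd pool=rbd' -o /root/rbd.keyring
----

Such a user would then be referenced in the storage configuration as
`username pve`, without the `client.` prefix.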
Storage Features
~~~~~~~~~~~~~~~~