:fn-ceph-user-mgmt: footnote:cephusermgmt[Ceph user management {cephdocs-url}/rados/operations/user-management/]

[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
Storage pool type: `rbd`

https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph services
directly on your {pve} nodes. Recent hardware has a lot of CPU power and RAM,
so running storage services and VMs on the same node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
{pve} cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the {pve} cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.

krbd::

Enforce access to rados block devices through the krbd kernel module. Optional.

NOTE: Containers will use `krbd` independently of the option value.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----
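
The following is a minimal sketch of a storage entry for a Ceph cluster running
directly on the {pve} nodes (hyper-converged); the storage and pool names are
placeholders. No `monhost`, `username`, or keyring is needed in this case,
since the local cluster configuration and authentication are used:

.Configuration Example for a local Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-internal
        pool ceph-internal
        content images,rootdir
----
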
Authentication
~~~~~~~~~~~~~~

NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
automatically when adding the storage.

If you use `cephx` authentication, which is enabled by default, you need to
provide the keyring from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file
containing the keyring available. One way is to copy the file from the external
Ceph cluster directly to one of the {pve} nodes. The following example will
copy it to the `/root` directory of the node on which we run it:

----
# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
----

Then use the `pvesm` CLI tool to configure the external RBD storage; use the
`--keyring` parameter, which needs to be a path to the keyring file that you
copied. For example:

----
# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
----
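
Afterwards, you can verify that the new storage is active, for example with
`pvesm status`, which lists all configured storages along with their status and
usage:

----
# pvesm status
----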
+
+When configuring an external RBD storage via the GUI, you can copy and paste
+the keyring into the appropriate field.
+
+The keyring will be stored at
+
+----
+# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
+----

TIP: Creating a keyring with only the needed capabilities is recommended when
connecting to an external cluster. For further information on Ceph user
management, see the Ceph docs.{fn-ceph-user-mgmt}
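
As a sketch of such a restricted user, the following command, run on the
external cluster, creates a keyring whose capabilities are limited to RBD
access on a single pool; the user name `client.pve-storage` and the pool name
are placeholders:

----
# ceph auth get-or-create client.pve-storage mon 'profile rbd' osd 'profile rbd pool=<pool>'
----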

Ceph client configuration (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connecting to an external Ceph storage doesn't always allow setting
client-specific options in the config DB on the external cluster. You can add a
`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
the storage.

The `ceph.conf` needs to have the same name as the storage.

----
# /etc/pve/priv/ceph/<STORAGE_ID>.conf
----

See the RBD configuration reference footnote:[RBD configuration reference
{cephdocs-url}/rbd/rbd-config-ref/] for possible settings.
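
As an illustration, such a file could tune the RBD client cache for this
storage; the option values below are examples, not recommendations:

----
[client]
# enable the RBD client cache and cap its size (example values)
rbd cache = true
rbd cache size = 33554432
----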

NOTE: Do not change these settings lightly. {pve} merges the
`<STORAGE_ID>.conf` with the storage configuration.

Storage Features
~~~~~~~~~~~~~~~~