Storage pool type: `cephfs`
-http://ceph.com[Ceph] is a distributed object store and file system designed to
-provide excellent performance, reliability and scalability. CephFS implements a
-POSIX-compliant filesystem storage, with the following advantages:
-
-* thin provisioning
-* distributed and redundant (striped over multiple OSDs)
-* snapshot capabilities
-* self healing
-* no single point of failure
-* scalable to the exabyte level
-* kernel and user space implementation available
-
-NOTE: For smaller deployments, it is also possible to run Ceph
-services directly on your {pve} nodes. Recent hardware has plenty
-of CPU power and RAM, so running storage services and VMs on same node
-is possible.
+CephFS implements a POSIX-compliant filesystem, using a http://ceph.com[Ceph]
+storage cluster to store its data. As CephFS builds upon Ceph, it shares most
+of its properties. This includes redundancy, scalability, self-healing, and
+high availability.
+
+TIP: {pve} can xref:chapter_pveceph[manage Ceph setups], which makes
+configuring a CephFS storage easier. As recent hardware has plenty of CPU
+power and RAM, running storage services and VMs on the same node is possible
+without a big performance impact.
+
+To use the CephFS storage plugin, you need to update the stock Debian Ceph
+client by adding our xref:sysadmin_package_repositories_ceph[Ceph repository].
+Once added, run `apt update` followed by `apt dist-upgrade` to get the newest
+packages.
+
+Make sure that there is no other Ceph repository configured; otherwise the
+installation will fail, or there will be mixed package versions on the node,
+leading to unexpected behavior.
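+To spot a conflicting repository before upgrading, you can search the
+standard Debian APT source locations for Ceph entries; a minimal sketch
+(the paths are the Debian defaults):

```shell
# List any APT source entries that mention Ceph, so conflicting
# repositories can be removed before upgrading (Debian default paths).
ceph_repos=$(grep -rsi ceph /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null || true)
if [ -z "$ceph_repos" ]; then
    echo "no Ceph repository entries found"
else
    printf '%s\n' "$ceph_repos"
fi
```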
[[storage_cephfs_config]]
Configuration
monhost::
-List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
-PVE cluster.
+List of monitor daemon addresses. Optional, only needed if Ceph is not running
+on the PVE cluster.
path::
username::
-Ceph user Id. Optional, only needed if Ceph is not running on the PVE cluster.
+Ceph user ID. Optional, only needed if Ceph is not running on the PVE
+cluster; defaults to `admin`.
subdir::
content backup
username admin
----
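+As an alternative to editing `/etc/pve/storage.cfg` directly, such a storage
+can also be added on the command line with `pvesm`. A sketch, assuming a
+hypothetical storage ID `cephfs-external` and made-up monitor addresses
+(run on a {pve} node):

```shell
pvesm add cephfs cephfs-external \
    --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
    --content backup \
    --username admin
```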
+NOTE: Don't forget to set up the client's secret key file, if `cephx` was not
+disabled.
Authentication
~~~~~~~~~~~~~~
-If you use `cephx` authentication, you need to copy the secret from your
-external Ceph cluster to a Proxmox VE host.
+If you use `cephx` authentication, which is enabled by default, you need to
+copy the secret from your external Ceph cluster to a Proxmox VE host.
Create the directory `/etc/pve/priv/ceph` with
Then copy the secret
- scp <cephserver>:/etc/ceph/cephfs.secret /etc/pve/priv/ceph/<STORAGE_ID>.secret
+ scp cephfs.secret <proxmox>:/etc/pve/priv/ceph/<STORAGE_ID>.secret
The secret must be named to match your `<STORAGE_ID>`. Copying the
secret generally requires root privileges. The file must only contain the
-secret itself, opposed to the `rbd` backend.
+secret key itself, as opposed to the `rbd` backend, which also contains a
+`[client.userid]` section.
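+Since the file must contain only the key, a quick sanity check can be
+sketched in shell. This helper is illustrative only, not part of {pve}; it
+uses a temporary stand-in file with a made-up placeholder key:

```shell
# Check that a CephFS secret file holds only the bare key, unlike an
# `rbd` keyring, which wraps the key in a [client.<userid>] section.
# Illustrative only: uses a temp file and a made-up placeholder key.
secret_file=$(mktemp)
printf 'AQBSdFhbAAAAABAAexampleplaceholder==\n' > "$secret_file"

if grep -q '^\[client\.' "$secret_file"; then
    result="keyring format - strip it down to the bare key"
else
    result="bare key - valid for a cephfs secret file"
fi
echo "$result"
rm -f "$secret_file"
```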
+
+A secret can be retrieved from the Ceph cluster (as Ceph admin) by issuing the
+following command. Replace `userid` with the actual client ID configured to
+access the cluster. For further Ceph user management, see the Ceph docs
+footnote:[Ceph user management http://docs.ceph.com/docs/luminous/rados/operations/user-management/].
+
+ ceph auth get-key client.userid > cephfs.secret
-If Ceph is installed locally on the PVE cluster, this is done automatically.
+If Ceph is installed locally on the PVE cluster, i.e., it was set up using
+`pveceph`, this is done automatically.
Storage Features
~~~~~~~~~~~~~~~~
.Storage features for backend `cephfs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
-|Content types |Image formats |Shared |Snapshots |Clones
-|vztmpl iso backup |none |yes |yes |no
+|Content types |Image formats |Shared |Snapshots |Clones
+|vztmpl iso backup snippets |none |yes |yes^[1]^ |no
|==============================================================================
+^[1]^ While snapshots have no known bugs, they cannot be guaranteed to be
+stable yet, as they lack testing.
ifdef::wiki[]