X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pve-storage-cephfs.adoc;h=45933f0007068a47f0b4e7323881c9c26e5b3474;hb=c6e098a291471715218db3edb6b90f09b3dd8f33;hp=b7f3f4d265eed2d57bc76b370f6ff6a90e7b559e;hpb=6a8897ca46be9afffe266a2cf422d7d98ad4b1e3;p=pve-docs.git

diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
index b7f3f4d..45933f0 100644
--- a/pve-storage-cephfs.adoc
+++ b/pve-storage-cephfs.adoc
@@ -18,6 +18,15 @@ configuring a CephFS storage easier.
 As recent hardware has plenty of CPU power and RAM, running storage services
 and VMs on the same node is possible without a big performance impact.
 
+To use the CephFS storage plugin, you need to update the stock Debian Ceph
+client. Add our xref:sysadmin_package_repositories_ceph[Ceph repository].
+Once added, run `apt update` followed by `apt dist-upgrade` to get the
+newest packages.
+
+Make sure that no other Ceph repository is configured, otherwise the
+installation will fail, or there will be mixed package versions on the
+node, leading to unexpected behavior.
+
 [[storage_cephfs_config]]
 Configuration
 ~~~~~~~~~~~~~
@@ -71,13 +80,20 @@ Create the directory `/etc/pve/priv/ceph` with
 
 Then copy the secret
 
-    scp :/etc/ceph/cephfs.secret /etc/pve/priv/ceph/.secret
+    scp cephfs.secret :/etc/pve/priv/ceph/.secret
 
 The secret must be named to match your ``. Copying the secret generally
 requires root privileges. The file must only contain the secret key itself,
 as opposed to the `rbd` backend, which also contains a `[client.userid]`
 section.
 
+A secret can be retrieved from the Ceph cluster (as a Ceph admin) by issuing
+the following command. Replace `userid` with the actual client ID configured
+to access the cluster. For further Ceph user management, see the Ceph docs
+footnote:[Ceph user management http://docs.ceph.com/docs/luminous/rados/operations/user-management/].
+
+    ceph auth get-key client.userid > cephfs.secret
+
 If Ceph is installed locally on the PVE cluster, i.e., set up with `pveceph`,
 this is done automatically.
@@ -89,8 +105,8 @@ The `cephfs` backend is a POSIX-compliant filesystem on top of a Ceph cluster.
 
 .Storage features for backend `cephfs`
 [width="100%",cols="m,m,3*d",options="header"]
 |==============================================================================
-|Content types |Image formats |Shared |Snapshots |Clones
-|vztmpl iso backup |none |yes |yes^[1]^ |no
+|Content types |Image formats |Shared |Snapshots |Clones
+|vztmpl iso backup snippets |none |yes |yes^[1]^ |no
 |==============================================================================
 
 ^[1]^ Snapshots, while they have no known bugs, cannot yet be guaranteed to be
 stable, as they lack testing.
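For reference, once the secret is in place under `/etc/pve/priv/ceph/`, an external CephFS storage can be defined in `/etc/pve/storage.cfg`. The following sketch shows what such an entry might look like; the storage ID `cephfs-external`, the monitor addresses, and the mount path are illustrative assumptions, not values taken from the diff above:

    cephfs: cephfs-external
            monhost 10.1.1.20 10.1.1.21 10.1.1.22
            path /mnt/pve/cephfs-external
            content backup vztmpl iso
            username admin

The storage ID (`cephfs-external` here) must match the name used for the secret file, i.e., `/etc/pve/priv/ceph/cephfs-external.secret`, and `username` must match the client whose key was exported with `ceph auth get-key`.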