X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pve-storage-cephfs.adoc;h=c10976bdac3578f73e2efb946621b47ad20a3e6c;hb=6577d36ec0fa692b1e2ba4fd40340ad888ee1abb;hp=96f4991169f57cb268e0a5587ffe96b78768c17f;hpb=fdbb2634fb74a10bf051e3e3bd341dfeaf3c0ce5;p=pve-docs.git

diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
index 96f4991..c10976b 100644
--- a/pve-storage-cephfs.adoc
+++ b/pve-storage-cephfs.adoc
@@ -8,27 +8,36 @@ endif::wiki[]
 
 Storage pool type: `cephfs`
 
-CephFS implements a POSIX-compliant filesystem using a http://ceph.com[Ceph]
-storage cluster to store its data. As CephFS builds on Ceph it shares most of
-its properties, this includes redundancy, scalability, self healing and high
+CephFS implements a POSIX-compliant filesystem, using a https://ceph.com[Ceph]
+storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
+its properties. This includes redundancy, scalability, self-healing, and high
 availability.
 
-TIP: {pve} can xref:chapter_pveceph[manage ceph setups], which makes
-configuring a CephFS storage easier. As recent hardware has plenty of CPU power
-and RAM, running storage services and VMs on same node is possible without a
-big performance impact.
+TIP: {pve} can xref:chapter_pveceph[manage Ceph setups], which makes
+configuring a CephFS storage easier. As modern hardware offers a lot of
+processing power and RAM, running storage services and VMs on the same node is
+possible without a significant performance impact.
+
+To use the CephFS storage plugin, you must replace the stock Debian Ceph client
+by adding our xref:sysadmin_package_repositories_ceph[Ceph repository].
+Once added, run `apt update`, followed by `apt dist-upgrade`, to get the newest
+packages.
+
+WARNING: Please ensure that there are no other Ceph repositories configured.
+Otherwise, the installation will fail, or there will be mixed package versions
+on the node, leading to unexpected behavior.
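As a sketch of the repository step the hunk above describes: the APT source entry plus the two update commands. The file path, Debian suite, and Ceph release name below are illustrative assumptions, not taken from this patch; use the repository definition from the linked {pve} documentation that matches your release.

```
# /etc/apt/sources.list.d/ceph.list -- hypothetical example entry;
# substitute the Debian suite and Ceph release that match your node
deb http://download.proxmox.com/debian/ceph-luminous stretch main

# afterwards, refresh the package index and upgrade:
#   apt update
#   apt dist-upgrade
```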
 [[storage_cephfs_config]]
 Configuration
 ~~~~~~~~~~~~~
 
 This backend supports the common storage properties `nodes`,
-`disable`, `content`, and the following `cephfs` specific properties:
+`disable`, `content`, as well as the following `cephfs` specific properties:
 
 monhost::
 
 List of monitor daemon addresses. Optional, only needed if Ceph is not running
-on the PVE cluster.
+on the {pve} cluster.
 
 path::
 
@@ -36,7 +45,7 @@ The local mount point. Optional, defaults to `/mnt/pve/<STORAGE_ID>/`.
 
 username::
 
-Ceph user id. Optional, only needed if Ceph is not running on the PVE cluster
+Ceph user id. Optional, only needed if Ceph is not running on the {pve} cluster,
 where it defaults to `admin`.
 
 subdir::
 
@@ -48,7 +57,7 @@ fuse::
 
 Access CephFS through FUSE, instead of the kernel client. Optional, defaults
 to `0`.
 
-.Configuration Example for a external Ceph cluster (`/etc/pve/storage.cfg`)
+.Configuration example for an external Ceph cluster (`/etc/pve/storage.cfg`)
 ----
 cephfs: cephfs-external
         monhost 10.1.1.20 10.1.1.21 10.1.1.22
@@ -56,42 +65,60 @@ cephfs: cephfs-external
         content backup
         username admin
 ----
 
-NOTE: Don't forget to setup the client secret key file if cephx was not turned
-off.
+NOTE: Don't forget to set up the client's secret key file if cephx was not
+disabled.
 
 
 Authentication
 ~~~~~~~~~~~~~~
 
-If you use the, by-default enabled, `cephx` authentication, you need to copy
-the secret from your external Ceph cluster to a Proxmox VE host.
+NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
+automatically when adding the storage.
 
-Create the directory `/etc/pve/priv/ceph` with
+If you use `cephx` authentication, which is enabled by default, you need to
+provide the secret from the external Ceph cluster.
 
- mkdir /etc/pve/priv/ceph
+To configure the storage via the CLI, you first need to make the file
+containing the secret available. One way is to copy the file from the external
+Ceph cluster directly to one of the {pve} nodes.
+The following example will
+copy it to the `/root` directory of the node on which we run it:
 
-Then copy the secret
+----
+# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret
+----
 
- scp cephfs.secret <proxmox>:/etc/pve/priv/ceph/<STORAGE_ID>.secret
+Then use the `pvesm` CLI tool to configure the external CephFS storage; use the
+`--keyring` parameter, which needs to be a path to the secret file that you
+copied. For example:
+
+----
+# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret
+----
 
-The secret must be named to match your `<STORAGE_ID>`. Copying the
-secret generally requires root privileges. The file must only contain the
-secret key itself, opposed to the `rbd` backend which also contains a
-`[client.userid]` section.
+When configuring an external CephFS storage via the GUI, you can copy and paste
+the secret into the appropriate field.
 
-A secret can be received from the ceph cluster (as ceph admin) by issuing the
-following command. Replace the `userid` with the actual client ID configured to
-access the cluster. For further ceph user managment see the Ceph docs
-footnote:[Ceph user management http://docs.ceph.com/docs/luminous/rados/operations/user-management/].
+The secret is only the key itself, as opposed to the `rbd` backend which also
+contains a `[client.userid]` section.
 
- ceph auth get-key client.userid > cephfs.secret
+The secret will be stored at
 
-If Ceph is installed locally on the PVE cluster, i.e., setup with `pveceph`,
-this is done automatically.
+----
+# /etc/pve/priv/ceph/<STORAGE_ID>.secret
+----
+
+A secret can be received from the Ceph cluster (as Ceph admin) by issuing the
+command below, where `userid` is the client ID that has been configured to
+access the cluster.
+For further information on Ceph user management, see the
+Ceph docs.footnoteref:[cephusermgmt]
+
+----
+# ceph auth get-key client.userid > cephfs.secret
+----
 
 
 Storage Features
 ~~~~~~~~~~~~~~~~
 
-The `cephfs` backend is a POSIX-compliant filesystem on top of a Ceph cluster.
+The `cephfs` backend is a POSIX-compliant filesystem, on top of a Ceph cluster.
 
 .Storage features for backend `cephfs`
 [width="100%",cols="m,m,3*d",options="header"]
@@ -99,8 +126,8 @@ The `cephfs` backend is a POSIX-compliant filesystem on top of a Ceph cluster.
 |Content types |Image formats |Shared |Snapshots |Clones
 |vztmpl iso backup snippets |none |yes |yes^[1]^ |no
 |==============================================================================
-^[1]^ Snapshots, while no known bugs, cannot be guaranteed to be stable yet, as
-they lack testing.
+^[1]^ While no known bugs exist, snapshots are not yet guaranteed to be stable,
+as they lack sufficient testing.
 
 ifdef::wiki[]
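Tying the feature table back to the configuration section: a hypothetical `/etc/pve/storage.cfg` entry that enables every content type the table lists. The storage ID, monitor addresses, and username are the illustrative values used earlier in this document, not values mandated by the patch.

```
cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        content backup,iso,vztmpl,snippets
        username admin
```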