[[storage_cephfs]]
Ceph Filesystem (CephFS)
------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: CephFS
endif::wiki[]

Storage pool type: `cephfs`

CephFS implements a POSIX-compliant filesystem using a http://ceph.com[Ceph]
storage cluster to store its data. As CephFS builds on Ceph, it shares most of
its properties. This includes redundancy, scalability, self-healing and high
availability.

TIP: {pve} can xref:chapter_pveceph[manage Ceph setups], which makes
configuring a CephFS storage easier. As recent hardware has plenty of CPU power
and RAM, running storage services and VMs on the same node is possible without
a big performance impact.

[[storage_cephfs_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`, `disable`,
`content`, and the following `cephfs` specific properties:

monhost::

List of monitor daemon addresses. Optional, only needed if Ceph is not running
on the {pve} cluster.

path::

The local mount point. Optional, defaults to `/mnt/pve/<STORAGE_ID>/`.

username::

Ceph user id. Optional, only needed if Ceph is not running on the {pve}
cluster, where it defaults to `admin`.

subdir::

CephFS subdirectory to mount. Optional, defaults to `/`.

fuse::

Access CephFS through FUSE, instead of the kernel client. Optional, defaults
to `0`.

.Configuration example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
----

NOTE: Don't forget to set up the client secret key file if cephx was not
turned off.

Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, which is enabled by default, you need to
copy the secret from your external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the secret

 scp <cephserver>:/etc/ceph/cephfs.secret /etc/pve/priv/ceph/<STORAGE_ID>.secret

The secret must be named to match your `<STORAGE_ID>`. Copying the secret
generally requires root privileges. The file must only contain the secret key
itself, as opposed to the `rbd` backend, which also contains a
`[client.userid]` section.

If Ceph is installed locally on the {pve} cluster, i.e., set up with `pveceph`,
this is done automatically.

Storage Features
~~~~~~~~~~~~~~~~

The `cephfs` backend is a POSIX-compliant filesystem on top of a Ceph cluster.

.Storage features for backend `cephfs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|vztmpl iso backup |none |yes |yes^[1]^ |no
|==============================================================================
^[1]^ While there are no known bugs, snapshots cannot be guaranteed to be
stable yet, as they lack sufficient testing.

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]
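For convenience, the external CephFS storage from the configuration example
above can also be added on the command line with `pvesm`. The following is a
minimal sketch, assuming the `pvesm add` option names mirror the
`storage.cfg` properties shown earlier; verify the exact syntax with
`pvesm help add` on your installation.

----
# add the external CephFS storage from the configuration example above
pvesm add cephfs cephfs-external \
    --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
    --content backup \
    --username admin
----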