[[storage_cephfs]]
Ceph Filesystem (CephFS)
------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: CephFS
endif::wiki[]

Storage pool type: `cephfs`

CephFS implements a POSIX-compliant filesystem, using a https://ceph.com[Ceph]
storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
its properties. This includes redundancy, scalability, self-healing, and high
availability.

TIP: {pve} can xref:chapter_pveceph[manage Ceph setups], which makes
configuring a CephFS storage easier. As modern hardware offers a lot of
processing power and RAM, running storage services and VMs on the same node is
possible without a significant performance impact.

To use the CephFS storage plugin, you must replace the stock Debian Ceph client
by adding our xref:sysadmin_package_repositories_ceph[Ceph repository].
Once added, run `apt update` followed by `apt dist-upgrade` to get the newest
packages.

WARNING: Please ensure that there are no other Ceph repositories configured.
Otherwise, the installation will fail or there will be mixed package versions
on the node, leading to unexpected behavior.
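
A quick way to check for conflicting entries is to search all APT source files
for Ceph (a sketch; adjust the paths if you keep repository definitions
elsewhere, for example in DEB822 `.sources` files):

----
# grep -rn ceph /etc/apt/sources.list /etc/apt/sources.list.d/
----

Any match other than the Ceph repository you just added should be removed or
disabled before upgrading.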

[[storage_cephfs_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, as well as the following `cephfs`-specific properties:

fs-name::

Name of the Ceph FS.

monhost::

List of monitor daemon addresses. Optional, only needed if Ceph is not running
on the {pve} cluster.

path::

The local mount point. Optional, defaults to `/mnt/pve/<STORAGE_ID>/`.

username::

Ceph user id. Optional, only needed if Ceph is not running on the {pve} cluster,
where it defaults to `admin`.

subdir::

CephFS subdirectory to mount. Optional, defaults to `/`.

fuse::

Access CephFS through FUSE, instead of the kernel client. Optional, defaults
to `0`.
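
The properties above can also be set when adding the storage on the command
line. A minimal sketch for a hyperconverged setup, where Ceph runs on the {pve}
cluster itself (the storage ID `cephfs-local` and the subdirectory are
examples, not defaults):

----
# pvesm add cephfs cephfs-local --content backup,iso,vztmpl --fs-name cephfs --subdir /pve-data
----

Since Ceph runs locally, `monhost` and `username` can be omitted here.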

.Configuration example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
cephfs: cephfs-external
	monhost 10.1.1.20 10.1.1.21 10.1.1.22
	path /mnt/pve/cephfs-external
	content backup
	username admin
	fs-name cephfs
----

NOTE: Don't forget to set up the client's secret key file, if cephx was not
disabled.

Authentication
~~~~~~~~~~~~~~

NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
automatically when adding the storage.

If you use `cephx` authentication, which is enabled by default, you need to
provide the secret from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file
containing the secret available. One way is to copy the file from the external
Ceph cluster directly to one of the {pve} nodes. The following example will
copy it to the `/root` directory of the node on which we run it:

----
# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret
----

Then use the `pvesm` CLI tool to configure the external CephFS storage. Pass
the `--keyring` parameter, which needs to be a path to the secret file that
you copied. For example:

----
# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret
----

When configuring an external CephFS storage via the GUI, you can copy and
paste the secret into the appropriate field.

The secret is only the key itself, as opposed to the `rbd` backend, which also
contains a `[client.userid]` section.
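
For comparison: a full Ceph keyring file, as used by the `rbd` backend, wraps
the key in a section header, while the CephFS secret file holds only the bare
key (the key below is a made-up placeholder):

----
[client.admin]
	key = AQD3Tf9iAAAAABAAxxxxxxxxxxxxxxxxxxxxxx==
----

For the `cephfs` secret file, only the `AQD3...==` value itself would be
stored.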

The secret will be stored at

----
# /etc/pve/priv/ceph/<STORAGE_ID>.secret
----

A secret can be retrieved from the Ceph cluster (as Ceph admin) by issuing the
command below, where `userid` is the client ID that has been configured to
access the cluster. For further information on Ceph user management, see the
Ceph docs.footnoteref:[cephusermgmt]

----
# ceph auth get-key client.userid > cephfs.secret
----
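
If no suitable client exists yet, one can be created on the Ceph side with
capabilities limited to the filesystem (a sketch; the client name `pve` and
filesystem name `cephfs` are examples):

----
# ceph fs authorize cephfs client.pve / rw
----

This prints a keyring for `client.pve`; its bare key can then be extracted
with `ceph auth get-key client.pve`.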

Storage Features
~~~~~~~~~~~~~~~~

The `cephfs` backend is a POSIX-compliant filesystem, on top of a Ceph cluster.

.Storage features for backend `cephfs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|vztmpl iso backup snippets |none |yes |yes^[1]^ |no
|==============================================================================
^[1]^ While no known bugs exist, snapshots are not yet guaranteed to be stable,
as they lack sufficient testing.
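
Since the backend supports the `backup` content type, a configured CephFS
storage can be used directly as a backup target (a sketch; the VMID `100` and
the storage ID `cephfs-external` are examples):

----
# vzdump 100 --storage cephfs-external --mode snapshot
----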

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]

145