[[storage_cephfs]]
Ceph Filesystem (CephFS)
------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: CephFS
endif::wiki[]

Storage pool type: `cephfs`

CephFS implements a POSIX-compliant filesystem, using a https://ceph.com[Ceph]
storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
its properties. This includes redundancy, scalability, self-healing, and high
availability.

TIP: {pve} can xref:chapter_pveceph[manage Ceph setups], which makes
configuring a CephFS storage easier. As modern hardware offers a lot of
processing power and RAM, running storage services and VMs on the same node is
possible without a significant performance impact.

To use the CephFS storage plugin, you must replace the stock Debian Ceph client
by adding our xref:sysadmin_package_repositories_ceph[Ceph repository].
Once added, run `apt update`, followed by `apt dist-upgrade`, to get the newest
packages.
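
For example, with the repository already added, updating the node could look
like this (a minimal sketch; the exact set of upgraded packages will vary):

----
# apt update
# apt dist-upgrade
----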

WARNING: Please ensure that there are no other Ceph repositories configured.
Otherwise the installation will fail or there will be mixed package versions on
the node, leading to unexpected behavior.
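
One way to check for other configured Ceph repositories is to search the APT
source lists (a simple sketch; adjust the paths if your repository definitions
live elsewhere):

----
# grep -ri ceph /etc/apt/sources.list /etc/apt/sources.list.d/
----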

[[storage_cephfs_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, as well as the following `cephfs` specific properties:

fs-name::

Name of the Ceph FS.

monhost::

List of monitor daemon addresses. Optional, only needed if Ceph is not running
on the {pve} cluster.

path::

The local mount point. Optional, defaults to `/mnt/pve/<STORAGE_ID>/`.

username::

Ceph user id. Optional, only needed if Ceph is not running on the {pve} cluster,
where it defaults to `admin`.

subdir::

CephFS subdirectory to mount. Optional, defaults to `/`.

fuse::

Access CephFS through FUSE, instead of the kernel client. Optional, defaults
to `0`.

.Configuration example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
        fs-name cephfs
----
NOTE: Don't forget to set up the client's secret key file, if cephx was not
disabled.
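
When the Ceph cluster is managed by {pve} itself, `monhost` and `username` can
be omitted, as described above. A minimal sketch of such an entry (the storage
ID and the content types are only examples):

----
cephfs: cephfs-local
        content backup iso vztmpl
        fs-name cephfs
----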

Authentication
~~~~~~~~~~~~~~

NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
automatically when adding the storage.

If you use `cephx` authentication, which is enabled by default, you need to
provide the secret from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file
containing the secret available. One way is to copy the file from the external
Ceph cluster directly to one of the {pve} nodes. The following example will
copy it to the `/root` directory of the node on which we run it:

----
# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret
----

Then use the `pvesm` CLI tool to configure the external CephFS storage, using
the `--keyring` parameter, which needs to be a path to the secret file that you
copied. For example:

----
# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret
----

When configuring an external CephFS storage via the GUI, you can copy and paste
the secret into the appropriate field.

The secret is only the key itself, as opposed to the `rbd` backend, whose
keyring also contains a `[client.userid]` section.
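
For illustration, a minimal sketch of such a secret file's contents, with a
placeholder instead of a real key (the file holds nothing but the
base64-encoded key on a single line):

----
<base64-encoded cephx key>
----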

The secret will be stored at

----
# /etc/pve/priv/ceph/<STORAGE_ID>.secret
----

A secret can be retrieved from the Ceph cluster (as Ceph admin) by issuing the
command below, where `userid` is the client ID that has been configured to
access the cluster. For further information on Ceph user management, see the
Ceph docs.footnoteref:[cephusermgmt]

----
# ceph auth get-key client.userid > cephfs.secret
----

Storage Features
~~~~~~~~~~~~~~~~

The `cephfs` backend is a POSIX-compliant filesystem on top of a Ceph cluster.

.Storage features for backend `cephfs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|vztmpl iso backup snippets |none |yes |yes^[1]^ |no
|==============================================================================
^[1]^ While no known bugs exist, snapshots are not yet guaranteed to be stable,
as they lack sufficient testing.

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]