[[storage_cephfs]]
Ceph Filesystem (CephFS)
------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: CephFS
endif::wiki[]

Storage pool type: `cephfs`

CephFS implements a POSIX-compliant filesystem, using a https://ceph.com[Ceph]
storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
its properties. This includes redundancy, scalability, self-healing, and high
availability.
TIP: {pve} can xref:chapter_pveceph[manage Ceph setups], which makes
configuring a CephFS storage easier. As modern hardware offers a lot of
processing power and RAM, running storage services and VMs on the same node is
possible without a significant performance impact.

To use the CephFS storage plugin, you must replace the stock Debian Ceph client
by adding our xref:sysadmin_package_repositories_ceph[Ceph repository].
Once added, run `apt update` followed by `apt dist-upgrade` to get the newest
packages.

WARNING: Please ensure that there are no other Ceph repositories configured.
Otherwise the installation will fail, or there will be mixed package versions
on the node, leading to unexpected behavior.

[[storage_cephfs_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, as well as the following `cephfs` specific properties:

monhost::

List of monitor daemon addresses. Optional, only needed if Ceph is not running
on the {pve} cluster.

path::

The local mount point. Optional, defaults to `/mnt/pve/<STORAGE_ID>/`.

username::

Ceph user id. Optional, only needed if Ceph is not running on the {pve} cluster,
where it defaults to `admin`.

subdir::

CephFS subdirectory to mount. Optional, defaults to `/`.

fuse::

Access CephFS through FUSE, instead of the kernel client. Optional, defaults
to `0`.
.Configuration example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
----

NOTE: Don't forget to set up the client's secret key file, if cephx was not
disabled.
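
The `subdir` and `fuse` properties described above can be combined in the same
way. The following sketch (storage name and subdirectory are hypothetical)
mounts only a subtree of the CephFS, using the FUSE client instead of the
kernel client:

----
cephfs: cephfs-templates
        path /mnt/pve/cephfs-templates
        content vztmpl iso
        subdir /templates
        fuse 1
----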

Authentication
~~~~~~~~~~~~~~

NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
automatically when adding the storage.

If you use `cephx` authentication, which is enabled by default, you need to
provide the secret from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file
containing the secret available. One way is to copy the file from the external
Ceph cluster directly to one of the {pve} nodes. The following example will
copy it to the `/root` directory of the node on which we run it:

----
# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret
----
Then use the `pvesm` CLI tool to configure the external CephFS storage, passing
the `--keyring` parameter, which needs to be a path to the secret file that you
copied. For example:

----
# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret
----

When configuring an external CephFS storage via the GUI, you can copy and paste
the secret into the appropriate field.
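
Either way, you can afterwards check whether the new storage is online with
`pvesm status`, using the same `<name>` placeholder as above:

----
# pvesm status --storage <name>
----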

The secret is only the key itself, as opposed to the `rbd` backend which also
contains a `[client.userid]` section.

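For illustration, with `<base64-key>` standing in for the actual key, the two
formats look like this:

----
# cephfs: the secret file holds the bare key only
<base64-key>

# rbd: the keyring additionally wraps the key in a section
[client.userid]
        key = <base64-key>
----
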
The secret will be stored at

----
/etc/pve/priv/ceph/<STORAGE_ID>.secret
----

A secret can be retrieved from the Ceph cluster (as Ceph admin) by issuing the
command below, where `userid` is the client ID that has been configured to
access the cluster. For further information on Ceph user management, see the
Ceph docs.footnoteref:[cephusermgmt]

----
# ceph auth get-key client.userid > cephfs.secret
----

Storage Features
~~~~~~~~~~~~~~~~

The `cephfs` backend is a POSIX-compliant filesystem, on top of a Ceph cluster.

.Storage features for backend `cephfs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|vztmpl iso backup snippets |none |yes |yes^[1]^ |no
|==============================================================================
^[1]^ While no known bugs exist, snapshots are not yet guaranteed to be stable,
as they lack sufficient testing.

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]