[[storage_cephfs]]
Ceph Filesystem (CephFS)
------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: CephFS
endif::wiki[]

Storage pool type: `cephfs`

CephFS implements a POSIX-compliant filesystem, using a https://ceph.com[Ceph]
storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
its properties. This includes redundancy, scalability, self-healing, and high
availability.

TIP: {pve} can xref:chapter_pveceph[manage Ceph setups], which makes
configuring a CephFS storage easier. As modern hardware offers a lot of
processing power and RAM, running storage services and VMs on the same node is
possible without a significant performance impact.

To use the CephFS storage plugin, you must replace the stock Debian Ceph client
by adding our xref:sysadmin_package_repositories_ceph[Ceph repository].
Once added, run `apt update`, followed by `apt dist-upgrade`, to get the newest
packages.

WARNING: Please ensure that there are no other Ceph repositories configured.
Otherwise the installation will fail, or there will be mixed package versions on
the node, leading to unexpected behavior.
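
For example, on a node based on Debian Bookworm, the repository entry could
look like the following sketch; the Ceph release (`quincy`) and the suite are
only examples, use the ones matching your setup:

----
# /etc/apt/sources.list.d/ceph.list (example entry, adjust release and suite)
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
----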

[[storage_cephfs_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, as well as the following `cephfs` specific properties:

monhost::

List of monitor daemon addresses. Optional, only needed if Ceph is not running
on the PVE cluster.

path::

The local mount point. Optional, defaults to `/mnt/pve/<STORAGE_ID>/`.

username::

Ceph user id. Optional, only needed if Ceph is not running on the PVE cluster,
where it defaults to `admin`.

subdir::

CephFS subdirectory to mount. Optional, defaults to `/`.

fuse::

Access CephFS through FUSE, instead of the kernel client. Optional, defaults
to `0`.

.Configuration example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
----
NOTE: Don't forget to set up the client's secret key file if cephx was not
disabled.
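
If you only want to expose a subdirectory of the CephFS, or prefer the FUSE
client over the kernel mount, the optional properties from above can be set in
the same way. A minimal sketch, assuming a hypothetical storage ID
`cephfs-external-sub` and subdirectory `/backup`:

----
cephfs: cephfs-external-sub
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external-sub
        content backup
        subdir /backup
        fuse 1
        username admin
----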

Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, which is enabled by default, you need to copy
the secret from your external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the secret

 scp cephfs.secret <proxmox>:/etc/pve/priv/ceph/<STORAGE_ID>.secret

The secret must be renamed to match your `<STORAGE_ID>`. Copying the
secret generally requires root privileges. The file must only contain the
secret key itself, as opposed to the `rbd` backend which also contains a
`[client.userid]` section.
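
For illustration, a CephFS secret file contains nothing but the bare key,
while an `rbd` keyring file wraps the key in a section header (the key shown
here is made up):

.CephFS secret file (`<STORAGE_ID>.secret`)
----
AQD1uBhkAAAAABAAfakefakefakefakefakefakeQ==
----

.`rbd` keyring file, for comparison
----
[client.admin]
        key = AQD1uBhkAAAAABAAfakefakefakefakefakefakeQ==
----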

A secret can be retrieved from the Ceph cluster (as Ceph admin) by issuing the
command below, where `userid` is the client ID that has been configured to
access the cluster. For further information on Ceph user management, see the
Ceph docs footnote:[Ceph user management
{cephdocs-url}/rados/operations/user-management/].

 ceph auth get-key client.userid > cephfs.secret

If Ceph is installed locally on the PVE cluster, that is, it was set up using
`pveceph`, this is done automatically.
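
Instead of editing `/etc/pve/storage.cfg` by hand, the storage entry can also
be created from the command line once the secret is in place. A rough sketch
with `pvesm`, using the example values from above (the option names mirror the
properties listed earlier; check `man pvesm` for the exact syntax of your
version):

----
pvesm add cephfs cephfs-external \
    --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
    --content backup \
    --username admin
----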

Storage Features
~~~~~~~~~~~~~~~~

The `cephfs` backend is a POSIX-compliant filesystem on top of a Ceph cluster.

.Storage features for backend `cephfs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|vztmpl iso backup snippets |none |yes |yes^[1]^ |no
|==============================================================================
^[1]^ While no known bugs exist, snapshots are not yet guaranteed to be stable,
as they lack sufficient testing.

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]