.. _cephfs-nfs:

=======================
CephFS Exports over NFS
=======================

CephFS namespaces can be exported over the NFS protocol using the `NFS-Ganesha NFS server`_.

Requirements
============

- Latest Ceph file system with the ``mgr`` module enabled
- ``nfs-ganesha``, ``nfs-ganesha-ceph``, ``nfs-ganesha-rados-grace`` and
  ``nfs-ganesha-rados-urls`` packages (version 3.3 and above)

.. note:: Since Pacific, the ``nfs`` mgr module must be enabled prior to use.

Create NFS Ganesha Cluster
==========================

.. code:: bash

    $ ceph nfs cluster create <clusterid> [<placement>] [--ingress --virtual-ip <ip>]

This creates a common recovery pool for all NFS Ganesha daemons, a new user
based on the ``clusterid``, and a common NFS Ganesha config RADOS object.

.. note:: Since this command also brings up NFS Ganesha daemons using a ceph-mgr
   orchestrator module (see :doc:`/mgr/orchestrator`) such as ``mgr/cephadm``, at
   least one such module must be enabled for it to work.

Currently, an NFS Ganesha daemon deployed by cephadm listens on the standard
port, so only one daemon will be deployed per host.

``<clusterid>`` is an arbitrary string by which this NFS Ganesha cluster will be
known.

``<placement>`` is an optional string signifying which hosts should have NFS Ganesha
daemon containers running on them and, optionally, the total number of NFS
Ganesha daemons in the cluster (should you want more than one NFS Ganesha
daemon running per node). For example, the following placement string means
"deploy NFS Ganesha daemons on nodes host1 and host2" (one daemon per host)::

    "host1,host2"

and this placement specification says to deploy a single NFS Ganesha daemon each
on nodes host1 and host2 (for a total of two NFS Ganesha daemons in the
cluster)::

    "2 host1,host2"

To deploy NFS with a high-availability front-end (virtual IP and load balancer),
add the ``--ingress`` flag and specify a virtual IP address. This will deploy a
combination of keepalived and haproxy to provide a high-availability NFS
front-end for the NFS service.

For more details, refer to :ref:`orchestrator-cli-placement-spec`, but keep
in mind that specifying the placement via a YAML file is not supported.
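
Putting this together, a cluster with an ingress front-end might be created as
follows. The cluster name ``mynfs``, the host names, and the virtual IP are
illustrative values, not defaults:

.. code:: bash

    # One NFS Ganesha daemon each on host1 and host2 (hypothetical hosts),
    # fronted by keepalived/haproxy on a hypothetical virtual IP.
    $ ceph nfs cluster create mynfs "2 host1,host2" --ingress --virtual-ip 10.0.0.10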

Delete NFS Ganesha Cluster
==========================

.. code:: bash

    $ ceph nfs cluster rm <clusterid>

This deletes the deployed cluster.

List NFS Ganesha Clusters
=========================

.. code:: bash

    $ ceph nfs cluster ls

This lists the deployed clusters.

Show NFS Ganesha Cluster Information
====================================

.. code:: bash

    $ ceph nfs cluster info [<clusterid>]

This displays the IP address and port of the deployed cluster.

.. note:: This will not work with the rook backend. Instead, expose the port
   with the ``kubectl patch`` command and fetch the port details with the
   ``kubectl get services`` command::

       $ kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-<cluster-name>-<node-id>
       $ kubectl get services -n rook-ceph rook-ceph-nfs-<cluster-name>-<node-id>

Set Customized NFS Ganesha Configuration
========================================

.. code:: bash

    $ ceph nfs cluster config set <clusterid> -i <config_file>

With this the NFS cluster will use the specified config, which will take
precedence over the default config blocks.

Example use cases:

1) Changing the log level

This can be done by adding a LOG block in the following way::

    LOG {
        COMPONENTS {
            ALL = FULL_DEBUG;
        }
    }

2) Adding a custom export block

The following sample block creates a single export. This export will not be
managed by the ``ceph nfs export`` interface::

    EXPORT {
      Export_Id = 100;
      Transports = TCP;
      Path = /;
      Pseudo = /ceph/;
      Protocols = 4;
      Access_Type = RW;
      Attr_Expiration_Time = 0;
      Squash = None;
      FSAL {
        Name = CEPH;
        Filesystem = "filesystem name";
        User_Id = "user id";
        Secret_Access_Key = "secret key";
      }
    }

.. note:: The user specified in the FSAL block should have proper caps for the
   NFS-Ganesha daemons to access the Ceph cluster. Such a user can be created
   using ``auth get-or-create``::

       # ceph auth get-or-create client.<user_id> mon 'allow r' osd 'allow rw pool=nfs-ganesha namespace=<nfs_cluster_name>, allow rw tag cephfs data=<fs_name>' mds 'allow rw path=<export_path>'

Reset NFS Ganesha Configuration
===============================

.. code:: bash

    $ ceph nfs cluster config reset <clusterid>

This removes the user-defined configuration.

.. note:: With a rook deployment, the ganesha pods must be explicitly restarted
   for the new config blocks to take effect.

Create CephFS Export
====================

.. warning:: Currently, the nfs interface is not integrated with the dashboard.
   The dashboard and the nfs interface have different export requirements and
   create exports differently. Management of dashboard-created exports is not
   supported.

.. code:: bash

    $ ceph nfs export create cephfs <fsname> <clusterid> <binding> [--readonly] [--path=/path/in/cephfs]

This creates export RADOS objects containing the export block, where

``<fsname>`` is the name of the FS volume used by the NFS Ganesha cluster
that will serve this export.

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path and unique).
It specifies the export position within the NFS v4 Pseudo Filesystem.

``<path>`` is the path within CephFS. A valid path should be given; the
default path is ``/``. It need not be unique. A subvolume path can be fetched
using:

.. code::

    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]
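
For example, the following creates a read-only export of the root of a
hypothetical file system ``myfs`` on a hypothetical cluster ``mynfs``, placed
at the pseudo path ``/cephfs`` (all names are illustrative):

.. code:: bash

    $ ceph nfs export create cephfs myfs mynfs /cephfs --readonly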

.. note:: Export creation is supported only for NFS Ganesha clusters deployed
   using the nfs interface.

Delete CephFS Export
====================

.. code:: bash

    $ ceph nfs export rm <clusterid> <binding>

This deletes an export in an NFS Ganesha cluster, where:

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path).

List CephFS Exports
===================

.. code:: bash

    $ ceph nfs export ls <clusterid> [--detailed]

This lists exports for a cluster, where:

``<clusterid>`` is the NFS Ganesha cluster ID.

With the ``--detailed`` option enabled it shows the entire export block.

Get CephFS Export
=================

.. code:: bash

    $ ceph nfs export get <clusterid> <binding>

This displays the export block for a cluster based on the pseudo root name
(binding), where:

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path).

Update CephFS Export
====================

.. code:: bash

    $ ceph nfs export update -i <json_file>

This updates the CephFS export specified in the JSON file. An export in JSON
format can be fetched with the above ``get`` command. For example::

    $ ceph nfs export get vstart /cephfs > update_cephfs_export.json
    $ cat update_cephfs_export.json
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "vstart",
      "pseudo": "/cephfs",
      "access_type": "RW",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "vstart1",
        "fs_name": "a",
        "sec_label_xattr": ""
      },
      "clients": []
    }
    # In the fetched export, "pseudo" and "access_type" are modified. The
    # modified file is then passed to the update interface:
    $ ceph nfs export update -i update_cephfs_export.json
    $ cat update_cephfs_export.json
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "vstart",
      "pseudo": "/cephfs_testing",
      "access_type": "RO",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "vstart1",
        "fs_name": "a",
        "sec_label_xattr": ""
      },
      "clients": []
    }

Configuring NFS Ganesha to export CephFS with vstart
====================================================

1) Using ``cephadm``

.. code:: bash

    $ MDS=1 MON=1 OSD=3 NFS=1 ../src/vstart.sh -n -d --cephadm

This will deploy a single NFS Ganesha daemon using ``vstart.sh``, where the
daemon will listen on the default NFS Ganesha port.

2) Using the test orchestrator

.. code:: bash

    $ MDS=1 MON=1 OSD=3 NFS=1 ../src/vstart.sh -n -d

The environment variable ``NFS`` is the number of NFS Ganesha daemons to be
deployed, each listening on a random port.

.. note:: NFS Ganesha packages must be pre-installed for this to work.

Mount
=====

After the exports are successfully created and the NFS Ganesha daemons are no
longer in the grace period, the exports can be mounted with:

.. code:: bash

    $ mount -t nfs -o port=<ganesha-port> <ganesha-host-name>:<ganesha-pseudo-path> <mount-point>

.. note:: Only NFS v4.0+ is supported.
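
For example, assuming a ganesha host (or ingress virtual IP) reachable as
``nfs.example.com`` serving the pseudo path ``/cephfs`` on the standard NFS
port 2049 (host name and pseudo path are hypothetical):

.. code:: bash

    $ mkdir -p /mnt/cephfs
    $ mount -t nfs -o port=2049 nfs.example.com:/cephfs /mnt/cephfs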

Troubleshooting
===============

Checking the NFS-Ganesha logs with:

1) ``cephadm``

.. code:: bash

    $ cephadm logs --fsid <fsid> --name nfs.<cluster_id>.hostname

2) ``rook``

.. code:: bash

    $ kubectl logs -n rook-ceph rook-ceph-nfs-<cluster_id>-<node_id> nfs-ganesha

The log level can be changed using the ``nfs cluster config set`` command.
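
For example, to raise the log level to full debug, a config file (named
``debug.conf`` here purely for illustration) containing a LOG block can be
applied with ``ceph nfs cluster config set``:

.. code:: bash

    $ cat debug.conf
    LOG {
        COMPONENTS {
            ALL = FULL_DEBUG;
        }
    }
    $ ceph nfs cluster config set <clusterid> -i debug.conf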

.. _NFS-Ganesha NFS server: https://github.com/nfs-ganesha/nfs-ganesha/wiki