.. _cephfs-nfs:

=======================
CephFS Exports over NFS
=======================

CephFS namespaces can be exported over the NFS protocol using the
`NFS-Ganesha NFS server`_.

Requirements
============

- Latest Ceph file system with mgr enabled
- ``nfs-ganesha``, ``nfs-ganesha-ceph``, ``nfs-ganesha-rados-grace`` and
  ``nfs-ganesha-rados-urls`` packages (version 3.3 and above)

.. note:: From Pacific, the nfs mgr module must be enabled prior to use.

Ganesha Configuration Hierarchy
===============================

Cephadm and rook start each nfs-ganesha daemon with a `bootstrap configuration`
containing a minimal ganesha configuration, create an empty rados `common
config` object in the `nfs-ganesha` pool, and watch this config object. The
`mgr/nfs` module adds rados export object urls to the common config object. If
cluster config is set, it creates a `user config` object containing the custom
ganesha configuration and adds its url to the common config object.

.. ditaa::


        rados://$pool/$namespace/export-$i          rados://$pool/$namespace/userconf-nfs.$cluster_id
                 (export config)                                    (user config)

   +----------+    +----------+    +----------+      +---------------------------+
   |          |    |          |    |          |      |                           |
   | export-1 |    | export-2 |    | export-3 |      |  userconf-nfs.$cluster_id |
   |          |    |          |    |          |      |                           |
   +----+-----+    +----+-----+    +-----+----+      +-------------+-------------+
        ^               ^                ^                         ^
        |               |                |                         |
        +---------------+----------------+--+----------------------+
                                       %url |
                                            |
                                   +--------+--------+
                                   |                 |  rados://$pool/$namespace/conf-nfs.$svc
                                   |  conf-nfs.$svc  |  (common config)
                                   |                 |
                                   +--------+--------+
                                            ^
                                            |
                                  watch_url |
                   +------------------------+------------------------+
                   |                        |                        |
                   |                        |                        |   RADOS
   +-----------------------------------------------------------------------------+
                   |                        |                        |   CONTAINER
         watch_url |              watch_url |              watch_url |
                   |                        |                        |
           +-------+-------+        +-------+-------+        +-------+-------+
           |               |        |               |        |               |   /etc/ganesha/ganesha.conf
           |  nfs.$svc.a   |        |  nfs.$svc.b   |        |  nfs.$svc.c   |   (bootstrap config)
           |               |        |               |        |               |
           +---------------+        +---------------+        +---------------+

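The object naming shown in the diagram can be sketched programmatically. This
is a minimal illustration only; the pool, namespace, and service names used
below (``nfs-ganesha``, ``mycluster``) are hypothetical placeholders, not
values read from a real cluster:

```python
# Sketch of the RADOS object URL scheme from the diagram above.

def export_url(pool: str, namespace: str, export_id: int) -> str:
    # Per-export config object created by the mgr/nfs module.
    return f"rados://{pool}/{namespace}/export-{export_id}"

def common_conf_url(pool: str, namespace: str, svc: str) -> str:
    # Common config object that every nfs-ganesha daemon watches.
    return f"rados://{pool}/{namespace}/conf-nfs.{svc}"

def userconf_url(pool: str, namespace: str, cluster_id: str) -> str:
    # Optional user config object holding custom ganesha configuration.
    return f"rados://{pool}/{namespace}/userconf-nfs.{cluster_id}"

# The common config object pulls the others in via %url directives:
lines = [f"%url {export_url('nfs-ganesha', 'mycluster', i)}" for i in (1, 2, 3)]
lines.append(f"%url {userconf_url('nfs-ganesha', 'mycluster', 'mycluster')}")
print("\n".join(lines))
```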
Create NFS Ganesha Cluster
==========================

.. code:: bash

    $ ceph nfs cluster create <clusterid> [<placement>] [--ingress --virtual-ip <ip>]

This creates a common recovery pool for all NFS Ganesha daemons, a new user
based on ``clusterid``, and a common NFS Ganesha config RADOS object.

.. note:: Since this command also brings up NFS Ganesha daemons using a ceph-mgr
   orchestrator module (see :doc:`/mgr/orchestrator`) such as "mgr/cephadm", at
   least one such module must be enabled for it to work.

   Currently, the NFS Ganesha daemon deployed by cephadm listens on the
   standard port, so only one daemon will be deployed on a host.

``<clusterid>`` is an arbitrary string by which this NFS Ganesha cluster will be
known.

``<placement>`` is an optional string signifying which hosts should have NFS
Ganesha daemon containers running on them and, optionally, the total number of
NFS Ganesha daemons in the cluster (should you want to have more than one NFS
Ganesha daemon running per node). For example, the following placement string
means "deploy NFS Ganesha daemons on nodes host1 and host2 (one daemon per
host)"::

    "host1,host2"

and this placement specification says to deploy a single NFS Ganesha daemon
each on nodes host1 and host2 (for a total of two NFS Ganesha daemons in the
cluster)::

    "2 host1,host2"

To deploy NFS with a high-availability front-end (virtual IP and load
balancer), add the ``--ingress`` flag and specify a virtual IP address. This
will deploy a combination of keepalived and haproxy to provide a
high-availability NFS front-end for the NFS service.

For more details, refer to :ref:`orchestrator-cli-placement-spec`, but keep in
mind that specifying the placement via a YAML file is not supported.

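The optional arguments compose as follows. This is a small helper sketch (not
part of the ceph CLI; the function name is ours) that assembles the
invocations used above:

```python
from typing import Optional

def nfs_cluster_create_cmd(clusterid: str,
                           placement: Optional[str] = None,
                           virtual_ip: Optional[str] = None) -> str:
    # Assemble a `ceph nfs cluster create` command line; the placement
    # string is quoted as in the "2 host1,host2" example above.
    parts = ["ceph", "nfs", "cluster", "create", clusterid]
    if placement is not None:
        parts.append(f'"{placement}"')
    if virtual_ip is not None:
        parts += ["--ingress", "--virtual-ip", virtual_ip]
    return " ".join(parts)

print(nfs_cluster_create_cmd("mycluster", "2 host1,host2"))
print(nfs_cluster_create_cmd("mycluster", virtual_ip="10.0.0.100"))
```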
Delete NFS Ganesha Cluster
==========================

.. code:: bash

    $ ceph nfs cluster rm <clusterid>

This deletes the deployed cluster.

List NFS Ganesha Clusters
=========================

.. code:: bash

    $ ceph nfs cluster ls

This lists deployed clusters.

Show NFS Ganesha Cluster Information
====================================

.. code:: bash

    $ ceph nfs cluster info [<clusterid>]

This displays the IP address and port of the deployed cluster.

.. note:: This will not work with the rook backend. Instead, expose the port
   with the ``kubectl patch`` command and fetch the port details with the
   ``kubectl get services`` command::

     $ kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-<cluster-name>-<node-id>
     $ kubectl get services -n rook-ceph rook-ceph-nfs-<cluster-name>-<node-id>

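For scripting, the command's JSON output can be parsed for the endpoint
details. The sample payload below is an assumption about the output shape
(the exact schema varies between Ceph releases), not an authoritative schema:

```python
import json

# Hypothetical `ceph nfs cluster info` output; treat this structure as
# an assumption, since the schema differs across releases.
raw = '{"mycluster": [{"hostname": "host1", "ip": ["192.168.0.10"], "port": 2049}]}'

def endpoints(info_json: str, cluster_id: str) -> list:
    # Return an (ip, port) pair for every daemon backing the cluster.
    info = json.loads(info_json)
    return [(d["ip"][0], d["port"]) for d in info[cluster_id]]

print(endpoints(raw, "mycluster"))
```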
Set Customized NFS Ganesha Configuration
========================================

.. code:: bash

    $ ceph nfs cluster config set <clusterid> -i <config_file>

With this the NFS cluster will use the specified config, which will have
precedence over the default config blocks.

Example use cases:

1) Changing the log level

   This can be done by adding a LOG block in the following way::

    LOG {
        COMPONENTS {
            ALL = FULL_DEBUG;
        }
    }

2) Adding a custom export block

   The following sample block creates a single export. This export will not be
   managed by the `ceph nfs export` interface::

    EXPORT {
      Export_Id = 100;
      Transports = TCP;
      Path = /;
      Pseudo = /ceph/;
      Protocols = 4;
      Access_Type = RW;
      Attr_Expiration_Time = 0;
      Squash = None;
      FSAL {
        Name = CEPH;
        Filesystem = "filesystem name";
        User_Id = "user id";
        Secret_Access_Key = "secret key";
      }
    }

.. note:: The user specified in the FSAL block should have proper caps for the
   NFS-Ganesha daemons to access the ceph cluster. The user can be created in
   the following way using `auth get-or-create`::

    # ceph auth get-or-create client.<user_id> mon 'allow r' osd 'allow rw pool=nfs-ganesha namespace=<nfs_cluster_name>, allow rw tag cephfs data=<fs_name>' mds 'allow rw path=<export_path>'

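The capability strings in the `auth get-or-create` example above can be
templated. A hedged sketch: the helper and the entity name
``client.myuser`` are ours, and the cluster, filesystem, and path values are
placeholders:

```python
def export_user_caps(nfs_cluster_name: str, fs_name: str, export_path: str) -> dict:
    # Mirrors the caps from the `auth get-or-create` example above;
    # all three parameters are illustrative placeholders.
    return {
        "mon": "allow r",
        "osd": (f"allow rw pool=nfs-ganesha namespace={nfs_cluster_name}, "
                f"allow rw tag cephfs data={fs_name}"),
        "mds": f"allow rw path={export_path}",
    }

caps = export_user_caps("mynfs", "a", "/")
# Flatten into the CLI form used above:
cli = " ".join(f"{svc} '{cap}'" for svc, cap in caps.items())
print(f"ceph auth get-or-create client.myuser {cli}")
```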
Reset NFS Ganesha Configuration
===============================

.. code:: bash

    $ ceph nfs cluster config reset <clusterid>

This removes the user-defined configuration.

.. note:: With a rook deployment, ganesha pods must be explicitly restarted
   for the new config blocks to be effective.

Create CephFS Export
====================

.. warning:: Currently, the nfs interface is not integrated with the dashboard.
   The dashboard and the nfs interface have different export requirements and
   create exports differently. Management of dashboard-created exports is not
   supported.

.. code:: bash

    $ ceph nfs export create cephfs <fsname> <clusterid> <binding> [--readonly] [--path=/path/in/cephfs]

This creates export RADOS objects containing the export block, where

``<fsname>`` is the name of the FS volume used by the NFS Ganesha cluster
that will serve this export.

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path and unique).
It specifies the export position within the NFS v4 Pseudo Filesystem.

``<path>`` is the path within cephfs. A valid path should be given; the
default path is '/'. It need not be unique. A subvolume path can be fetched
using:

.. code::

    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]

.. note:: Export creation is supported only for NFS Ganesha clusters deployed
   using the nfs interface.

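The argument rules above (absolute pseudo root, cephfs path defaulting to
'/') can be sketched as a small validator. This helper is hypothetical, not
part of the ceph CLI:

```python
def export_create_cmd(fsname: str, clusterid: str, binding: str,
                      readonly: bool = False, path: str = "/") -> str:
    # The pseudo root (binding) must be an absolute path; the cephfs
    # path defaults to '/' and need not be unique.
    if not binding.startswith("/"):
        raise ValueError("binding must be an absolute pseudo root path")
    parts = ["ceph", "nfs", "export", "create", "cephfs",
             fsname, clusterid, binding]
    if readonly:
        parts.append("--readonly")
    if path != "/":
        parts.append(f"--path={path}")
    return " ".join(parts)

print(export_create_cmd("a", "mycluster", "/cephfs"))
```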
Delete CephFS Export
====================

.. code:: bash

    $ ceph nfs export rm <clusterid> <binding>

This deletes an export in an NFS Ganesha cluster, where:

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path).

List CephFS Exports
===================

.. code:: bash

    $ ceph nfs export ls <clusterid> [--detailed]

This lists exports for a cluster, where:

``<clusterid>`` is the NFS Ganesha cluster ID.

With the ``--detailed`` option enabled it shows the entire export block.

Get CephFS Export
=================

.. code:: bash

    $ ceph nfs export get <clusterid> <binding>

This displays the export block for a cluster based on the pseudo root name
(binding), where:

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path).


Update CephFS Export
====================

.. code:: bash

    $ ceph nfs export update -i <json_file>

This updates the cephfs export specified in the json file. The export in json
format can be fetched with the above get command. For example::

    $ ceph nfs export get vstart /cephfs > update_cephfs_export.json
    $ cat update_cephfs_export.json
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "vstart",
      "pseudo": "/cephfs",
      "access_type": "RW",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "vstart1",
        "fs_name": "a",
        "sec_label_xattr": ""
      },
      "clients": []
    }
    # Here, in the fetched export, pseudo and access_type are modified. Then
    # the modified file is passed to the update interface:
    $ ceph nfs export update -i update_cephfs_export.json
    $ cat update_cephfs_export.json
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "vstart",
      "pseudo": "/cephfs_testing",
      "access_type": "RO",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "vstart1",
        "fs_name": "a",
        "sec_label_xattr": ""
      },
      "clients": []
    }

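The update workflow amounts to editing two fields of the fetched JSON. A
minimal sketch using only an abbreviated subset of the fields (a real export
contains the full block fetched by ``export get``):

```python
import json

# Start from the export fetched with `ceph nfs export get vstart /cephfs`
# (abbreviated here) and modify pseudo and access_type, as in the example.
export = {
    "export_id": 1,
    "path": "/",
    "cluster_id": "vstart",
    "pseudo": "/cephfs",
    "access_type": "RW",
}

export["pseudo"] = "/cephfs_testing"  # new position in the pseudo FS
export["access_type"] = "RO"          # downgrade the export to read-only

# This JSON is what gets written to the file passed to
# `ceph nfs export update -i <json_file>`.
print(json.dumps(export, indent=4))
```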
Configuring NFS Ganesha to export CephFS with vstart
====================================================

1) Using ``cephadm``

   .. code:: bash

      $ MDS=1 MON=1 OSD=3 NFS=1 ../src/vstart.sh -n -d --cephadm

   This will deploy a single NFS Ganesha daemon using ``vstart.sh``, where
   the daemon will listen on the default NFS Ganesha port.

2) Using the test orchestrator

   .. code:: bash

      $ MDS=1 MON=1 OSD=3 NFS=1 ../src/vstart.sh -n -d

   The environment variable ``NFS`` is the number of NFS Ganesha daemons to
   be deployed, each listening on a random port.

   .. note:: NFS Ganesha packages must be pre-installed for this to work.

Mount
=====

After the exports are successfully created and the NFS Ganesha daemons are no
longer in the grace period, the exports can be mounted with

.. code:: bash

    $ mount -t nfs -o port=<ganesha-port> <ganesha-host-name>:<ganesha-pseudo-path> <mount-point>

.. note:: Only NFS v4.0+ is supported.
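As a convenience, the mount invocation can be composed from the cluster
details. The default of ``2049`` is the standard NFS port used by cephadm
deployments; vstart's test orchestrator may use a random port, so this
default is an assumption:

```python
def mount_cmd(host: str, pseudo_path: str, mount_point: str,
              port: int = 2049) -> str:
    # <ganesha-pseudo-path> is the export's binding, e.g. "/cephfs".
    return f"mount -t nfs -o port={port} {host}:{pseudo_path} {mount_point}"

print(mount_cmd("ganesha-host", "/cephfs", "/mnt"))
```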

Troubleshooting
===============

Checking NFS-Ganesha logs with:

1) ``cephadm``

   .. code:: bash

      $ cephadm logs --fsid <fsid> --name nfs.<cluster_id>.hostname

2) ``rook``

   .. code:: bash

      $ kubectl logs -n rook-ceph rook-ceph-nfs-<cluster_id>-<node_id> nfs-ganesha

The log level can be changed using the `nfs cluster config set` command.

.. _NFS-Ganesha NFS Server: https://github.com/nfs-ganesha/nfs-ganesha/wiki