.. _mgr-nfs:

=============================
CephFS & RGW Exports over NFS
=============================

CephFS namespaces and RGW buckets can be exported over the NFS protocol
using the `NFS-Ganesha NFS server`_.

The ``nfs`` manager module provides a general interface for managing
NFS exports of either CephFS directories or RGW buckets. Exports can
be managed either via the CLI ``ceph nfs export ...`` commands
or via the dashboard.

The deployment of the nfs-ganesha daemons can also be managed
automatically if either the :ref:`cephadm` or :ref:`mgr-rook`
orchestrators are enabled. If neither is in use (e.g., Ceph is
deployed via an external orchestrator like Ansible or Puppet), the
nfs-ganesha daemons must be manually deployed; for more information,
see :ref:`nfs-ganesha-config`.

.. note:: Starting with Ceph Pacific, the ``nfs`` mgr module must be enabled.

NFS Cluster management
======================

Create NFS Ganesha Cluster
--------------------------

.. code:: bash

   $ ceph nfs cluster create <cluster_id> [<placement>] [--port <port>] [--ingress --virtual-ip <ip>]

This creates a common recovery pool for all NFS Ganesha daemons, a new user
based on ``cluster_id``, and a common NFS Ganesha config RADOS object.

.. note:: Since this command also brings up NFS Ganesha daemons using a ceph-mgr
   orchestrator module (see :doc:`/mgr/orchestrator`) such as cephadm or rook, at
   least one such module must be enabled for it to work.

Currently, an NFS Ganesha daemon deployed by cephadm listens on the standard
port, so only one daemon can be deployed per host.

``<cluster_id>`` is an arbitrary string by which this NFS Ganesha cluster will be
known (e.g., ``mynfs``).

``<placement>`` is an optional string signifying which hosts should have NFS Ganesha
daemon containers running on them and, optionally, the total number of NFS
Ganesha daemons in the cluster (should you want to have more than one NFS Ganesha
daemon running per node). For example, the following placement string means
"deploy NFS Ganesha daemons on nodes host1 and host2 (one daemon per host)"::

   "host1,host2"

and this placement specification says to deploy a single NFS Ganesha daemon each
on nodes host1 and host2 (for a total of two NFS Ganesha daemons in the
cluster)::

   "2 host1,host2"

NFS can be deployed on a port other than 2049 (the default) with ``--port <port>``.
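
For example, the following command (with illustrative host names and port
number) would create a cluster named ``mynfs`` with one daemon each on
``host1`` and ``host2``, listening on a non-default port:

.. code:: bash

   $ ceph nfs cluster create mynfs "2 host1,host2" --port 12345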

To deploy NFS with a high-availability front-end (virtual IP and load balancer), add the
``--ingress`` flag and specify a virtual IP address. This will deploy a combination
of keepalived and haproxy to provide a high-availability NFS front-end for the NFS
service.

.. note:: The ingress implementation is not yet complete. Enabling
   ingress will deploy multiple ganesha instances and balance
   load across them, but a host failure will not immediately
   cause cephadm to deploy a replacement daemon before the NFS
   grace period expires. This high-availability functionality
   is expected to be completed by the Quincy release (March
   2022).

For more details, refer to :ref:`orchestrator-cli-placement-spec` but keep
in mind that specifying the placement via a YAML file is not supported.

Ingress
-------

The core *nfs* service will deploy one or more nfs-ganesha daemons,
each of which will provide a working NFS endpoint. The IP for each
NFS endpoint will depend on which host the nfs-ganesha daemons are
deployed on. By default, daemons are placed semi-randomly, but users can
also explicitly control where daemons are placed; see
:ref:`orchestrator-cli-placement-spec`.

When a cluster is created with ``--ingress``, an *ingress* service is
additionally deployed to provide load balancing and high-availability
for the NFS servers. A virtual IP is used to provide a known, stable
NFS endpoint that all clients can use to mount. Ceph will take care
of the details of redirecting traffic on the virtual IP to the
appropriate backend NFS servers and of redeploying NFS servers when they
fail.

Enabling ingress via the ``ceph nfs cluster create`` command deploys a
simple ingress configuration with the most common configuration
options. Ingress can also be added to an existing NFS service (e.g.,
one created without the ``--ingress`` flag), and the basic NFS service can
also be modified after the fact to include non-default options, by modifying
the services directly. For more information, see :ref:`cephadm-ha-nfs`.
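
As a rough sketch (values are illustrative; see :ref:`cephadm-ha-nfs` for the
authoritative spec format), an *ingress* service added to an existing
``nfs.mynfs`` service with ``ceph orch apply -i <spec.yaml>`` might look like:

.. code:: yaml

   service_type: ingress
   service_id: nfs.mynfs
   placement:
     count: 2
   spec:
     backend_service: nfs.mynfs
     frontend_port: 2049
     monitor_port: 9049
     virtual_ip: 192.168.10.10/24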

Show NFS Cluster IP(s)
----------------------

To examine an NFS cluster's IP endpoints, including the IPs for the individual NFS
daemons and the virtual IP (if any) for the ingress service:

.. code:: bash

   $ ceph nfs cluster info [<cluster_id>]

.. note:: This will not work with the rook backend. Instead, expose the port with
   the ``kubectl patch`` command and fetch the port details with the ``kubectl get
   services`` command::

      $ kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-<cluster-name>-<node-id>
      $ kubectl get services -n rook-ceph rook-ceph-nfs-<cluster-name>-<node-id>


Delete NFS Ganesha Cluster
--------------------------

.. code:: bash

   $ ceph nfs cluster rm <cluster_id>

This deletes the deployed cluster.

Updating an NFS Cluster
-----------------------

In order to modify cluster parameters (like the port or placement), you need to
use the orchestrator interface to update the NFS service spec. The safest way to do
that is to export the current spec, modify it, and then re-apply it. For example,
to modify the ``nfs.foo`` service:

.. code:: bash

   $ ceph orch ls --service-name nfs.foo --export > nfs.foo.yaml
   $ vi nfs.foo.yaml
   $ ceph orch apply -i nfs.foo.yaml
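
After editing, ``nfs.foo.yaml`` might look roughly like the following sketch
(the hosts and port shown here are illustrative):

.. code:: yaml

   service_type: nfs
   service_id: foo
   placement:
     hosts:
       - host1
       - host2
   spec:
     port: 12345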

For more information about the NFS service spec, see :ref:`deploy-cephadm-nfs-ganesha`.

List NFS Ganesha Clusters
-------------------------

.. code:: bash

   $ ceph nfs cluster ls

This lists deployed clusters.

.. _nfs-cluster-set:

Set Customized NFS Ganesha Configuration
----------------------------------------

.. code:: bash

   $ ceph nfs cluster config set <cluster_id> -i <config_file>

With this command, the NFS cluster will use the specified config, which will
take precedence over the default config blocks.

Example use cases include:

#. Changing the log level. The logging level can be adjusted with the following
   config fragment::

      LOG {
          COMPONENTS {
              ALL = FULL_DEBUG;
          }
      }

#. Adding a custom export block.

   The following sample block creates a single export. This export will not be
   managed by the ``ceph nfs export`` interface::

      EXPORT {
          Export_Id = 100;
          Transports = TCP;
          Path = /;
          Pseudo = /ceph/;
          Protocols = 4;
          Access_Type = RW;
          Attr_Expiration_Time = 0;
          Squash = None;
          FSAL {
              Name = CEPH;
              Filesystem = "filesystem name";
              User_Id = "user id";
              Secret_Access_Key = "secret key";
          }
      }

.. note:: The user specified in the FSAL block should have proper caps for the
   NFS-Ganesha daemons to access the Ceph cluster. Such a user can be created
   with ``auth get-or-create``::

      # ceph auth get-or-create client.<user_id> mon 'allow r' osd 'allow rw pool=.nfs namespace=<nfs_cluster_name>, allow rw tag cephfs data=<fs_name>' mds 'allow rw path=<export_path>'

View Customized NFS Ganesha Configuration
-----------------------------------------

.. code:: bash

   $ ceph nfs cluster config get <cluster_id>

This will output the user-defined configuration (if any).

Reset NFS Ganesha Configuration
-------------------------------

.. code:: bash

   $ ceph nfs cluster config reset <cluster_id>

This removes the user-defined configuration.

.. note:: With a rook deployment, ganesha pods must be explicitly restarted
   for the new config blocks to be effective.
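
For example, a restart might be triggered with something like the following
(the deployment name is illustrative and depends on the Rook cluster and NFS
cluster names; consult the Rook documentation for the exact resource names):

.. code:: bash

   $ kubectl rollout restart deployment -n rook-ceph rook-ceph-nfs-<cluster-name>-<node-id>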


Export Management
=================

.. warning:: Currently, the nfs interface is not integrated with the dashboard. The
   dashboard and the nfs interface have different export requirements and
   create exports differently. Management of dashboard-created exports is not
   supported.

Create CephFS Export
--------------------

.. code:: bash

   $ ceph nfs export create cephfs --cluster-id <cluster_id> --pseudo-path <pseudo_path> --fsname <fsname> [--readonly] [--path=/path/in/cephfs] [--client_addr <value>...] [--squash <value>]

This creates export RADOS objects containing the export block, where

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the export position within the NFS v4 Pseudo Filesystem where the export will be available on the server. It must be an absolute path and unique.

``<fsname>`` is the name of the FS volume used by the NFS Ganesha cluster
that will serve this export.

``<path>`` is the path within CephFS. A valid path should be given; the default
path is ``/``. It need not be unique. A subvolume path can be fetched using:

.. code::

   $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]

``<client_addr>`` is the list of client addresses for which these export
permissions will be applicable. By default all clients can access the export
according to the specified export permissions. See the `NFS-Ganesha Export Sample`_
for permissible values.

``<squash>`` defines the kind of user id squashing to be performed. The default
value is ``no_root_squash``. See the `NFS-Ganesha Export Sample`_ for
permissible values.
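
For example, to export the directory ``/data`` of a CephFS file system named
``myfs`` via the NFS cluster ``mynfs`` at the pseudo-path ``/cephfs`` (all
names here are illustrative):

.. code:: bash

   $ ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /cephfs --fsname myfs --path=/data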

.. note:: Export creation is supported only for NFS Ganesha clusters deployed using the nfs interface.

Create RGW Export
-----------------

There are two kinds of RGW exports:

- a *user* export will export all buckets owned by an
  RGW user, where the top-level directory of the export is a list of buckets.
- a *bucket* export will export a single bucket, where the top-level directory contains
  the objects in the bucket.

RGW bucket export
^^^^^^^^^^^^^^^^^

To export a *bucket*:

.. code::

   $ ceph nfs export create rgw --cluster-id <cluster_id> --pseudo-path <pseudo_path> --bucket <bucket_name> [--user-id <user-id>] [--readonly] [--client_addr <value>...] [--squash <value>]

For example, to export *mybucket* via NFS cluster *mynfs* at the pseudo-path */bucketdata* to any host in the ``192.168.10.0/24`` network:

.. code::

   $ ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --bucket mybucket --client_addr 192.168.10.0/24

.. note:: Export creation is supported only for NFS Ganesha clusters deployed using the nfs interface.

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the export position within the NFS v4 Pseudo Filesystem where the export will be available on the server. It must be an absolute path and unique.

``<bucket_name>`` is the name of the bucket that will be exported.

``<user_id>`` is optional, and specifies which RGW user will be used for read and write
operations to the bucket. If it is not specified, the user who owns the bucket will be
used.

.. note:: Currently, if multi-site RGW is enabled, Ceph can only export RGW buckets in the default realm.

``<client_addr>`` is the list of client addresses for which these export
permissions will be applicable. By default all clients can access the export
according to the specified export permissions. See the `NFS-Ganesha Export Sample`_
for permissible values.

``<squash>`` defines the kind of user id squashing to be performed. The default
value is ``no_root_squash``. See the `NFS-Ganesha Export Sample`_ for
permissible values.

RGW user export
^^^^^^^^^^^^^^^

To export an RGW *user*:

.. code::

   $ ceph nfs export create rgw --cluster-id <cluster_id> --pseudo-path <pseudo_path> --user-id <user-id> [--readonly] [--client_addr <value>...] [--squash <value>]

For example, to export *myuser* via NFS cluster *mynfs* at the pseudo-path */myuser* to any host in the ``192.168.10.0/24`` network:

.. code::

   $ ceph nfs export create rgw --cluster-id mynfs --pseudo-path /myuser --user-id myuser --client_addr 192.168.10.0/24


Delete Export
-------------

.. code:: bash

   $ ceph nfs export rm <cluster_id> <pseudo_path>

This deletes an export in an NFS Ganesha cluster, where:

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the pseudo root path (must be an absolute path).
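
For example, to delete the CephFS export created earlier at the pseudo-path
``/cephfs`` (names are illustrative):

.. code:: bash

   $ ceph nfs export rm mynfs /cephfs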

List Exports
------------

.. code:: bash

   $ ceph nfs export ls <cluster_id> [--detailed]

This lists exports for a cluster, where:

``<cluster_id>`` is the NFS Ganesha cluster ID.

With the ``--detailed`` option enabled it shows the entire export block.

Get Export
----------

.. code:: bash

   $ ceph nfs export info <cluster_id> <pseudo_path>

This displays the export block for a cluster based on the pseudo root name,
where:

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the pseudo root path (must be an absolute path).


Create or update export via JSON specification
----------------------------------------------

An existing export can be dumped in JSON format with:

.. prompt:: bash #

   ceph nfs export info *<cluster_id>* *<pseudo_path>*

An export can be created or modified by importing a JSON description in the
same format:

.. prompt:: bash #

   ceph nfs export apply *<cluster_id>* -i <json_file>

For example::

   $ ceph nfs export info mynfs /cephfs > update_cephfs_export.json
   $ cat update_cephfs_export.json
   {
     "export_id": 1,
     "path": "/",
     "cluster_id": "mynfs",
     "pseudo": "/cephfs",
     "access_type": "RW",
     "squash": "no_root_squash",
     "security_label": true,
     "protocols": [
       4
     ],
     "transports": [
       "TCP"
     ],
     "fsal": {
       "name": "CEPH",
       "user_id": "nfs.mynfs.1",
       "fs_name": "a",
       "sec_label_xattr": ""
     },
     "clients": []
   }

The imported JSON can be a single dict describing a single export, or a JSON list
containing multiple export dicts.
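
For example, a list describing two exports (illustrative values; each dict has
the same shape as the single-export example above) might look like this:

.. code:: json

   [
     {
       "export_id": 1,
       "path": "/",
       "cluster_id": "mynfs",
       "pseudo": "/cephfs",
       "access_type": "RW",
       "squash": "no_root_squash",
       "security_label": true,
       "protocols": [
         4
       ],
       "transports": [
         "TCP"
       ],
       "fsal": {
         "name": "CEPH",
         "user_id": "nfs.mynfs.1",
         "fs_name": "a",
         "sec_label_xattr": ""
       },
       "clients": []
     },
     {
       "export_id": 2,
       "path": "/data",
       "cluster_id": "mynfs",
       "pseudo": "/cephfs_data",
       "access_type": "RO",
       "squash": "no_root_squash",
       "security_label": true,
       "protocols": [
         4
       ],
       "transports": [
         "TCP"
       ],
       "fsal": {
         "name": "CEPH",
         "user_id": "nfs.mynfs.2",
         "fs_name": "a",
         "sec_label_xattr": ""
       },
       "clients": []
     }
   ]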

The exported JSON can be modified and then reapplied. Below, *pseudo*
and *access_type* are modified. When modifying an export, the
provided JSON should fully describe the new state of the export (just
as when creating a new export), with the exception of the
authentication credentials, which will be carried over from the
previous state of the export where possible.

::

   $ ceph nfs export apply mynfs -i update_cephfs_export.json
   $ cat update_cephfs_export.json
   {
     "export_id": 1,
     "path": "/",
     "cluster_id": "mynfs",
     "pseudo": "/cephfs_testing",
     "access_type": "RO",
     "squash": "no_root_squash",
     "security_label": true,
     "protocols": [
       4
     ],
     "transports": [
       "TCP"
     ],
     "fsal": {
       "name": "CEPH",
       "user_id": "nfs.mynfs.1",
       "fs_name": "a",
       "sec_label_xattr": ""
     },
     "clients": []
   }

An export can also be created or updated by injecting a Ganesha NFS EXPORT config
fragment. For example::

   $ ceph nfs export apply mynfs -i update_cephfs_export.conf
   $ cat update_cephfs_export.conf
   EXPORT {
       FSAL {
           name = "CEPH";
           filesystem = "a";
       }
       export_id = 1;
       path = "/";
       pseudo = "/a";
       access_type = "RW";
       squash = "none";
       attr_expiration_time = 0;
       security_label = true;
       protocols = 4;
       transports = "TCP";
   }


Mounting
========

After the exports are successfully created and NFS Ganesha daemons are
deployed, exports can be mounted with:

.. code:: bash

   $ mount -t nfs <ganesha-host-name>:<pseudo_path> <mount-point>

For example, if the NFS cluster was created with ``--ingress --virtual-ip 192.168.10.10``
and the export's pseudo-path was ``/foo``, the export can be mounted at ``/mnt`` with:

.. code:: bash

   $ mount -t nfs 192.168.10.10:/foo /mnt

If the NFS service is running on a non-standard port number:

.. code:: bash

   $ mount -t nfs -o port=<ganesha-port> <ganesha-host-name>:<ganesha-pseudo_path> <mount-point>

.. note:: Only NFS v4.0+ is supported.

Troubleshooting
===============

NFS-Ganesha logs can be checked as follows:

1) ``cephadm``: The NFS daemons can be listed with:

   .. code:: bash

      $ ceph orch ps --daemon-type nfs

   You can view the logs for a specific daemon (e.g., ``nfs.mynfs.0.0.myhost.xkfzal``) on
   the relevant host with:

   .. code:: bash

      # cephadm logs --fsid <fsid> --name nfs.mynfs.0.0.myhost.xkfzal

2) ``rook``:

   .. code:: bash

      $ kubectl logs -n rook-ceph rook-ceph-nfs-<cluster_id>-<node_id> nfs-ganesha

The NFS log level can be adjusted using the ``nfs cluster config set`` command (see :ref:`nfs-cluster-set`).
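
For example, assuming the log fragment shown earlier has been saved to a local
file named ``log.conf`` (the file name is illustrative), debug logging could be
enabled on cluster ``mynfs`` with:

.. code:: bash

   $ ceph nfs cluster config set mynfs -i log.conf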


.. _nfs-ganesha-config:


Manual Ganesha deployment
=========================

It may be possible to deploy and manage the nfs-ganesha daemons without
orchestration frameworks such as cephadm or rook.

.. note:: Manual configuration is not tested or fully documented; your
   mileage may vary. If you make this work, please help us by
   updating this documentation.

Limitations
------------

If no orchestrator module is enabled for the Ceph Manager, the NFS cluster
management commands, such as those starting with ``ceph nfs cluster``, will not
function. However, commands that manage NFS exports, like those prefixed with
``ceph nfs export``, are expected to work as long as the necessary RADOS objects
have already been created. The exact RADOS objects required are not documented
at this time as support for this feature is incomplete. A curious reader can
find some details about these objects by reading the source code for the
``mgr/nfs`` module (found in the ceph source tree under
``src/pybind/mgr/nfs``).
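
For illustration (assuming the ``.nfs`` pool and per-cluster namespace used by
the module, as described in the configuration hierarchy below, and a cluster
named ``mynfs``), the relevant RADOS objects can be listed with:

.. code:: bash

   $ rados -p .nfs --namespace mynfs ls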


Requirements
------------

The following packages are required to enable CephFS and RGW exports with nfs-ganesha:

- ``nfs-ganesha``, ``nfs-ganesha-ceph``, ``nfs-ganesha-rados-grace`` and
  ``nfs-ganesha-rados-urls`` packages (version 3.3 and above)

Ganesha Configuration Hierarchy
-------------------------------

Cephadm and rook start each nfs-ganesha daemon with a minimal
`bootstrap` configuration file that pulls from a shared `common`
configuration stored in the ``.nfs`` RADOS pool and watches the common
config for changes. Each export is written to a separate RADOS object
that is referenced by URL from the common config.
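
For illustration, the common config object typically contains little more than
``%url`` references to the per-export objects, roughly like the following
(pool, namespace, and object names are illustrative)::

   %url "rados://.nfs/mynfs/export-1"
   %url "rados://.nfs/mynfs/export-2"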

.. ditaa::

        rados://$pool/$namespace/export-$i    rados://$pool/$namespace/userconf-nfs.$cluster_id
                 (export config)                                (user config)

   +----------+    +----------+    +----------+      +---------------------------+
   |          |    |          |    |          |      |                           |
   | export-1 |    | export-2 |    | export-3 |      | userconf-nfs.$cluster_id  |
   |          |    |          |    |          |      |                           |
   +----+-----+    +----+-----+    +-----+----+      +-------------+-------------+
        ^               ^                ^                         ^
        |               |                |                         |
        +--------------------------------+-------------------------+
                                    %url |
                                         |
                                +--------+--------+
                                |                 |   rados://$pool/$namespace/conf-nfs.$svc
                                |  conf+nfs.$svc  |   (common config)
                                |                 |
                                +--------+--------+
                                         ^
                                         |
                              watch_url  |
                  +----------------------------------------------+
                  |                      |                       |
                  |                      |                       |           RADOS
    +----------------------------------------------------------------------------------+
                  |                      |                       |         CONTAINER
        watch_url |            watch_url |             watch_url |
                  |                      |                       |
         +--------+-------+     +--------+-------+       +-------+--------+
         |                |     |                |       |                |  /etc/ganesha/ganesha.conf
         |   nfs.$svc.a   |     |   nfs.$svc.b   |       |   nfs.$svc.c   |  (bootstrap config)
         |                |     |                |       |                |
         +----------------+     +----------------+       +----------------+


.. _NFS-Ganesha NFS Server: https://github.com/nfs-ganesha/nfs-ganesha/wiki
.. _NFS-Ganesha Export Sample: https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/export.txt