.. _mgr-nfs:

=============================
CephFS & RGW Exports over NFS
=============================

CephFS namespaces and RGW buckets can be exported over NFS protocol
using the `NFS-Ganesha NFS server`_.

The ``nfs`` manager module provides a general interface for managing
NFS exports of either CephFS directories or RGW buckets. Exports can
be managed either via the CLI ``ceph nfs export ...`` commands
or via the dashboard.

The deployment of the nfs-ganesha daemons can also be managed
automatically if either the :ref:`cephadm` or :ref:`mgr-rook`
orchestrators are enabled. If neither is in use (e.g., Ceph is
deployed via an external orchestrator like Ansible or Puppet), the
nfs-ganesha daemons must be deployed manually; for more information,
see :ref:`nfs-ganesha-config`.

.. note:: Starting with Ceph Pacific, the ``nfs`` mgr module must be enabled.

NFS Cluster management
======================

.. _nfs-module-cluster-create:

Create NFS Ganesha Cluster
--------------------------

.. code:: bash

   $ ceph nfs cluster create <cluster_id> [<placement>] [--ingress] [--virtual_ip <value>] [--ingress-mode {default|keepalive-only|haproxy-standard|haproxy-protocol}] [--port <int>]

This creates a common recovery pool for all NFS Ganesha daemons, a new user based on
the ``cluster_id``, and a common NFS Ganesha config RADOS object.

.. note:: Since this command also brings up NFS Ganesha daemons using a ceph-mgr
   orchestrator module (see :doc:`/mgr/orchestrator`) such as cephadm or rook, at
   least one such module must be enabled for it to work.

   Currently, an NFS Ganesha daemon deployed by cephadm listens on the standard
   port, so only one daemon can be deployed per host.

``<cluster_id>`` is an arbitrary string by which this NFS Ganesha cluster will be
known (e.g., ``mynfs``).

``<placement>`` is an optional string signifying which hosts should have NFS Ganesha
daemon containers running on them and, optionally, the total number of NFS
Ganesha daemons on the cluster (should you want to have more than one NFS Ganesha
daemon running per node). For example, the following placement string means
"deploy NFS Ganesha daemons on nodes host1 and host2 (one daemon per host)"::

    "host1,host2"

and this placement specification says to deploy one NFS Ganesha daemon each
on nodes host1 and host2 (for a total of two NFS Ganesha daemons in the
cluster)::

    "2 host1,host2"

NFS can be deployed on a port other than 2049 (the default) with ``--port <port>``.

To deploy NFS with a high-availability front-end (virtual IP and load balancer), add the
``--ingress`` flag and specify a virtual IP address. This will deploy a combination
of keepalived and haproxy to provide a high-availability NFS frontend for the NFS
service.

.. note:: The ingress implementation is not yet complete. Enabling
   ingress will deploy multiple ganesha instances and balance
   load across them, but a host failure will not immediately
   cause cephadm to deploy a replacement daemon before the NFS
   grace period expires. This high-availability functionality
   is expected to be completed by the Quincy release (March
   2022).

For more details, refer to :ref:`orchestrator-cli-placement-spec`, but keep
in mind that specifying the placement via a YAML file is not supported.

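For example, the following (with an illustrative cluster name, hosts, and virtual IP)
creates a cluster named ``mynfs`` with two daemons on ``host1`` and ``host2`` behind
an ingress front-end:

.. code:: bash

   $ ceph nfs cluster create mynfs "2 host1,host2" --ingress --virtual_ip 192.168.10.10
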
Deployment of NFS daemons and the ingress service is asynchronous: the
command may return before the services have completely started. You may
wish to check that these services do successfully start and stay running.
When using cephadm orchestration, these commands check service status:

.. code:: bash

   $ ceph orch ls --service_name=nfs.<cluster_id>
   $ ceph orch ls --service_name=ingress.nfs.<cluster_id>

Ingress
-------

The core *nfs* service will deploy one or more nfs-ganesha daemons,
each of which will provide a working NFS endpoint. The IP for each
NFS endpoint will depend on which host the nfs-ganesha daemons are
deployed on. By default, daemons are placed semi-randomly, but users can
also explicitly control where daemons are placed; see
:ref:`orchestrator-cli-placement-spec`.

When a cluster is created with ``--ingress``, an *ingress* service is
additionally deployed to provide load balancing and high-availability
for the NFS servers. A virtual IP is used to provide a known, stable
NFS endpoint that all clients can use to mount. Ceph will take care
of the details of redirecting traffic on the virtual IP to the
appropriate backend NFS servers, and of redeploying NFS servers when they
fail.

If a user additionally supplies ``--ingress-mode keepalive-only``, a
partial *ingress* service will be deployed that still provides a virtual
IP, but has nfs binding directly to that virtual IP and leaves out any
sort of load balancing or traffic redirection. This setup restricts
users to deploying only one nfs daemon, as multiple daemons cannot bind
to the same port on the virtual IP.

Providing ``--ingress-mode default`` instead will result in the same setup
as not providing the ``--ingress-mode`` flag. In this setup keepalived will be
deployed to handle forming the virtual IP and haproxy will be deployed
to handle load balancing and traffic redirection.

Enabling ingress via the ``ceph nfs cluster create`` command deploys a
simple ingress configuration with the most common configuration
options. Ingress can also be added to an existing NFS service (e.g.,
one created without the ``--ingress`` flag), and the basic NFS service can
also be modified after the fact to include non-default options, by modifying
the services directly. For more information, see :ref:`cephadm-ha-nfs`.

Show NFS Cluster IP(s)
----------------------

To examine an NFS cluster's IP endpoints, including the IPs for the individual NFS
daemons and the virtual IP (if any) for the ingress service:

.. code:: bash

   $ ceph nfs cluster info [<cluster_id>]

.. note:: This will not work with the rook backend. Instead, expose the port with
   the kubectl patch command and fetch the port details with the kubectl get services
   command::

    $ kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-<cluster-name>-<node-id>
    $ kubectl get services -n rook-ceph rook-ceph-nfs-<cluster-name>-<node-id>

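The output is JSON; the exact fields may vary between Ceph releases. As a rough
illustration, a hypothetical cluster ``mynfs`` without ingress might be reported
approximately as:

.. code:: bash

   $ ceph nfs cluster info mynfs
   {
     "mynfs": {
       "virtual_ip": null,
       "backend": [
         {
           "hostname": "host1",
           "ip": "192.168.10.11",
           "port": 2049
         }
       ]
     }
   }
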
Delete NFS Ganesha Cluster
--------------------------

.. code:: bash

   $ ceph nfs cluster rm <cluster_id>

This deletes the deployed cluster.

Removal of NFS daemons and the ingress service is asynchronous: the
command may return before the services have been completely deleted. You may
wish to check that these services are no longer reported. When using cephadm
orchestration, these commands check service status:

.. code:: bash

   $ ceph orch ls --service_name=nfs.<cluster_id>
   $ ceph orch ls --service_name=ingress.nfs.<cluster_id>

Updating an NFS Cluster
-----------------------

In order to modify cluster parameters (like the port or placement), you need to
use the orchestrator interface to update the NFS service spec. The safest way to do
that is to export the current spec, modify it, and then re-apply it. For example,
to modify the ``nfs.foo`` service:

.. code:: bash

   $ ceph orch ls --service-name nfs.foo --export > nfs.foo.yaml
   $ vi nfs.foo.yaml
   $ ceph orch apply -i nfs.foo.yaml

For more information about the NFS service spec, see :ref:`deploy-cephadm-nfs-ganesha`.

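As a rough sketch, the exported ``nfs.foo.yaml`` might contain something like the
following (the hosts and port here are placeholders; see
:ref:`deploy-cephadm-nfs-ganesha` for the authoritative set of spec fields):

.. code:: yaml

   service_type: nfs
   service_id: foo
   placement:
     hosts:
       - host1
       - host2
   spec:
     port: 12345
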
List NFS Ganesha Clusters
-------------------------

.. code:: bash

   $ ceph nfs cluster ls

This lists deployed clusters.

.. _nfs-cluster-set:

Set Customized NFS Ganesha Configuration
----------------------------------------

.. code:: bash

   $ ceph nfs cluster config set <cluster_id> -i <config_file>

With this the NFS cluster will use the specified config, which will take
precedence over the default config blocks.

Example use cases include:

#. Changing log level. The logging level can be adjusted with the following config
   fragment::

       LOG {
           COMPONENTS {
               ALL = FULL_DEBUG;
           }
       }

#. Adding a custom export block.

   The following sample block creates a single export. This export will not be
   managed by the ``ceph nfs export`` interface::

       EXPORT {
           Export_Id = 100;
           Transports = TCP;
           Path = /;
           Pseudo = /ceph/;
           Protocols = 4;
           Access_Type = RW;
           Attr_Expiration_Time = 0;
           Squash = None;
           FSAL {
               Name = CEPH;
               Filesystem = "filesystem name";
               User_Id = "user id";
               Secret_Access_Key = "secret key";
           }
       }

.. note:: The user specified in the FSAL block should have proper caps for NFS-Ganesha
   daemons to access the Ceph cluster. Such a user can be created in the following way
   using ``auth get-or-create``::

       # ceph auth get-or-create client.<user_id> mon 'allow r' osd 'allow rw pool=.nfs namespace=<nfs_cluster_name>, allow rw tag cephfs data=<fs_name>' mds 'allow rw path=<export_path>'

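For instance, to raise the log level on a hypothetical cluster ``mynfs``, the LOG
fragment above could be saved to a file and applied (the file name is arbitrary)::

    $ cat nfs-debug.conf
    LOG {
        COMPONENTS {
            ALL = FULL_DEBUG;
        }
    }
    $ ceph nfs cluster config set mynfs -i nfs-debug.conf
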
View Customized NFS Ganesha Configuration
-----------------------------------------

.. code:: bash

   $ ceph nfs cluster config get <cluster_id>

This will output the user-defined configuration (if any).

Reset NFS Ganesha Configuration
-------------------------------

.. code:: bash

   $ ceph nfs cluster config reset <cluster_id>

This removes the user-defined configuration.

.. note:: With a rook deployment, ganesha pods must be explicitly restarted
   for the new config blocks to be effective.


Export Management
=================

.. warning:: Currently, the nfs interface is not integrated with the dashboard. The
   dashboard and the nfs interface have different export requirements and
   create exports differently. Management of dashboard-created exports is not
   supported.

Create CephFS Export
--------------------

.. code:: bash

   $ ceph nfs export create cephfs --cluster-id <cluster_id> --pseudo-path <pseudo_path> --fsname <fsname> [--readonly] [--path=/path/in/cephfs] [--client_addr <value>...] [--squash <value>] [--sectype <value>...]

This creates export RADOS objects containing the export block, where

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the export position within the NFS v4 Pseudo Filesystem where the export will be available on the server. It must be an absolute path and unique.

``<fsname>`` is the name of the FS volume used by the NFS Ganesha cluster
that will serve this export.

``<path>`` is the path within CephFS. A valid path should be given; the default
path is ``/``. It need not be unique. A subvolume path can be fetched using:

.. code::

   $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]

``<client_addr>`` is the list of client addresses for which these export
permissions will be applicable. By default all clients can access the export
according to specified export permissions. See the `NFS-Ganesha Export Sample`_
for permissible values.

``<squash>`` defines the kind of user id squashing to be performed. The default
value is ``no_root_squash``. See the `NFS-Ganesha Export Sample`_ for
permissible values.

``<sectype>`` specifies which authentication methods will be used when
connecting to the export. Valid values include "krb5p", "krb5i", "krb5", "sys",
and "none". More than one value can be supplied. The flag may be specified
multiple times (example: ``--sectype=krb5p --sectype=krb5i``) or multiple
values may be separated by a comma (example: ``--sectype krb5p,krb5i``). The
server will negotiate a supported security type with the client, preferring
the supplied methods left-to-right.

.. note:: Specifying values for sectype that require Kerberos will only function on servers
   that are configured to support Kerberos. Setting up NFS-Ganesha to support Kerberos
   is outside the scope of this document.

.. note:: Export creation is supported only for NFS Ganesha clusters deployed using the nfs interface.

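For example, to export the root of a hypothetical file system ``myfs`` through the
``mynfs`` cluster at the pseudo-path ``/cephfs``, and a read-only export of the same
file system restricted to one client network (all names here are illustrative):

.. code:: bash

   $ ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /cephfs --fsname myfs
   $ ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /cephfs-ro --fsname myfs --readonly --client_addr 192.168.10.0/24
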
Create RGW Export
-----------------

There are two kinds of RGW exports:

- a *user* export will export all buckets owned by an
  RGW user, where the top-level directory of the export is a list of buckets.
- a *bucket* export will export a single bucket, where the top-level directory contains
  the objects in the bucket.

RGW bucket export
^^^^^^^^^^^^^^^^^

To export a *bucket*:

.. code::

   $ ceph nfs export create rgw --cluster-id <cluster_id> --pseudo-path <pseudo_path> --bucket <bucket_name> [--user-id <user-id>] [--readonly] [--client_addr <value>...] [--squash <value>] [--sectype <value>...]

For example, to export *mybucket* via NFS cluster *mynfs* at the pseudo-path */bucketdata* to any host in the ``192.168.10.0/24`` network:

.. code::

   $ ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --bucket mybucket --client_addr 192.168.10.0/24

.. note:: Export creation is supported only for NFS Ganesha clusters deployed using the nfs interface.

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the export position within the NFS v4 Pseudo Filesystem where the export will be available on the server. It must be an absolute path and unique.

``<bucket_name>`` is the name of the bucket that will be exported.

``<user_id>`` is optional, and specifies which RGW user will be used for read and write
operations to the bucket. If it is not specified, the user who owns the bucket will be
used.

.. note:: Currently, if multi-site RGW is enabled, Ceph can only export RGW buckets in the default realm.

``<client_addr>`` is the list of client addresses for which these export
permissions will be applicable. By default all clients can access the export
according to specified export permissions. See the `NFS-Ganesha Export Sample`_
for permissible values.

``<squash>`` defines the kind of user id squashing to be performed. The default
value is ``no_root_squash``. See the `NFS-Ganesha Export Sample`_ for
permissible values.

``<sectype>`` specifies which authentication methods will be used when
connecting to the export. Valid values include "krb5p", "krb5i", "krb5", "sys",
and "none". More than one value can be supplied. The flag may be specified
multiple times (example: ``--sectype=krb5p --sectype=krb5i``) or multiple
values may be separated by a comma (example: ``--sectype krb5p,krb5i``). The
server will negotiate a supported security type with the client, preferring
the supplied methods left-to-right.

.. note:: Specifying values for sectype that require Kerberos will only function on servers
   that are configured to support Kerberos. Setting up NFS-Ganesha to support Kerberos
   is outside the scope of this document.

RGW user export
^^^^^^^^^^^^^^^

To export an RGW *user*:

.. code::

   $ ceph nfs export create rgw --cluster-id <cluster_id> --pseudo-path <pseudo_path> --user-id <user-id> [--readonly] [--client_addr <value>...] [--squash <value>]

For example, to export *myuser* via NFS cluster *mynfs* at the pseudo-path */myuser* to any host in the ``192.168.10.0/24`` network:

.. code::

   $ ceph nfs export create rgw --cluster-id mynfs --pseudo-path /myuser --user-id myuser --client_addr 192.168.10.0/24


Delete Export
-------------

.. code:: bash

   $ ceph nfs export rm <cluster_id> <pseudo_path>

This deletes an export in an NFS Ganesha cluster, where:

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the pseudo root path (must be an absolute path).

List Exports
------------

.. code:: bash

   $ ceph nfs export ls <cluster_id> [--detailed]

This lists exports for a cluster, where:

``<cluster_id>`` is the NFS Ganesha cluster ID.

With the ``--detailed`` option enabled it shows the entire export block.

Get Export
----------

.. code:: bash

   $ ceph nfs export info <cluster_id> <pseudo_path>

This displays the export block for a cluster based on the pseudo root path,
where:

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the pseudo root path (must be an absolute path).


Create or update export via JSON specification
----------------------------------------------

An existing export can be dumped in JSON format with:

.. prompt:: bash #

   ceph nfs export info <cluster_id> <pseudo_path>

An export can be created or modified by importing a JSON description in the
same format:

.. prompt:: bash #

   ceph nfs export apply <cluster_id> -i <json_file>

For example::

   $ ceph nfs export info mynfs /cephfs > update_cephfs_export.json
   $ cat update_cephfs_export.json
   {
     "export_id": 1,
     "path": "/",
     "cluster_id": "mynfs",
     "pseudo": "/cephfs",
     "access_type": "RW",
     "squash": "no_root_squash",
     "security_label": true,
     "protocols": [
       4
     ],
     "transports": [
       "TCP"
     ],
     "fsal": {
       "name": "CEPH",
       "user_id": "nfs.mynfs.1",
       "fs_name": "a",
       "sec_label_xattr": ""
     },
     "clients": []
   }

The imported JSON can be a single dict describing a single export, or a JSON list
containing multiple export dicts.

The exported JSON can be modified and then reapplied. Below, *pseudo*
and *access_type* are modified. When modifying an export, the
provided JSON should fully describe the new state of the export (just
as when creating a new export), with the exception of the
authentication credentials, which will be carried over from the
previous state of the export where possible.

::

   $ ceph nfs export apply mynfs -i update_cephfs_export.json
   $ cat update_cephfs_export.json
   {
     "export_id": 1,
     "path": "/",
     "cluster_id": "mynfs",
     "pseudo": "/cephfs_testing",
     "access_type": "RO",
     "squash": "no_root_squash",
     "security_label": true,
     "protocols": [
       4
     ],
     "transports": [
       "TCP"
     ],
     "fsal": {
       "name": "CEPH",
       "user_id": "nfs.mynfs.1",
       "fs_name": "a",
       "sec_label_xattr": ""
     },
     "clients": []
   }

An export can also be created or updated by injecting a Ganesha NFS EXPORT config
fragment. For example::

   $ ceph nfs export apply mynfs -i update_cephfs_export.conf
   $ cat update_cephfs_export.conf
   EXPORT {
       FSAL {
           name = "CEPH";
           filesystem = "a";
       }
       export_id = 1;
       path = "/";
       pseudo = "/a";
       access_type = "RW";
       squash = "none";
       attr_expiration_time = 0;
       security_label = true;
       protocols = 4;
       transports = "TCP";
   }


Mounting
========

After the exports are successfully created and NFS Ganesha daemons are
deployed, exports can be mounted with:

.. code:: bash

   $ mount -t nfs <ganesha-host-name>:<pseudo_path> <mount-point>

For example, if the NFS cluster was created with ``--ingress --virtual_ip 192.168.10.10``
and the export's pseudo-path was ``/foo``, the export can be mounted at ``/mnt`` with:

.. code:: bash

   $ mount -t nfs 192.168.10.10:/foo /mnt

If the NFS service is running on a non-standard port number:

.. code:: bash

   $ mount -t nfs -o port=<ganesha-port> <ganesha-host-name>:<ganesha-pseudo_path> <mount-point>

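For example, assuming the service above had instead been deployed on the
hypothetical port 12049:

.. code:: bash

   $ mount -t nfs -o port=12049 192.168.10.10:/foo /mnt
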
.. note:: Only NFS v4.0+ is supported.

Troubleshooting
===============

Checking NFS-Ganesha logs with:

1) ``cephadm``: The NFS daemons can be listed with:

   .. code:: bash

      $ ceph orch ps --daemon-type nfs

   You can view the logs for a specific daemon (e.g., ``nfs.mynfs.0.0.myhost.xkfzal``) on
   the relevant host with:

   .. code:: bash

      # cephadm logs --fsid <fsid> --name nfs.mynfs.0.0.myhost.xkfzal

2) ``rook``:

   .. code:: bash

      $ kubectl logs -n rook-ceph rook-ceph-nfs-<cluster_id>-<node_id> nfs-ganesha

The NFS log level can be adjusted using the ``nfs cluster config set`` command (see :ref:`nfs-cluster-set`).


.. _nfs-ganesha-config:


Manual Ganesha deployment
=========================

It may be possible to deploy and manage the NFS ganesha daemons without
orchestration frameworks such as cephadm or rook.

.. note:: Manual configuration is not tested or fully documented; your
   mileage may vary. If you make this work, please help us by
   updating this documentation.

Limitations
-----------

If no orchestrator module is enabled for the Ceph Manager, the NFS cluster
management commands, such as those starting with ``ceph nfs cluster``, will not
function. However, commands that manage NFS exports, like those prefixed with
``ceph nfs export``, are expected to work as long as the necessary RADOS objects
have already been created. The exact RADOS objects required are not documented
at this time as support for this feature is incomplete. A curious reader can
find some details about the objects by reading the source code for the
``mgr/nfs`` module (found in the ceph source tree under
``src/pybind/mgr/nfs``).


Requirements
------------

The following packages are required to enable CephFS and RGW exports with nfs-ganesha:

- ``nfs-ganesha``, ``nfs-ganesha-ceph``, ``nfs-ganesha-rados-grace`` and
  ``nfs-ganesha-rados-urls`` packages (version 3.3 and above)

Ganesha Configuration Hierarchy
-------------------------------

Cephadm and rook start each nfs-ganesha daemon with a minimal
`bootstrap` configuration file that pulls from a shared `common`
configuration stored in the ``.nfs`` RADOS pool and watches the common
config for changes. Each export is written to a separate RADOS object
that is referenced by URL from the common config.

.. ditaa::

                         rados://$pool/$namespace/export-$i           rados://$pool/$namespace/userconf-nfs.$cluster_id
                                  (export config)                             (user config)

                    +----------+    +----------+    +----------+      +---------------------------+
                    |          |    |          |    |          |      |                           |
                    | export-1 |    | export-2 |    | export-3 |      |  userconf-nfs.$cluster_id |
                    |          |    |          |    |          |      |                           |
                    +----+-----+    +----+-----+    +-----+----+      +-------------+-------------+
                         ^               ^                ^                         ^
                         |               |                |                         |
                         +--------------------------------+-------------------------+
                                                     %url |
                                                          |
                                                 +--------+--------+
                                                 |                 |    rados://$pool/$namespace/conf-nfs.$svc
                                                 |  conf+nfs.$svc  |    (common config)
                                                 |                 |
                                                 +--------+--------+
                                                          ^
                                                          |
                                              watch_url   |
                                  +-----------------------+----------------------+
                                  |                       |                      |
                                  |                       |                      |      RADOS
  +----------------------------------------------------------------------------------+
                                  |                       |                      |      CONTAINER
                       watch_url  |            watch_url  |           watch_url  |
                                  |                       |                      |
                          +-------+--------+      +-------+--------+     +-------+--------+
                          |                |      |                |     |                |  /etc/ganesha/ganesha.conf
                          |   nfs.$svc.a   |      |   nfs.$svc.b   |     |   nfs.$svc.c   |  (bootstrap config)
                          |                |      |                |     |                |
                          +----------------+      +----------------+     +----------------+

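As a rough, hand-written sketch of this hierarchy (the object names below assume a
hypothetical cluster ``foo``; the actual files generated by cephadm or rook differ in
detail), the bootstrap config mainly tells ganesha which common config object to
include and watch, and the common config in turn pulls in the per-export objects via
``%url`` includes::

    # /etc/ganesha/ganesha.conf (bootstrap config, sketch)
    RADOS_URLS {
        UserId = "nfs.foo.host1";
        watch_url = "rados://.nfs/foo/conf-nfs.foo";
    }
    %url "rados://.nfs/foo/conf-nfs.foo"

    # rados://.nfs/foo/conf-nfs.foo (common config, sketch)
    %url "rados://.nfs/foo/export-1"
    %url "rados://.nfs/foo/export-2"
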
.. _NFS-Ganesha NFS Server: https://github.com/nfs-ganesha/nfs-ganesha/wiki
.. _NFS-Ganesha Export Sample: https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/export.txt