=============================
CephFS & RGW Exports over NFS
=============================

CephFS namespaces and RGW buckets can be exported over the NFS protocol
using the `NFS-Ganesha NFS server`_.

The ``nfs`` manager module provides a general interface for managing
NFS exports of either CephFS directories or RGW buckets. Exports can
be managed either via the CLI ``ceph nfs export ...`` commands
or via the dashboard.

The deployment of the nfs-ganesha daemons can also be managed
automatically if either the :ref:`cephadm` or :ref:`mgr-rook`
orchestrators are enabled. If neither is in use (e.g., Ceph is
deployed via an external orchestrator like Ansible or Puppet), the
nfs-ganesha daemons must be manually deployed; for more information,
see :ref:`nfs-ganesha-config`.

.. note:: Starting with Ceph Pacific, the ``nfs`` mgr module must be enabled.

NFS Cluster management
======================

Create NFS Ganesha Cluster
--------------------------

.. code:: bash

    $ ceph nfs cluster create <cluster_id> [<placement>] [--port <port>] [--ingress --virtual-ip <ip>]

This creates a common recovery pool for all NFS Ganesha daemons, a new user
based on ``cluster_id``, and a common NFS Ganesha config RADOS object.

.. note:: Since this command also brings up NFS Ganesha daemons using a ceph-mgr
   orchestrator module (see :doc:`/mgr/orchestrator`) such as cephadm or rook, at
   least one such module must be enabled for it to work.

   Currently, an NFS Ganesha daemon deployed by cephadm listens on the standard
   port, so only one daemon will be deployed per host.

``<cluster_id>`` is an arbitrary string by which this NFS Ganesha cluster will be
known (e.g., ``mynfs``).

``<placement>`` is an optional string signifying which hosts should have NFS Ganesha
daemon containers running on them and, optionally, the total number of NFS
Ganesha daemons on the cluster (should you want to have more than one NFS Ganesha
daemon running per node). For example, the following placement string means
"deploy NFS Ganesha daemons on nodes host1 and host2" (one daemon per host)::

    "host1,host2"

and this placement specification says to deploy a single NFS Ganesha daemon each
on nodes host1 and host2 (for a total of two NFS Ganesha daemons in the
cluster)::

    "2 host1,host2"

NFS can be deployed on a port other than 2049 (the default) with ``--port <port>``.
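
For instance, the following invocation combines a placement specification with
a non-default port (the cluster name, hosts, and port number here are
illustrative, not required values):

.. code:: bash

    $ ceph nfs cluster create mynfs "2 host1,host2" --port 12049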

To deploy NFS with a high-availability front-end (virtual IP and load balancer), add the
``--ingress`` flag and specify a virtual IP address. This will deploy a combination
of keepalived and haproxy to provide a high-availability NFS frontend for the NFS
service.

.. note:: The ingress implementation is not yet complete. Enabling
   ingress will deploy multiple ganesha instances and balance
   load across them, but a host failure will not immediately
   cause cephadm to deploy a replacement daemon before the NFS
   grace period expires. This high-availability functionality
   is expected to be completed by the Quincy release (March
   2022).

For more details, refer to :ref:`orchestrator-cli-placement-spec`, but keep
in mind that specifying the placement via a YAML file is not supported.
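
As an illustration, creating a cluster with an ingress front-end might look
like this (the cluster name, hosts, and virtual IP are placeholders):

.. code:: bash

    $ ceph nfs cluster create mynfs "host1,host2" --ingress --virtual-ip 192.168.10.10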

Ingress
-------

The core *nfs* service will deploy one or more nfs-ganesha daemons,
each of which will provide a working NFS endpoint. The IP for each
NFS endpoint will depend on which host the nfs-ganesha daemons are
deployed on. By default, daemons are placed semi-randomly, but users can
also explicitly control where daemons are placed; see
:ref:`orchestrator-cli-placement-spec`.

When a cluster is created with ``--ingress``, an *ingress* service is
additionally deployed to provide load balancing and high-availability
for the NFS servers. A virtual IP is used to provide a known, stable
NFS endpoint that all clients can use to mount. Ceph will take care
of the details of redirecting traffic on the virtual IP to the
appropriate backend NFS servers, and of redeploying NFS servers when they
fail.

Enabling ingress via the ``ceph nfs cluster create`` command deploys a
simple ingress configuration with the most common configuration
options. Ingress can also be added to an existing NFS service (e.g.,
one created without the ``--ingress`` flag), and the basic NFS service can
also be modified after the fact to include non-default options, by modifying
the services directly. For more information, see :ref:`cephadm-ha-nfs`.
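
As a sketch of what such a direct modification might involve, an ingress
service spec for a hypothetical ``nfs.mynfs`` backend could look roughly like
the following (all values are illustrative) and would be applied with
``ceph orch apply -i <file>``:

.. code:: yaml

    service_type: ingress
    service_id: nfs.mynfs
    placement:
      count: 2
    spec:
      backend_service: nfs.mynfs
      frontend_port: 2049
      monitor_port: 9000
      virtual_ip: 192.168.10.10/24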

Show NFS Cluster IP(s)
----------------------

To examine an NFS cluster's IP endpoints, including the IPs for the individual NFS
daemons and the virtual IP (if any) for the ingress service, run:

.. code:: bash

    $ ceph nfs cluster info [<cluster_id>]

.. note:: This will not work with the rook backend. Instead, expose the port with
   the kubectl patch command and fetch the port details with the kubectl get
   services command::

    $ kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-<cluster-name>-<node-id>
    $ kubectl get services -n rook-ceph rook-ceph-nfs-<cluster-name>-<node-id>

Delete NFS Ganesha Cluster
--------------------------

.. code:: bash

    $ ceph nfs cluster rm <cluster_id>

This deletes the deployed cluster.
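
For example, removing the illustrative ``mynfs`` cluster from earlier:

.. code:: bash

    $ ceph nfs cluster rm mynfs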

Updating an NFS Cluster
-----------------------

In order to modify cluster parameters (like the port or placement), you need to
use the orchestrator interface to update the NFS service spec. The safest way to do
that is to export the current spec, modify it, and then re-apply it. For example,
to modify the ``nfs.foo`` service:

.. code:: bash

    $ ceph orch ls --service-name nfs.foo --export > nfs.foo.yaml
    $ vi nfs.foo.yaml
    $ ceph orch apply -i nfs.foo.yaml

For more information about the NFS service spec, see :ref:`deploy-cephadm-nfs-ganesha`.
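
As a concrete illustration of the spec format, a minimal sketch of what
``nfs.foo.yaml`` might contain after editing (the hosts and port are
assumptions for illustration):

.. code:: yaml

    service_type: nfs
    service_id: foo
    placement:
      hosts:
        - host1
        - host2
    spec:
      port: 12049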

List NFS Ganesha Clusters
-------------------------

.. code:: bash

    $ ceph nfs cluster ls

This lists deployed clusters.

.. _nfs-cluster-set:

Set Customized NFS Ganesha Configuration
----------------------------------------

.. code:: bash

    $ ceph nfs cluster config set <cluster_id> -i <config_file>

With this the NFS cluster will use the specified config, which will have
precedence over the default config blocks.

Example use cases include:

#. Changing log level. The logging level can be adjusted with the following config
   block::

    LOG {
        COMPONENTS {
            ALL = FULL_DEBUG;
        }
    }

#. Adding custom export block.

   The following sample block creates a single export. This export will not be
   managed by the ``ceph nfs export`` interface::

    EXPORT {
        Export_Id = 100;
        Transports = TCP;
        Path = /;
        Pseudo = /ceph/;
        Protocols = 4;
        Access_Type = RW;
        Attr_Expiration_Time = 0;
        Squash = None;

        FSAL {
            Name = CEPH;
            Filesystem = "filesystem name";
            User_Id = "user id";
            Secret_Access_Key = "secret key";
        }
    }

.. note:: The user specified in the FSAL block should have proper caps for the
   NFS-Ganesha daemons to access the Ceph cluster. Such a user can be created
   with ``auth get-or-create``::

    # ceph auth get-or-create client.<user_id> mon 'allow r' osd 'allow rw pool=.nfs namespace=<nfs_cluster_name>, allow rw tag cephfs data=<fs_name>' mds 'allow rw path=<export_path>'

View Customized NFS Ganesha Configuration
-----------------------------------------

.. code:: bash

    $ ceph nfs cluster config get <cluster_id>

This will output the user-defined configuration (if any).

Reset NFS Ganesha Configuration
-------------------------------

.. code:: bash

    $ ceph nfs cluster config reset <cluster_id>

This removes the user-defined configuration.

.. note:: With a rook deployment, ganesha pods must be explicitly restarted
   for the new config blocks to be effective.

Export management
=================

.. warning:: Currently, the nfs interface is not integrated with dashboard. Both
   dashboard and nfs interface have different export requirements and
   create exports differently. Management of dashboard-created exports is not
   supported.

Create CephFS Export
--------------------

.. code:: bash

    $ ceph nfs export create cephfs --cluster-id <cluster_id> --pseudo-path <pseudo_path> --fsname <fsname> [--readonly] [--path=/path/in/cephfs] [--client_addr <value>...] [--squash <value>]

This creates export RADOS objects containing the export block, where:

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the export position within the NFS v4 Pseudo Filesystem where the export will be available on the server. It must be an absolute path and unique.

``<fsname>`` is the name of the FS volume used by the NFS Ganesha cluster
that will serve this export.

``<path>`` is the path within CephFS. A valid path must be given; the default
path is ``/``. It need not be unique. The path of a subvolume can be fetched with:

.. code:: bash

    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]

``<client_addr>`` is the list of client addresses for which these export
permissions will be applicable. By default all clients can access the export
according to the specified export permissions. See the `NFS-Ganesha Export Sample`_
for permissible values.

``<squash>`` defines the kind of user id squashing to be performed. The default
value is ``no_root_squash``. See the `NFS-Ganesha Export Sample`_ for
permissible values.

.. note:: Export creation is supported only for NFS Ganesha clusters deployed using the nfs interface.
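
For example, a hypothetical read-only export of the directory ``/data`` of a
file system named ``myfs`` on the ``mynfs`` cluster (all names here are
illustrative) could be created with:

.. code:: bash

    $ ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /cephfs --fsname myfs --path=/data --readonly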

Create RGW Export
-----------------

There are two kinds of RGW exports:

- a *user* export will export all buckets owned by an
  RGW user, where the top-level directory of the export is a list of buckets.
- a *bucket* export will export a single bucket, where the top-level directory contains
  the objects in the bucket.

RGW bucket export
~~~~~~~~~~~~~~~~~

To export a *bucket*:

.. code:: bash

    $ ceph nfs export create rgw --cluster-id <cluster_id> --pseudo-path <pseudo_path> --bucket <bucket_name> [--user-id <user-id>] [--readonly] [--client_addr <value>...] [--squash <value>]

For example, to export *mybucket* via NFS cluster *mynfs* at the pseudo-path */bucketdata* to any host in the ``192.168.10.0/24`` network:

.. code:: bash

    $ ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --bucket mybucket --client_addr 192.168.10.0/24

.. note:: Export creation is supported only for NFS Ganesha clusters deployed using the nfs interface.

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the export position within the NFS v4 Pseudo Filesystem where the export will be available on the server. It must be an absolute path and unique.

``<bucket_name>`` is the name of the bucket that will be exported.

``<user_id>`` is optional, and specifies which RGW user will be used for read and write
operations to the bucket. If it is not specified, the user who owns the bucket will be
used.

.. note:: Currently, if multi-site RGW is enabled, Ceph can only export RGW buckets in the default realm.

``<client_addr>`` is the list of client addresses for which these export
permissions will be applicable. By default all clients can access the export
according to the specified export permissions. See the `NFS-Ganesha Export Sample`_
for permissible values.

``<squash>`` defines the kind of user id squashing to be performed. The default
value is ``no_root_squash``. See the `NFS-Ganesha Export Sample`_ for
permissible values.

RGW user export
~~~~~~~~~~~~~~~

To export an RGW *user*:

.. code:: bash

    $ ceph nfs export create rgw --cluster-id <cluster_id> --pseudo-path <pseudo_path> --user-id <user-id> [--readonly] [--client_addr <value>...] [--squash <value>]

For example, to export *myuser* via NFS cluster *mynfs* at the pseudo-path */myuser* to any host in the ``192.168.10.0/24`` network:

.. code:: bash

    $ ceph nfs export create rgw --cluster-id mynfs --pseudo-path /myuser --user-id myuser --client_addr 192.168.10.0/24

Delete Export
-------------

.. code:: bash

    $ ceph nfs export rm <cluster_id> <pseudo_path>

This deletes an export in an NFS Ganesha cluster, where:

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the pseudo root path (must be an absolute path).
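
For example, removing the illustrative CephFS export created above:

.. code:: bash

    $ ceph nfs export rm mynfs /cephfs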

List Exports
------------

.. code:: bash

    $ ceph nfs export ls <cluster_id> [--detailed]

This lists exports for a cluster, where:

``<cluster_id>`` is the NFS Ganesha cluster ID.

With the ``--detailed`` option enabled it shows the entire export block.

Get Export
----------

.. code:: bash

    $ ceph nfs export info <cluster_id> <pseudo_path>

This displays the export block for a cluster based on the pseudo root name, where:

``<cluster_id>`` is the NFS Ganesha cluster ID.

``<pseudo_path>`` is the pseudo root path (must be an absolute path).

Create or update export via JSON specification
----------------------------------------------

An existing export can be dumped in JSON format with:

.. code:: bash

    $ ceph nfs export info <cluster_id> <pseudo_path>

An export can be created or modified by importing a JSON description in the
same format:

.. code:: bash

    $ ceph nfs export apply <cluster_id> -i <json_file>

For example::

    $ ceph nfs export info mynfs /cephfs > update_cephfs_export.json
    $ cat update_cephfs_export.json
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "mynfs",
      "pseudo": "/cephfs",
      "access_type": "RW",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "nfs.mynfs.1",
        "fs_name": "a",
        "sec_label_xattr": ""
      },
      "clients": []
    }

The imported JSON can be a single dict describing a single export, or a JSON list
containing multiple export dicts.

The exported JSON can be modified and then reapplied. Below, *pseudo*
and *access_type* are modified. When modifying an export, the
provided JSON should fully describe the new state of the export (just
as when creating a new export), with the exception of the
authentication credentials, which will be carried over from the
previous state of the export where possible.

::

    $ ceph nfs export apply mynfs -i update_cephfs_export.json
    $ cat update_cephfs_export.json
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "mynfs",
      "pseudo": "/cephfs_testing",
      "access_type": "RO",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "nfs.mynfs.1",
        "fs_name": "a",
        "sec_label_xattr": ""
      },
      "clients": []
    }

An export can also be created or updated by injecting a Ganesha NFS EXPORT config
fragment. For example::

    $ ceph nfs export apply mynfs -i update_cephfs_export.conf
    $ cat update_cephfs_export.conf
    EXPORT {
        FSAL {
            name = "CEPH";
            filesystem = "a";
        }
        export_id = 1;
        path = "/";
        pseudo = "/a";
        access_type = "RW";
        squash = "none";
        attr_expiration_time = 0;
        security_label = true;
        protocols = 4;
        transports = "TCP";
    }

Mounting
========

After the exports are successfully created and NFS Ganesha daemons are
deployed, exports can be mounted with:

.. code:: bash

    $ mount -t nfs <ganesha-host-name>:<pseudo_path> <mount-point>

For example, if the NFS cluster was created with ``--ingress --virtual-ip 192.168.10.10``
and the export's pseudo-path was ``/foo``, the export can be mounted at ``/mnt`` with:

.. code:: bash

    $ mount -t nfs 192.168.10.10:/foo /mnt

If the NFS service is running on a non-standard port number:

.. code:: bash

    $ mount -t nfs -o port=<ganesha-port> <ganesha-host-name>:<ganesha-pseudo_path> <mount-point>

.. note:: Only NFS v4.0+ is supported.
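
Since only NFSv4 and above are served, it can help to pin the protocol version
on the client side. A sketch, assuming the standard Linux NFS client and the
illustrative ingress IP and export from above:

.. code:: bash

    $ mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.10.10:/foo /mnt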

Troubleshooting
===============

Checking NFS-Ganesha logs:

1) ``cephadm``: The NFS daemons can be listed with:

   .. code:: bash

       $ ceph orch ps --daemon-type nfs

   You can view the logs for a specific daemon (e.g., ``nfs.mynfs.0.0.myhost.xkfzal``) on
   the relevant host with:

   .. code:: bash

       # cephadm logs --fsid <fsid> --name nfs.mynfs.0.0.myhost.xkfzal

2) ``rook``: The NFS daemon logs can be fetched with:

   .. code:: bash

       $ kubectl logs -n rook-ceph rook-ceph-nfs-<cluster_id>-<node_id> nfs-ganesha

The NFS log level can be adjusted using the ``nfs cluster config set`` command (see :ref:`nfs-cluster-set`).

.. _nfs-ganesha-config:


Manual Ganesha deployment
=========================

It may be possible to deploy and manage the NFS ganesha daemons manually
instead of allowing cephadm or rook to do so.

.. note:: Manual configuration is not tested or fully documented; your
   mileage may vary. If you make this work, please help us by
   updating this documentation.

Limitations
-----------

* The ``mgr/nfs`` module enumerates NFS clusters via the orchestrator API; if NFS is
  not managed by the orchestrator (e.g., cephadm or rook) then this will not work. It
  may be possible to create the cluster and then mark the cephadm service as
  'unmanaged', but this is awkward and not ideal.

Requirements
------------

The following packages are required to enable CephFS and RGW exports with nfs-ganesha:

- ``nfs-ganesha``, ``nfs-ganesha-ceph``, ``nfs-ganesha-rados-grace`` and
  ``nfs-ganesha-rados-urls`` packages (version 3.3 and above)
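
As an illustration, on an RPM-based distribution the installation might look
like this (package availability and repository setup vary by distribution):

.. code:: bash

    # dnf install nfs-ganesha nfs-ganesha-ceph nfs-ganesha-rados-grace nfs-ganesha-rados-urls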

Ganesha Configuration Hierarchy
-------------------------------

Cephadm and rook start each nfs-ganesha daemon with a minimal
`bootstrap` configuration file that pulls from a shared `common`
configuration stored in the ``.nfs`` RADOS pool and watches the common
config for changes. Each export is written to a separate RADOS object
that is referenced by URL from the common config.

::

        rados://$pool/$namespace/export-$i        rados://$pool/$namespace/userconf-nfs.$cluster_id
                 (export config)                                 (user config)

      +----------+    +----------+    +----------+      +---------------------------+
      |          |    |          |    |          |      |                           |
      | export-1 |    | export-2 |    | export-3 |      | userconf-nfs.$cluster_id  |
      |          |    |          |    |          |      |                           |
      +----+-----+    +----+-----+    +-----+----+      +-------------+-------------+
           ^               ^                ^                         ^
           |               |                |                         |
           +---------------+----------------+-------------------------+
                                       %url |
                                            |
                                   +--------+--------+
                                   |                 |   rados://$pool/$namespace/conf-nfs.$svc
                                   |  conf-nfs.$svc  |   (common config)
                                   |                 |
                                   +--------+--------+
                                            ^
                                  watch_url |
               +----------------------------+----------------------------+
               |                            |                            |      RADOS
      +--------+----------------------------+----------------------------+--------------+
               |                            |                            |      CONTAINER
     watch_url |                  watch_url |                  watch_url |
               |                            |                            |
      +--------+-------+           +--------+-------+           +--------+-------+
      |                |           |                |           |                |
      |   nfs.$svc.a   |           |   nfs.$svc.b   |           |   nfs.$svc.c   |   /etc/ganesha/ganesha.conf
      |                |           |                |           |                |   (bootstrap config)
      +----------------+           +----------------+           +----------------+
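
To see how this fits together, the objects can be inspected directly with the
``rados`` tool. A sketch, assuming a cluster named ``mynfs`` (so the pool is
``.nfs``, the namespace is ``mynfs``, and the common config object follows the
``conf-nfs.$svc`` pattern from the diagram above):

.. code:: bash

    $ rados --pool .nfs --namespace mynfs ls
    $ rados --pool .nfs --namespace mynfs get conf-nfs.mynfs -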

.. _NFS-Ganesha NFS Server: https://github.com/nfs-ganesha/nfs-ganesha/wiki
.. _NFS-Ganesha Export Sample: https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/export.txt