=======================
CephFS Exports over NFS
=======================

CephFS namespaces can be exported over the NFS protocol using the
`NFS-Ganesha NFS server`_.

Requirements
============

- Latest Ceph file system with mgr enabled
- ``nfs-ganesha``, ``nfs-ganesha-ceph``, ``nfs-ganesha-rados-grace`` and
  ``nfs-ganesha-rados-urls`` packages (version 3.3 and above)
Create NFS Ganesha Cluster
==========================

.. code:: bash

    $ ceph nfs cluster create <type> <clusterid> [<placement>]

This creates a common recovery pool for all NFS Ganesha daemons, a new user
based on the ``clusterid``, and a common NFS Ganesha config RADOS object.
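For example, a cluster deployment might look like this (the cluster name
``mynfs`` and the host names are illustrative)::

    $ ceph nfs cluster create cephfs mynfs "host1,host2"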
.. note:: Since this command also brings up NFS Ganesha daemons using a ceph-mgr
   orchestrator module (see :doc:`/mgr/orchestrator`) such as "mgr/cephadm", at
   least one such module must be enabled for it to work.
Currently, an NFS Ganesha daemon deployed by cephadm listens on the standard
port, so only one daemon can be deployed per host.
``<type>`` signifies the export type, which corresponds to the NFS Ganesha file
system abstraction layer (FSAL). Permissible values are ``cephfs`` or
``rgw``, but currently only ``cephfs`` is supported.

``<clusterid>`` is an arbitrary string by which this NFS Ganesha cluster will be
known.
``<placement>`` is an optional string signifying which hosts should have NFS Ganesha
daemon containers running on them and, optionally, the total number of NFS
Ganesha daemons on the cluster (should you want to have more than one NFS Ganesha
daemon running per node). For example, the following placement string means
"deploy NFS Ganesha daemons on nodes host1 and host2 (one daemon per host)"::

    "host1,host2"

and this placement specification says to deploy a single NFS Ganesha daemon each
on nodes host1 and host2 (for a total of two NFS Ganesha daemons in the
cluster)::

    "2 host1,host2"
For more details, refer to :ref:`orchestrator-cli-placement-spec`, but keep
in mind that specifying the placement via a YAML file is not supported.
Update NFS Ganesha Cluster
==========================

.. code:: bash

    $ ceph nfs cluster update <clusterid> <placement>

This updates the deployed cluster according to the placement value.
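For example, to change the hypothetical ``mynfs`` cluster to a total of two
daemons placed on host1 and host2 (names illustrative)::

    $ ceph nfs cluster update mynfs "2 host1,host2"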
Delete NFS Ganesha Cluster
==========================

.. code:: bash

    $ ceph nfs cluster delete <clusterid>

This deletes the deployed cluster.
List NFS Ganesha Cluster
========================

.. code:: bash

    $ ceph nfs cluster ls

This lists deployed clusters.
Show NFS Ganesha Cluster Information
====================================

.. code:: bash

    $ ceph nfs cluster info [<clusterid>]

This displays the IP and port of the deployed cluster.
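For example, for a hypothetical cluster named ``mynfs``, the output is JSON of
roughly the following shape (host name, IP, and port are illustrative, and the
exact fields may differ by release)::

    $ ceph nfs cluster info mynfs
    {
        "mynfs": [
            {
                "hostname": "host1",
                "ip": [
                    "10.0.0.1"
                ],
                "port": 2049
            }
        ]
    }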
.. note:: This will not work with the rook backend. Instead, expose the port
   with the ``kubectl patch`` command and fetch the port details with the
   ``kubectl get services`` command::

       $ kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-<cluster-name>-<node-id>
       $ kubectl get services -n rook-ceph rook-ceph-nfs-<cluster-name>-<node-id>
Set Customized NFS Ganesha Configuration
========================================

.. code:: bash

    $ ceph nfs cluster config set <clusterid> -i <config_file>

With this the NFS cluster will use the specified config, and it will have
precedence over the default config blocks.
Example use cases:

1) Changing log level

   It can be done by adding a LOG block in the following way::

       LOG {
           COMPONENTS {
               ALL = FULL_DEBUG;
           }
       }
2) Adding custom export block

   The following sample block creates a single export. This export will not be
   managed by the `ceph nfs export` interface::

       EXPORT {
           Export_Id = 100;
           Transports = TCP;
           Path = /;
           Pseudo = /ceph/;
           Protocols = 4;
           Access_Type = RW;
           Attr_Expiration_Time = 0;
           Squash = None;
           FSAL {
               Name = CEPH;
               Filesystem = "filesystem name";
               User_Id = "user id";
               Secret_Access_Key = "secret key";
           }
       }
.. note:: The user specified in the FSAL block should have proper caps for the
   NFS-Ganesha daemons to access the Ceph cluster. Such a user can be created
   using `auth get-or-create`::

       # ceph auth get-or-create client.<user_id> mon 'allow r' osd 'allow rw pool=nfs-ganesha namespace=<nfs_cluster_name>, allow rw tag cephfs data=<fs_name>' mds 'allow rw path=<export_path>'
Reset NFS Ganesha Configuration
===============================

.. code:: bash

    $ ceph nfs cluster config reset <clusterid>

This removes the user-defined configuration.
.. note:: With a rook deployment, ganesha pods must be explicitly restarted
   for the new config blocks to be effective.
Create CephFS Export
====================

.. warning:: Currently, the volume/nfs interface is not integrated with dashboard. Both
   dashboard and volume/nfs interface have different export requirements and
   create exports differently. Management of dashboard created exports is not
   supported.

.. code:: bash

    $ ceph nfs export create cephfs <fsname> <clusterid> <binding> [--readonly] [--path=/path/in/cephfs]
This creates export RADOS objects containing the export block, where

``<fsname>`` is the name of the FS volume used by the NFS Ganesha cluster
that will serve this export.

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path and unique).
It specifies the export position within the NFS v4 Pseudo Filesystem.

``<path>`` is the path within cephfs. A valid path should be given; the
default path is ``/``. It need not be unique. The subvolume path can be
fetched using:

.. code:: bash

    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]
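For example, the following creates a read-write export of the root of a
volume named ``myfs`` under the pseudo path ``/cephfs`` on a cluster named
``mynfs`` (all names illustrative)::

    $ ceph nfs export create cephfs myfs mynfs /cephfs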
Delete CephFS Export
====================

.. code:: bash

    $ ceph nfs export delete <clusterid> <binding>

This deletes an export in an NFS Ganesha cluster, where:

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path).
List CephFS Exports
===================

.. code:: bash

    $ ceph nfs export ls <clusterid> [--detailed]

It lists exports for a cluster, where:

``<clusterid>`` is the NFS Ganesha cluster ID.

With the ``--detailed`` option enabled it shows the entire export block.
Get CephFS Export
=================

.. code:: bash

    $ ceph nfs export get <clusterid> <binding>

This displays the export block for a cluster based on the pseudo root name
(binding), where:

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path).
Update CephFS Export
====================

.. code:: bash

    $ ceph nfs export update -i <json_file>

This updates the cephfs export specified in the json file. An export in json
format can be fetched with the above get command. For example::

    $ ceph nfs export get vstart /cephfs > update_cephfs_export.json
    $ cat update_cephfs_export.json
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "vstart",
      "pseudo": "/cephfs",
      "access_type": "RW",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "vstart1",
        "fs_name": "a",
        "sec_label_xattr": ""
      }
    }
    # Here in the fetched export, pseudo and access_type are modified. Then the
    # modified file is passed to the update interface
    $ ceph nfs export update -i update_cephfs_export.json
    $ cat update_cephfs_export.json
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "vstart",
      "pseudo": "/cephfs_testing",
      "access_type": "RO",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "vstart1",
        "fs_name": "a",
        "sec_label_xattr": ""
      }
    }
Configuring NFS Ganesha to export CephFS with vstart
====================================================

1) Using cephadm

   .. code:: bash

       $ MDS=1 MON=1 OSD=3 NFS=1 ../src/vstart.sh -n -d --cephadm

   This will deploy a single NFS Ganesha daemon using ``vstart.sh``, where
   the daemon will listen on the default NFS Ganesha port.
2) Using test orchestrator

   .. code:: bash

       $ MDS=1 MON=1 OSD=3 NFS=1 ../src/vstart.sh -n -d

   The environment variable ``NFS`` is the number of NFS Ganesha daemons to be
   deployed, each listening on a random port.

   .. note:: NFS Ganesha packages must be pre-installed for this to work.
Mount
=====

After the exports are successfully created and the NFS Ganesha daemons are no
longer in the grace period, the exports can be mounted with:

.. code:: bash

    $ mount -t nfs -o port=<ganesha-port> <ganesha-host-name>:<ganesha-pseudo-path> <mount-point>

.. note:: Only NFS v4.0+ is supported.
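For example, for a daemon listening on the standard NFS port with a pseudo
root path of ``/cephfs`` (host name, path, and mount point illustrative)::

    $ mount -t nfs -o port=2049 host1:/cephfs /mnt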
.. _NFS-Ganesha NFS Server: https://github.com/nfs-ganesha/nfs-ganesha/wiki