=======================
CephFS Exports over NFS
=======================

CephFS namespaces can be exported over the NFS protocol using the
`NFS-Ganesha NFS server`_.

Requirements
============

- Latest Ceph file system with mgr enabled
- ``nfs-ganesha``, ``nfs-ganesha-ceph``, ``nfs-ganesha-rados-grace`` and
  ``nfs-ganesha-rados-urls`` packages (version 3.3 and above)

Create NFS Ganesha Cluster
==========================

.. code:: bash

    $ ceph nfs cluster create <type> <clusterid> [<placement>]

This creates a common recovery pool for all NFS Ganesha daemons, a new user
based on ``clusterid``, and a common NFS Ganesha config RADOS object.

.. note:: Since this command also brings up NFS Ganesha daemons using a ceph-mgr
   orchestrator module (see :doc:`/mgr/orchestrator`) such as "mgr/cephadm", at
   least one such module must be enabled for it to work.

   Currently, an NFS Ganesha daemon deployed by cephadm listens on the standard
   port, so only one daemon can be deployed per host.

``<type>`` signifies the export type, which corresponds to the NFS Ganesha file
system abstraction layer (FSAL). Permissible values are ``cephfs`` or
``rgw``, but currently only ``cephfs`` is supported.

``<clusterid>`` is an arbitrary string by which this NFS Ganesha cluster will be
known.

``<placement>`` is an optional string signifying which hosts should have NFS Ganesha
daemon containers running on them and, optionally, the total number of NFS
Ganesha daemons in the cluster (should you want to have more than one NFS Ganesha
daemon running per node). For example, the following placement string means
"deploy NFS Ganesha daemons on nodes host1 and host2 (one daemon per host)"::

    "host1,host2"

and this placement specification says to deploy a single NFS Ganesha daemon each
on nodes host1 and host2 (for a total of two NFS Ganesha daemons in the
cluster)::

    "2 host1,host2"

For more details, refer to :ref:`orchestrator-cli-placement-spec`, but keep
in mind that specifying the placement via a YAML file is not supported.
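
For example, a sketch of creating a cluster (the cluster name ``mynfs`` and
the host names are hypothetical):

.. code:: bash

    # one daemon each on host1 and host2, two daemons in total
    $ ceph nfs cluster create cephfs mynfs "2 host1,host2"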

Update NFS Ganesha Cluster
==========================

.. code:: bash

    $ ceph nfs cluster update <clusterid> <placement>

This updates the deployed cluster according to the placement value.
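
For example, a sketch of scaling the hypothetical ``mynfs`` cluster created
above out to three daemons (the host names are again hypothetical):

.. code:: bash

    $ ceph nfs cluster update mynfs "3 host1,host2,host3"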

Delete NFS Ganesha Cluster
==========================

.. code:: bash

    $ ceph nfs cluster delete <clusterid>

This deletes the deployed cluster.

List NFS Ganesha Cluster
========================

.. code:: bash

    $ ceph nfs cluster ls

This lists deployed clusters.
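
For example (output is illustrative; the command lists the ``clusterid`` of
each deployed cluster):

.. code:: bash

    $ ceph nfs cluster ls
    mynfs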

Show NFS Ganesha Cluster Information
====================================

.. code:: bash

    $ ceph nfs cluster info [<clusterid>]

This displays the IP address and port of the deployed cluster.

.. note:: This will not work with the rook backend. Instead, expose the port with
   the kubectl patch command and fetch the port details with the kubectl get
   services command::

      $ kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-<cluster-name>-<node-id>
      $ kubectl get services -n rook-ceph rook-ceph-nfs-<cluster-name>-<node-id>

Set Customized NFS Ganesha Configuration
========================================

.. code:: bash

    $ ceph nfs cluster config set <clusterid> -i <config_file>

With this the NFS cluster will use the specified config, which takes
precedence over the default config blocks.

Example use cases:

1) Changing the log level

   This can be done by adding a LOG block in the following way::

    LOG {
        COMPONENTS {
            ALL = FULL_DEBUG;
        }
    }

2) Adding a custom export block

   The following sample block creates a single export. This export will not be
   managed by the ``ceph nfs export`` interface::

    EXPORT {
        Export_Id = 100;
        Transports = TCP;
        Path = /;
        Pseudo = /ceph/;
        Protocols = 4;
        Access_Type = RW;
        Attr_Expiration_Time = 0;
        Squash = None;
        FSAL {
            Name = CEPH;
            Filesystem = "filesystem name";
            User_Id = "user id";
            Secret_Access_Key = "secret key";
        }
    }

.. note:: The user specified in the FSAL block should have proper caps for the
   NFS-Ganesha daemons to access the Ceph cluster. The user can be created in
   the following way using ``auth get-or-create``::

      # ceph auth get-or-create client.<user_id> mon 'allow r' osd 'allow rw pool=nfs-ganesha namespace=<nfs_cluster_name>, allow rw tag cephfs data=<fs_name>' mds 'allow rw path=<export_path>'
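
Putting it together, a sketch of applying the log-level config above to the
hypothetical ``mynfs`` cluster (the config file name is arbitrary):

.. code:: bash

    $ cat > nfs-ganesha.conf <<EOF
    LOG {
        COMPONENTS {
            ALL = FULL_DEBUG;
        }
    }
    EOF
    $ ceph nfs cluster config set mynfs -i nfs-ganesha.conf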

Reset NFS Ganesha Configuration
===============================

.. code:: bash

    $ ceph nfs cluster config reset <clusterid>

This removes the user defined configuration.

.. note:: With a rook deployment, ganesha pods must be explicitly restarted
   for the new config blocks to be effective.

Create CephFS Export
====================

.. warning:: Currently, the volume/nfs interface is not integrated with the
   dashboard. The dashboard and the volume/nfs interface have different export
   requirements and create exports differently. Management of exports created
   by the dashboard is not supported.

.. code:: bash

    $ ceph nfs export create cephfs <fsname> <clusterid> <binding> [--readonly] [--path=/path/in/cephfs]

This creates export RADOS objects containing the export block, where

``<fsname>`` is the name of the FS volume used by the NFS Ganesha cluster
that will serve this export.

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path and unique).
It specifies the export position within the NFS v4 Pseudo Filesystem.

``<path>`` is the path within CephFS. A valid path must be given; the default
path is ``/``. It need not be unique. The path of a subvolume can be fetched
using:

.. code:: bash

    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]
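
For example, a sketch of exporting a subvolume of a hypothetical file system
``myfs`` through the hypothetical ``mynfs`` cluster (the subvolume name is
hypothetical, and ``<uuid>`` stands in for the path component returned by
``getpath``):

.. code:: bash

    $ ceph fs subvolume getpath myfs mysubvol
    /volumes/_nogroup/mysubvol/<uuid>
    $ ceph nfs export create cephfs myfs mynfs /cephfs --path=/volumes/_nogroup/mysubvol/<uuid>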

Delete CephFS Export
====================

.. code:: bash

    $ ceph nfs export delete <clusterid> <binding>

This deletes an export in an NFS Ganesha cluster, where:

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path).
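
For example, to delete the hypothetical export created above:

.. code:: bash

    $ ceph nfs export delete mynfs /cephfs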

List CephFS Exports
===================

.. code:: bash

    $ ceph nfs export ls <clusterid> [--detailed]

It lists exports for a cluster, where:

``<clusterid>`` is the NFS Ganesha cluster ID.

With the ``--detailed`` option enabled it shows the entire export block.

Get CephFS Export
=================

.. code:: bash

    $ ceph nfs export get <clusterid> <binding>

This displays the export block for a cluster based on the pseudo root name
(binding), where:

``<clusterid>`` is the NFS Ganesha cluster ID.

``<binding>`` is the pseudo root path (must be an absolute path).


Update CephFS Export
====================

.. code:: bash

    $ ceph nfs export update -i <json_file>

This updates the CephFS export specified in the JSON file. An export in JSON
format can be fetched with the above ``get`` command. For example::

    $ ceph nfs export get vstart /cephfs > update_cephfs_export.json
    $ cat update_cephfs_export.json
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "vstart",
      "pseudo": "/cephfs",
      "access_type": "RW",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "vstart1",
        "fs_name": "a",
        "sec_label_xattr": ""
      },
      "clients": []
    }
    # Here, pseudo and access_type in the fetched export are modified. Then
    # the modified file is passed to the update interface.
    $ ceph nfs export update -i update_cephfs_export.json
    $ cat update_cephfs_export.json
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "vstart",
      "pseudo": "/cephfs_testing",
      "access_type": "RO",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "vstart1",
        "fs_name": "a",
        "sec_label_xattr": ""
      },
      "clients": []
    }


Configuring NFS Ganesha to export CephFS with vstart
====================================================

1) Using ``cephadm``

   .. code:: bash

       $ MDS=1 MON=1 OSD=3 NFS=1 ../src/vstart.sh -n -d --cephadm

   This will deploy a single NFS Ganesha daemon using ``vstart.sh``, where
   the daemon will listen on the default NFS Ganesha port.

2) Using test orchestrator

   .. code:: bash

       $ MDS=1 MON=1 OSD=3 NFS=1 ../src/vstart.sh -n -d

   Environment variable ``NFS`` is the number of NFS Ganesha daemons to be
   deployed, each listening on a random port.

   .. note:: NFS Ganesha packages must be pre-installed for this to work.

Mount
=====

After the exports are successfully created and the NFS Ganesha daemons are no
longer in the grace period, the exports can be mounted with:

.. code:: bash

    $ mount -t nfs -o port=<ganesha-port> <ganesha-host-name>:<ganesha-pseudo-path> <mount-point>

.. note:: Only NFS v4.0+ is supported.
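
For example, a sketch using the hypothetical values from the sections above
(2049 is the standard NFS port):

.. code:: bash

    $ mkdir -p /mnt/nfs
    $ mount -t nfs -o port=2049 host1:/cephfs /mnt/nfs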

.. _NFS-Ganesha NFS Server: https://github.com/nfs-ganesha/nfs-ganesha/wiki