.. _cephfs-nfs:

===
NFS
===

CephFS namespaces can be exported over the NFS protocol using the `NFS-Ganesha
NFS server`_. This document provides information on configuring NFS-Ganesha
clusters manually. The simplest and preferred way of managing NFS-Ganesha
clusters and CephFS exports is using ``ceph nfs ...`` commands, with the
deployment handled by cephadm or Rook. See :doc:`/mgr/nfs` for more details.
Requirements
============

- Ceph file system
- ``libcephfs2``, ``nfs-ganesha`` and ``nfs-ganesha-ceph`` packages on the
  NFS server host machine
- NFS-Ganesha server host connected to the Ceph public network

.. note::
   It is recommended to use a stable version of NFS-Ganesha packages, 3.5 or
   later, with a stable version of Ceph packages, Pacific (16.2.x) or later.
25
Configuring NFS-Ganesha to export CephFS
========================================

NFS-Ganesha provides a File System Abstraction Layer (FSAL) to plug in
different storage backends. FSAL_CEPH_ is the plugin FSAL for CephFS. For
each NFS-Ganesha export, FSAL_CEPH_ uses a libcephfs client to mount the
CephFS path that NFS-Ganesha exports.

Setting up NFS-Ganesha with CephFS involves setting up NFS-Ganesha's and
Ceph's configuration files and the CephX access credentials for the Ceph
clients created by NFS-Ganesha to access CephFS.
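For example, the CephX credentials can be created with ``ceph fs authorize``;
the file system name ``a`` and the client ID ``ganesha`` below are
illustrative assumptions, adjust them to your deployment:

.. code:: bash

    # Hypothetical client ID "ganesha" and file system name "a"; adjust both.
    # Grants read/write access to the root of the CephFS file system.
    ceph fs authorize a client.ganesha / rw

    # Write the resulting keyring where the libcephfs client can read it.
    ceph auth get client.ganesha > /etc/ceph/ceph.client.ganesha.keyring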
37
NFS-Ganesha configuration
-------------------------

Here's a `sample ganesha.conf`_ configured with FSAL_CEPH_. It is suitable
for a standalone NFS-Ganesha server, or an active/passive configuration of
NFS-Ganesha servers, to be managed by some sort of clustering software
(e.g., Pacemaker). Important details about the options are added as comments
in the sample conf. There are options to do the following:

- minimize Ganesha caching wherever possible since the libcephfs clients
  (of FSAL_CEPH_) also cache aggressively

- read from Ganesha config files stored in RADOS objects

- store client recovery data in the RADOS OMAP key-value interface

- mandate NFSv4.1+ access

- enable read delegations (need at least v13.0.1 ``libcephfs2`` package
  and v2.6.0 stable ``nfs-ganesha`` and ``nfs-ganesha-ceph`` packages)
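As a minimal sketch of the options above (not a substitute for the fully
commented sample conf), an export with FSAL_CEPH_ that mandates NFSv4.1+ and
minimizes Ganesha caching might look like this; the export ID, pseudo path
and CephX client ID are assumptions:

.. code:: ini

    NFS_CORE_PARAM {
        # NFSv3, NLM and RQUOTA are disabled to mandate NFSv4.x access
        Enable_NLM = false;
        Enable_RQUOTA = false;
        Protocols = 4;
    }

    MDCACHE {
        Dir_Chunk = 0;                # minimize caching; libcephfs caches
    }

    NFSv4 {
        RecoveryBackend = rados_ng;   # client recovery data in RADOS OMAP
        Minor_Versions = 1, 2;        # NFSv4.1+ only
    }

    EXPORT {
        Export_ID = 100;              # assumed export ID
        Path = "/";                   # CephFS path to export
        Pseudo = "/cephfs";           # assumed pseudo path
        Protocols = 4;
        Transports = TCP;
        FSAL {
            Name = CEPH;
            User_Id = "ganesha";      # assumed CephX client ID
        }
    }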
58
Configuration for libcephfs clients
-----------------------------------

``ceph.conf`` for libcephfs clients includes a ``[client]`` section with
the ``mon_host`` option set to let the clients connect to the Ceph cluster's
monitors, usually generated via ``ceph config generate-minimal-conf``.
For example::

    [client]
    mon host = [v2:192.168.1.7:3300,v1:192.168.1.7:6789], [v2:192.168.1.8:3300,v1:192.168.1.8:6789], [v2:192.168.1.9:3300,v1:192.168.1.9:6789]
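A minimal configuration like the one above can be generated on a node with
admin access to the cluster and copied to the NFS-Ganesha host; the host name
``ganesha-host`` is an assumption:

.. code:: bash

    # Run on a node with admin access to the Ceph cluster.
    ceph config generate-minimal-conf > /tmp/ceph.conf

    # Copy to the NFS-Ganesha server host (host name is an assumption).
    scp /tmp/ceph.conf ganesha-host:/etc/ceph/ceph.conf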
69
Mount using NFSv4 clients
=========================

It is preferred to mount the NFS-Ganesha exports using NFSv4.1+ protocols
to get the benefit of sessions.

Conventions for mounting NFS resources are platform-specific. The
following conventions work on Linux and some Unix platforms:

.. code:: bash

    mount -t nfs -o nfsvers=4.1,proto=tcp <ganesha-host-name>:<ganesha-pseudo-path> <mount-point>

83
.. _FSAL_CEPH: https://github.com/nfs-ganesha/nfs-ganesha/tree/next/src/FSAL/FSAL_CEPH
.. _NFS-Ganesha NFS server: https://github.com/nfs-ganesha/nfs-ganesha/wiki
.. _sample ganesha.conf: https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/ceph.conf