.. _deploy-cephadm-nfs-ganesha:

===========
NFS Service
===========

.. note:: Only the NFSv4 protocol is supported.

The simplest way to manage NFS is via the ``ceph nfs cluster ...``
commands; see :ref:`mgr-nfs`. This document covers how to manage the
cephadm services directly, which should only be necessary for unusual NFS
configurations.
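
For example, a minimal sketch of creating a managed NFS cluster through that
interface (the ``foo`` name and host list are illustrative, and argument
details can vary between releases):

.. prompt:: bash #

    ceph nfs cluster create foo "host1,host2"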

Deploying NFS Ganesha
=====================

Cephadm deploys an NFS Ganesha daemon (or a set of daemons). The
configuration for NFS is stored in the ``nfs-ganesha`` pool, and exports
are managed via the ``ceph nfs export ...`` commands and via the dashboard.
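
As a sketch of export management through that interface (``myfs``, the
pseudo path, and the cluster name are illustrative; the exact argument
order differs across releases, so see :ref:`mgr-nfs` for the syntax in
your version):

.. prompt:: bash #

    ceph nfs export create cephfs myfs foo /cephfs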

To deploy an NFS Ganesha gateway, run the following command:

.. prompt:: bash #

    ceph orch apply nfs *<svc_id>* [--port *<port>*] [--placement ...]

For example, to deploy NFS with a service id of *foo* on the default
port 2049 with the default placement of a single daemon:

.. prompt:: bash #

    ceph orch apply nfs foo

See :ref:`orchestrator-cli-placement-spec` for the details of the placement
specification.
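
For instance, a sketch that pins two NFS daemons to specific hosts with an
inline placement (hostnames are illustrative):

.. prompt:: bash #

    ceph orch apply nfs foo --placement="2 host1 host2"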

Service Specification
=====================

Alternatively, an NFS service can be applied using a YAML specification.

.. code-block:: yaml

    service_type: nfs
    service_id: mynfs
    placement:
      hosts:
        - host1
        - host2
    spec:
      port: 12345

In this example, we run the server on the non-default ``port`` of
12345 (instead of the default 2049) on ``host1`` and ``host2``.

The specification can then be applied by running the following command:

.. prompt:: bash #

    ceph orch apply -i nfs.yaml
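
Once applied, one way to confirm that the service and its daemons have
started is to list them with the orchestrator (a sketch; output formatting
varies by release):

.. prompt:: bash #

    ceph orch ls nfs
    ceph orch ps --daemon_type nfs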

.. _cephadm-ha-nfs:

High-availability NFS
=====================

Deploying an *ingress* service for an existing *nfs* service will provide:

* a stable virtual IP that can be used to access the NFS server
* fail-over between hosts if there is a host failure
* load distribution across multiple NFS gateways (although this is rarely necessary)

Ingress for NFS can be deployed for an existing NFS service
(``nfs.mynfs`` in this example) with the following specification:

.. code-block:: yaml

    service_type: ingress
    service_id: nfs.mynfs
    placement:
      count: 2
    spec:
      backend_service: nfs.mynfs
      frontend_port: 2049
      monitor_port: 9000
      virtual_ip: 10.0.0.123/24

A few notes:

* The *virtual_ip* must include a CIDR prefix length, as in the
  example above. The virtual IP will normally be configured on the
  first identified network interface that has an existing IP in the
  same subnet. You can also specify a *virtual_interface_networks*
  property to match against IPs in other networks; see
  :ref:`ingress-virtual-ip` for more information.
* The *monitor_port* is used to access the haproxy load status
  page. The user is ``admin`` by default, but can be modified via
  an *admin* property in the spec. If a password is not specified
  via a *password* property in the spec, the auto-generated password
  can be found with:

  .. prompt:: bash #

      ceph config-key get mgr/cephadm/ingress.*{svc_id}*/monitor_password

  For example:

  .. prompt:: bash #

      ceph config-key get mgr/cephadm/ingress.nfs.mynfs/monitor_password

* The backend service (``nfs.mynfs`` in this example) should include
  a *port* property that is not 2049 to avoid conflicting with the
  ingress service, which could be placed on the same host(s). A
  combined sketch of such a configuration appears after this list.
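
Putting these notes together, here is a minimal sketch of a matched pair of
specifications, in which the *nfs* service listens on a non-default backend
port so that ingress can own 2049 (the ``12049`` port, the virtual IP, and
the placement counts are illustrative):

.. code-block:: yaml

    service_type: nfs
    service_id: mynfs
    placement:
      count: 2
    spec:
      # non-default port so the ingress frontend can use 2049
      port: 12049
    ---
    service_type: ingress
    service_id: nfs.mynfs
    placement:
      count: 2
    spec:
      backend_service: nfs.mynfs
      # clients mount via port 2049 on the virtual IP
      frontend_port: 2049
      # haproxy load status page
      monitor_port: 9000
      virtual_ip: 10.0.0.123/24

Both documents can be saved to a single file, separated by ``---``, and
applied together with ``ceph orch apply -i <file>``.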

Further Reading
===============

* CephFS: :ref:`cephfs-nfs`
* MGR: :ref:`mgr-nfs`