.. note::
   It is highly recommended to use :doc:`/cephadm/index` or another Ceph
   orchestrator to set up the Ceph cluster. Use the approach described here
   only if you are setting up the cluster manually. If you still intend to
   deploy MDS daemons manually, :doc:`/cephadm/services/mds/` may also be
   helpful.

============================
 Deploying Metadata Servers
============================

Each CephFS file system requires at least one MDS. The cluster operator will
generally use their automated deployment tool to launch required MDS servers as
needed. Rook and Ansible (via the ceph-ansible playbooks) are recommended tools
for doing this. For clarity, we also show the systemd commands here, which the
deployment technology may run for you when deploying on bare metal.

See `MDS Config Reference`_ for details on configuring metadata servers.
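
If you use :doc:`/cephadm/index`, deploying the MDS daemons for a file system
is a single orchestrator command rather than the manual steps shown below. A
minimal sketch, assuming a file system named ``cephfs`` already exists and that
``host1`` and ``host2`` are the intended MDS hosts (all three names are
placeholders): ::

    $ ceph orch apply mds cephfs --placement="2 host1 host2"
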

Provisioning Hardware for an MDS
================================

The present version of the MDS is single-threaded and CPU-bound for most
activities, including responding to client requests. Even under the most
aggressive client loads, an MDS uses about 2 to 3 CPU cores, because several
miscellaneous upkeep threads run in tandem with the request-handling thread.

Even so, it is recommended that an MDS server be well provisioned with an
advanced CPU with sufficient cores. Development is on-going to make better use
of available CPU cores in the MDS; it is expected that in future versions of
Ceph the MDS server will improve performance by taking advantage of more cores.

The other dimension to MDS performance is the available RAM for caching. The
MDS necessarily manages a distributed and cooperative metadata cache among all
clients and other active MDSs. Therefore it is essential to provide the MDS
with sufficient RAM to enable faster metadata access and mutation. The default
MDS cache size (see also :doc:`/cephfs/cache-configuration`) is 4GB. It is
recommended to provision at least 8GB of RAM for the MDS to support this cache
size.
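
The cache size is governed by the ``mds_cache_memory_limit`` setting (in
bytes). As a hedged sketch of how to inspect and adjust it with ``ceph config``
(the 8 GiB value below is purely illustrative and assumes the host has RAM to
spare beyond it): ::

    $ ceph config get mds mds_cache_memory_limit
    $ ceph config set mds mds_cache_memory_limit 8589934592

Keep in mind that this limit is not a hard bound on the MDS process size, which
is part of why the guidance above pairs a 4GB cache with at least 8GB of RAM.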

Generally, an MDS serving a large cluster of clients (1000 or more) will use at
least 64GB of cache. An MDS with a larger cache is not well explored in the
largest known community clusters; there may be diminishing returns where
management of such a large cache negatively impacts performance in surprising
ways. It would be best to do analysis with expected workloads to determine if
provisioning more RAM is worthwhile.

In a bare-metal cluster, the best practice is to over-provision hardware for
the MDS server. Even if a single MDS daemon is unable to fully utilize the
hardware, it may be desirable later on to start more active MDS daemons on the
same node to fully utilize the available cores and memory. Additionally, it may
become clear with workloads on the cluster that performance improves with
multiple active MDS on the same node rather than over-provisioning a single
MDS.
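
If you later decide to run multiple active MDS daemons, the number of active
ranks is set per file system. A minimal sketch, assuming a file system named
``cephfs`` (a placeholder name) and that a standby daemon is available to take
the additional rank: ::

    $ ceph fs set cephfs max_mds 2
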

Finally, be aware that CephFS is a highly-available file system: it supports
standby MDS daemons (see also :ref:`mds-standby`) for rapid failover. To get a
real benefit from deploying standbys, it is usually necessary to distribute MDS
daemons across at least two nodes in the cluster. Otherwise, a hardware failure
on a single node may result in the file system becoming unavailable.
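
To be alerted when fewer standbys are available than you expect, you can tell
the cluster how many standbys the file system wants. A hedged sketch, again
using the placeholder file system name ``cephfs``: ::

    $ ceph fs set cephfs standby_count_wanted 1
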

Co-locating the MDS with other Ceph daemons (hyperconverged) is an effective
and recommended way to accomplish this, so long as all daemons are configured
to use available hardware within certain limits. For the MDS, this generally
means limiting its cache size.


Adding an MDS
=============

#. Create an mds directory ``/var/lib/ceph/mds/ceph-${id}`` (for example, with
   ``mkdir -p``). The daemon only uses this directory to store its keyring.

#. Create the authentication key, if you use CephX: ::

    $ sudo ceph auth get-or-create mds.${id} mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/ceph/mds/ceph-${id}/keyring

#. Start the service: ::

    $ sudo systemctl start ceph-mds@${id}

#. The status of the cluster (``ceph status``) should show: ::

    mds: ${id}:1 {0=${id}=up:active} 2 up:standby

#. Optionally, configure the file system the MDS should join (:ref:`mds-join-fs`): ::

    $ ceph config set mds.${id} mds_join_fs ${fs}
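
To have the daemon start again after a reboot, and to confirm that the new MDS
has been picked up by the file system, something like the following can be used
(a hedged sketch; ``${fs}`` is the file system name used in the last step
above): ::

    $ sudo systemctl enable ceph-mds@${id}
    $ ceph fs status ${fs}
    $ ceph mds stat
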

Removing an MDS
===============

If you have a metadata server in your cluster that you'd like to remove, you
may use the following method.

#. (Optionally:) Create a new replacement Metadata Server. If there is no
   replacement MDS to take over once the MDS is removed, the file system will
   become unavailable to clients. If that is not desirable, consider adding a
   metadata server before tearing down the metadata server you would like to
   take offline.

#. Stop the MDS to be removed. ::

    $ sudo systemctl stop ceph-mds@${id}

   The MDS will automatically notify the Ceph monitors that it is going down.
   This enables the monitors to perform instantaneous failover to an available
   standby, if one exists. It is unnecessary to use administrative commands to
   effect this failover, e.g. through the use of ``ceph mds fail mds.${id}``.

#. Remove the ``/var/lib/ceph/mds/ceph-${id}`` directory on the MDS host. ::

    $ sudo rm -rf /var/lib/ceph/mds/ceph-${id}
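
The CephX key created when the MDS was added is not removed by the steps above.
If the daemon will not be redeployed under the same name, the key can be
deleted as well; a hedged sketch: ::

    $ ceph auth del mds.${id}
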

.. _MDS Config Reference: ../mds-config-ref