============================
Deploying Metadata Servers
============================

Each CephFS file system requires at least one MDS. The cluster operator will
generally use their automated deployment tool to launch required MDS servers as
needed. Rook and Ansible (via the ceph-ansible playbooks) are recommended
tools for doing this. For clarity, we also show the systemd commands here which
may be run by the deployment technology if executed on bare metal.

See `MDS Config Reference`_ for details on configuring metadata servers.


Provisioning Hardware for an MDS
================================

The present version of the MDS is single-threaded and CPU-bound for most
activities, including responding to client requests. Even under the most
aggressive client loads, an MDS uses only about 2 to 3 CPU cores, because its
miscellaneous upkeep threads work in tandem with the single request-handling
thread.

Even so, it is recommended that an MDS server be well provisioned with an
advanced CPU with sufficient cores. Development is ongoing to make better use
of available CPU cores in the MDS; it is expected in future versions of Ceph
that the MDS server will improve performance by taking advantage of more cores.

The other dimension to MDS performance is the available RAM for caching. The
MDS necessarily manages a distributed and cooperative metadata cache among all
clients and other active MDSs. Therefore it is essential to provide the MDS
with sufficient RAM to enable faster metadata access and mutation. The default
MDS cache size (see also :doc:`/cephfs/cache-size-limits`) is 4GB. It is
recommended to provision at least 8GB of RAM for the MDS to support this cache
size.
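
The cache size is controlled by the ``mds_cache_memory_limit`` setting (a byte
count). As an illustrative sketch (the 16GB value below is only an example,
not a recommendation from this guide), the limit can be raised on hosts with
ample RAM: ::

    # Raise the cache limit for all MDS daemons to 16 GiB (illustrative value).
    $ ceph config set mds mds_cache_memory_limit 17179869184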

Generally, an MDS serving a large cluster of clients (1000 or more) will use at
least 64GB of cache. Running an MDS with a larger cache has not been well
explored in the largest known community clusters; there may be diminishing
returns where management of such a large cache negatively impacts performance
in surprising ways. It would be best to do analysis with expected workloads to
determine if provisioning more RAM is worthwhile.

In a bare-metal cluster, the best practice is to over-provision hardware for
the MDS server. Even if a single MDS daemon is unable to fully utilize the
hardware, it may be desirable later on to start more active MDS daemons on the
same node to fully utilize the available cores and memory. Additionally, it may
become clear with workloads on the cluster that performance improves with
multiple active MDS daemons on the same node rather than over-provisioning a
single MDS.
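
The number of active MDS daemons is governed by the file system's ``max_mds``
setting. As a sketch (``cephfs`` below is a placeholder file system name), two
active ranks can be requested with: ::

    # Run two active MDS ranks; additional running daemons remain standbys.
    $ ceph fs set cephfs max_mds 2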

Finally, be aware that CephFS is a highly-available file system: it supports
standby MDS daemons (see also :ref:`mds-standby`) for rapid failover. To get a
real benefit from deploying standbys, it is usually necessary to distribute MDS
daemons across at least two nodes in the cluster. Otherwise, a hardware failure
on a single node may result in the file system becoming unavailable.
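
The number of active and standby daemons can be checked at any time with
``ceph mds stat`` (or in the ``mds`` line of ``ceph status`` output): ::

    $ ceph mds stat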

Co-locating the MDS with other Ceph daemons (hyperconverged) is an effective
and recommended way to accomplish this so long as all daemons are configured to
use available hardware within certain limits. For the MDS, this generally
means limiting its cache size, as sketched below.
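
For example (an illustrative sketch; the 4GB value is arbitrary), a colocated
MDS daemon can be given a tighter per-daemon cache limit than the cluster-wide
default: ::

    # Limit only this daemon's cache; other MDS daemons keep their default.
    $ ceph config set mds.${id} mds_cache_memory_limit 4294967296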


Adding an MDS
=============

#. Create an mds data directory ``/var/lib/ceph/mds/ceph-${id}``. The daemon only uses this directory to store its keyring.
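
   For example (assuming the default cluster name ``ceph``; depending on the
   installation, the directory may also need to be owned by the ``ceph``
   user), the directory can be created on the MDS host with: ::

      $ sudo mkdir -p /var/lib/ceph/mds/ceph-${id}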

#. Create the authentication key, if you use CephX: ::

      $ sudo ceph auth get-or-create mds.${id} mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/ceph/mds/ceph-${id}/keyring

#. Start the service: ::

      $ sudo systemctl start ceph-mds@${id}
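
   On systemd-based hosts, the service can also be enabled so that it starts
   automatically at boot: ::

      $ sudo systemctl enable ceph-mds@${id}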

#. The status of the cluster, as reported by ``ceph status``, should show: ::

      mds: ${id}:1 {0=${id}=up:active} 2 up:standby

#. Optionally, configure the file system the MDS should join (:ref:`mds-join-fs`): ::

      $ ceph config set mds.${id} mds_join_fs ${fs}


Removing an MDS
===============

If you have a metadata server in your cluster that you'd like to remove, you may
use the following method.

#. (Optional) Create a new replacement Metadata Server. If there is no
   replacement MDS to take over once the MDS is removed, the file system will
   become unavailable to clients. If that is not desirable, consider adding a
   metadata server before tearing down the metadata server you would like to
   take offline.

#. Stop the MDS to be removed: ::

      $ sudo systemctl stop ceph-mds@${id}

   The MDS will automatically notify the Ceph monitors that it is going down.
   This enables the monitors to perform instantaneous failover to an available
   standby, if one exists. It is unnecessary to use administrative commands to
   effect this failover, e.g. through the use of ``ceph mds fail mds.${id}``.
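
   Before proceeding, it may be prudent to confirm that a standby has taken
   over, for example by checking the ``mds`` line of the cluster status: ::

      $ ceph status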

#. Remove the ``/var/lib/ceph/mds/ceph-${id}`` directory on the MDS host: ::

      $ sudo rm -rf /var/lib/ceph/mds/ceph-${id}
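
#. Optionally, remove the daemon's CephX key, which is no longer needed
   (assuming a key was created as shown above): ::

      $ sudo ceph auth rm mds.${id}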

.. _MDS Config Reference: ../mds-config-ref