CephFS Administrative commands
==============================
These commands operate on the CephFS filesystems in your Ceph cluster.
Note that by default only one filesystem is permitted: to enable
creation of multiple filesystems, use ``ceph fs flag set enable_multiple true``.
::

    fs new <filesystem name> <metadata pool name> <data pool name>

    fs rm <filesystem name> [--yes-i-really-mean-it]

    fs reset <filesystem name>

    fs get <filesystem name>

    fs set <filesystem name> <var> <val>

    fs add_data_pool <filesystem name> <pool name/id>

    fs rm_data_pool <filesystem name> <pool name/id>
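As a sketch of how these fit together, a filesystem can be created on two
fresh pools. The pool names, PG count, and filesystem name below are
examples, not requirements::

    # Create the backing metadata and data pools (names and PG count are examples)
    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 64

    # Create the filesystem on top of them
    ceph fs new cephfs cephfs_metadata cephfs_data
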
::

    fs set <fs name> max_file_size <size in bytes>
CephFS has a configurable maximum file size, which is 1TB by default.
You may wish to set this limit higher if you expect to store large files
in CephFS. It is a 64-bit field.
Setting ``max_file_size`` to 0 does not disable the limit. It would
instead limit clients to creating only empty files.
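For instance, to raise the limit to 4 TiB on a filesystem named
``cephfs`` (the filesystem name here is an example)::

    ceph fs set cephfs max_file_size 4398046511104   # 4 * 2^40 bytes
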
Maximum file sizes and performance
----------------------------------
CephFS enforces the maximum file size limit at the point of appending to
files or setting their size. It does not affect how anything is stored.
When users create a file of an enormous size (without necessarily
writing any data to it), some operations (such as deletes) cause the MDS
to have to do a large number of operations to check whether any of the
RADOS objects within the range that could exist (according to the file
size) really exist.
The ``max_file_size`` setting prevents users from creating files that
appear to be, for example, exabytes in size, which would cause load on
the MDS as it tries to enumerate the objects during operations like
stats or deletes.
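To see why this matters, the number of candidate backing objects implied
by a file's size can be estimated by dividing the size by the object
size. The sketch below assumes the default 4 MiB CephFS object size;
``implied_object_count`` is a hypothetical helper, not a Ceph API::

    import math

    # Hypothetical helper: upper bound on the number of RADOS objects the
    # MDS may need to probe for a file of the given size.
    def implied_object_count(file_size_bytes, object_size_bytes=4 * 1024 * 1024):
        return math.ceil(file_size_bytes / object_size_bytes)

    # A sparse 1 EiB file implies ~275 billion candidate objects:
    print(implied_object_count(2**60))   # 274877906944
    # The default 1 TiB limit keeps this to a manageable number:
    print(implied_object_count(2**40))   # 262144
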
Taking the cluster down
-----------------------
Taking a CephFS cluster down is done by reducing the number of ranks to 1,
setting the ``cluster_down`` flag, and then failing the last rank. For example:
::

    ceph fs set <fs_name> max_mds 1
    ceph mds deactivate <fs_name>:1 # rank 2 of 2
    ceph status # wait for rank 1 to finish stopping
    ceph fs set <fs_name> cluster_down true
    ceph mds fail <fs_name>:0
Setting the ``cluster_down`` flag prevents standbys from taking over the
failed rank.
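To bring the filesystem back up afterwards, clear the flag. A sketch,
assuming the rank count was already reduced to 1 as above::

    ceph fs set <fs_name> cluster_down false
    ceph status   # a standby should take over rank 0 and become active
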
These commands act on specific MDS daemons or ranks.
::

    mds fail <gid/name/role>
Mark an MDS daemon as failed. This is equivalent to what the cluster
would do if an MDS daemon had failed to send a message to the mon
for ``mds_beacon_grace`` seconds. If the daemon was active and a suitable
standby is available, using ``mds fail`` will force a failover to the standby.
If the MDS daemon was in reality still running, then using ``mds fail``
will cause the daemon to restart. If it was active and a standby was
available, then the "failed" daemon will return as a standby.
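For example, to fail rank 0 of a filesystem named ``cephfs`` (the
filesystem name is an example; a daemon name or GID may be given instead
of a role)::

    ceph mds fail cephfs:0
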
::

    mds deactivate <role>
Deactivate an MDS, causing it to flush its entire journal to
backing RADOS objects and close all open client sessions. Deactivating an MDS
is primarily intended for bringing down a rank after reducing the number of
active MDS daemons (``max_mds``). Once the rank is deactivated, the MDS daemon
will rejoin the cluster as a standby.

``<role>`` can take one of three forms: a filesystem name and rank
(``<fs_name>:<rank>``), a filesystem ID and rank (``<fs_id>:<rank>``), or a
bare rank (``<rank>``).
Use ``mds deactivate`` in conjunction with adjustments to ``max_mds`` to
shrink an MDS cluster. See :doc:`/cephfs/multimds`.
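A sketch of shrinking from two active ranks to one (the filesystem name
is an example)::

    ceph fs set cephfs max_mds 1
    ceph mds deactivate cephfs:1   # deactivate the highest rank
    ceph status                    # wait for the rank to finish stopping
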
::

    tell mds.<daemon name>
::

    mds metadata <gid/name/role>
::

    fs flag set <flag name> <flag val> [<confirmation string>]
``<flag name>`` must be one of ``['enable_multiple']``.
Some flags require you to confirm your intentions with
``--yes-i-really-mean-it`` or a similar string that they will prompt you
with. Consider these actions carefully before proceeding; they guard
especially dangerous activities.
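For example, to permit multiple filesystems in one cluster (this flag
asks for confirmation)::

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
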
These commands are not required in normal operation, and exist
for use in exceptional circumstances. Incorrect use of these
commands may cause serious problems, such as an inaccessible
filesystem.
::

    mds compat rm_incompat
The ``ceph mds set`` command is the deprecated version of ``ceph fs set``,
from before there was more than one filesystem per cluster. It operates
on whichever filesystem is marked as the default (see ``ceph fs
set-default``).
::

    mds dump             # replaced by "fs get"
    mds stop             # replaced by "mds deactivate"
    mds set_max_mds      # replaced by "fs set max_mds"
    mds set              # replaced by "fs set"
    mds cluster_down     # replaced by "fs set cluster_down"
    mds cluster_up       # replaced by "fs set cluster_up"
    mds newfs            # replaced by "fs new"
    mds add_data_pool    # replaced by "fs add_data_pool"
    mds remove_data_pool # replaced by "fs rm_data_pool"