CephFS Administrative commands
==============================

Filesystems
-----------

These commands operate on the CephFS filesystems in your Ceph cluster.
Note that by default only one filesystem is permitted: to enable
creation of multiple filesystems use ``ceph fs flag set enable_multiple true``.

::

    fs new <filesystem name> <metadata pool name> <data pool name>

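For example, to create a filesystem backed by freshly created metadata and
data pools (the pool names and PG counts below are illustrative, not required
values):

::

    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data
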
::

    fs ls

::

    fs rm <filesystem name> [--yes-i-really-mean-it]

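A filesystem's MDS daemons must be brought down before it can be removed
(see `Taking the cluster down`_ below). For example, for a filesystem
illustratively named ``cephfs``:

::

    ceph fs rm cephfs --yes-i-really-mean-it
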
::

    fs reset <filesystem name>

::

    fs get <filesystem name>

::

    fs set <filesystem name> <var> <val>

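For example, to allow two active MDS ranks on a filesystem (the name
``cephfs`` here is only illustrative):

::

    ceph fs set cephfs max_mds 2
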
::

    fs add_data_pool <filesystem name> <pool name/id>

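For example, to create an additional pool (the pool name and PG count are
illustrative) and attach it to a filesystem so that files can be placed in it
via file layouts:

::

    ceph osd pool create cephfs_data_ssd 64
    ceph fs add_data_pool cephfs cephfs_data_ssd
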
::

    fs rm_data_pool <filesystem name> <pool name/id>


Settings
--------

::

    fs set <fs name> max_file_size <size in bytes>

CephFS has a configurable maximum file size, which is 1TB by default.
You may wish to set this limit higher if you expect to store large files
in CephFS. The limit is stored as a 64-bit field.

Setting ``max_file_size`` to 0 does not disable the limit. It would
simply limit clients to only creating empty files.

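For example, to raise the limit to 16 TiB (the filesystem name and the value
here are only illustrative):

::

    ceph fs set cephfs max_file_size 17592186044416
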

Maximum file sizes and performance
----------------------------------

CephFS enforces the maximum file size limit at the point of appending to
files or setting their size. It does not affect how anything is stored.

When users create a file of an enormous size (without necessarily
writing any data to it), some operations (such as deletes) cause the MDS
to have to do a large number of operations to check whether any of the
RADOS objects that could exist within the range implied by the file size
actually exist.

The ``max_file_size`` setting prevents users from creating files that
appear to be, for example, exabytes in size, causing load on the MDS as
it tries to enumerate the objects during operations like stats or deletes.

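As a rough illustration, assuming the default 4 MiB CephFS object size (an
assumption, not a figure taken from this document), a sparse file whose size
is set to 1 EiB covers:

::

    1 EiB / 4 MiB = 2^60 / 2^22 = 2^38 ≈ 2.7 * 10^11 potential objects

all of which the MDS may need to account for when the file is deleted.
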

Taking the cluster down
-----------------------

Taking a CephFS cluster down is done by reducing the number of ranks to 1,
setting the cluster_down flag, and then failing the last rank. For example:

::

    ceph fs set <fs_name> max_mds 1
    ceph mds deactivate <fs_name>:1   # rank 2 of 2
    ceph status                       # wait for rank 1 to finish stopping
    ceph fs set <fs_name> cluster_down true
    ceph mds fail <fs_name>:0

Setting the ``cluster_down`` flag prevents standbys from taking over the failed
rank.

Daemons
-------

These commands act on specific MDS daemons or ranks.

::

    mds fail <gid/name/role>

Mark an MDS daemon as failed. This is equivalent to what the cluster
would do if an MDS daemon had failed to send a message to the mon
for ``mds_beacon_grace`` seconds. If the daemon was active and a suitable
standby is available, using ``mds fail`` will force a failover to the standby.

If the MDS daemon was in reality still running, then using ``mds fail``
will cause the daemon to restart. If it was active and a standby was
available, then the "failed" daemon will return as a standby.

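For example, to fail rank 0 of a filesystem, or a daemon by name (the
filesystem name ``cephfs`` and daemon name ``a`` below are illustrative):

::

    ceph mds fail cephfs:0    # by role
    ceph mds fail a           # by daemon name
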
::

    mds deactivate <role>

Deactivate an MDS, causing it to flush its entire journal to
backing RADOS objects and close all open client sessions. Deactivating an MDS
is primarily intended for bringing down a rank after reducing the number of
active MDS daemons (``max_mds``). Once the rank is deactivated, the MDS daemon
will rejoin the cluster as a standby.

``<role>`` can take one of three forms:

::

    <fs_name>:<rank>
    <fs_id>:<rank>
    <rank>

Use ``mds deactivate`` in conjunction with adjustments to ``max_mds`` to
shrink an MDS cluster. See :doc:`/cephfs/multimds`.

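For example, to shrink a filesystem (illustratively named ``cephfs``) from two
active ranks to one:

::

    ceph fs set cephfs max_mds 1
    ceph mds deactivate cephfs:1
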
::

    tell mds.<daemon name>

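For example, to change a configuration value on a running daemon via
``injectargs`` (the daemon name ``a`` and the debug setting used here are only
illustrative):

::

    ceph tell mds.a injectargs '--debug_mds 20'
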
::

    mds metadata <gid/name/role>

::

    mds repaired <role>


Global settings
---------------

::

    fs dump

::

    fs flag set <flag name> <flag val> [<confirmation string>]

"flag name" must be one of ['enable_multiple']

Some flags require you to confirm your intentions with "--yes-i-really-mean-it"
or a similar string they will prompt you with. Consider these actions carefully
before proceeding; such confirmations are required only for especially
dangerous activities.

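For example, to allow more than one filesystem to exist in the cluster (the
flag prompts for a confirmation string such as the one shown):

::

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
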


Advanced
--------

These commands are not required in normal operation, and exist
for use in exceptional circumstances. Incorrect use of these
commands may cause serious problems, such as an inaccessible
filesystem.

::

    mds compat rm_compat

::

    mds compat rm_incompat

::

    mds compat show

::

    mds getmap

::

    mds set_state

::

    mds rmfailed

Legacy
------

The ``ceph mds set`` command is the deprecated version of ``ceph fs set``,
from before there was more than one filesystem per cluster. It operates
on whichever filesystem is marked as the default (see ``ceph fs
set-default``).

::

    mds stat
    mds dump              # replaced by "fs get"
    mds stop              # replaced by "mds deactivate"
    mds set_max_mds       # replaced by "fs set max_mds"
    mds set               # replaced by "fs set"
    mds cluster_down      # replaced by "fs set cluster_down"
    mds cluster_up        # replaced by "fs set cluster_up"
    mds newfs             # replaced by "fs new"
    mds add_data_pool     # replaced by "fs add_data_pool"
    mds remove_data_pool  # replaced by "fs rm_data_pool"