CephFS Administrative commands
==============================

Filesystems
-----------

These commands operate on the CephFS filesystems in your Ceph cluster.
Note that by default only one filesystem is permitted: to enable
creation of multiple filesystems use ``ceph fs flag set enable_multiple true``.

::

    fs new <filesystem name> <metadata pool name> <data pool name>

::

    fs ls

::

    fs rm <filesystem name> [--yes-i-really-mean-it]

::

    fs reset <filesystem name>

::

    fs get <filesystem name>

::

    fs set <filesystem name> <var> <val>

::

    fs add_data_pool <filesystem name> <pool name/id>

::

    fs rm_data_pool <filesystem name> <pool name/id>

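As a sketch of creating a filesystem: the metadata and data pools must
exist before ``fs new`` is run. The pool names, PG counts, and filesystem
name below are illustrative:

::

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 64
    ceph fs new cephfs cephfs_metadata cephfs_data
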
Settings
--------

::

    fs set <fs name> max_file_size <size in bytes>

CephFS has a configurable maximum file size, which defaults to 1TB.
You may wish to set this limit higher if you expect to store large files
in CephFS. It is a 64-bit field.

Setting ``max_file_size`` to 0 does not disable the limit. It would
simply limit clients to creating only empty files.

Maximum file sizes and performance
----------------------------------

CephFS enforces the maximum file size limit at the point of appending to
files or setting their size. It does not affect how anything is stored.

When users create a file of an enormous size (without necessarily
writing any data to it), some operations (such as deletes) cause the MDS
to do a large number of operations to check whether any of the RADOS
objects within the range that could exist (according to the file size)
really exist.

The ``max_file_size`` setting prevents users from creating files that
appear to be, for example, exabytes in size, which would cause load on
the MDS as it tries to enumerate the objects during operations like
stats or deletes.

Taking the cluster down
-----------------------

Taking a CephFS cluster down is done by reducing the number of ranks to 1,
setting the cluster_down flag, and then failing the last rank. For example:

::

    ceph fs set <fs_name> max_mds 1
    ceph mds deactivate <fs_name>:1 # rank 2 of 2
    ceph status # wait for rank 1 to finish stopping
    ceph fs set <fs_name> cluster_down true
    ceph mds fail <fs_name>:0

Setting the ``cluster_down`` flag prevents standbys from taking over the failed
rank.

Daemons
-------

These commands act on specific MDS daemons or ranks.

::

    mds fail <gid/name/role>

Mark an MDS daemon as failed. This is equivalent to what the cluster
would do if an MDS daemon had failed to send a message to the mon
for ``mds_beacon_grace`` seconds. If the daemon was active and a suitable
standby is available, using ``mds fail`` will force a failover to the standby.

If the MDS daemon was in reality still running, then using ``mds fail``
will cause the daemon to restart. If it was active and a standby was
available, then the "failed" daemon will return as a standby.
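
As a sketch, any of the three identifier forms can be used; the GID,
host name, and filesystem name below are illustrative:

::

    ceph mds fail 4215        # by GID
    ceph mds fail myhost      # by daemon name
    ceph mds fail cephfs:0    # by role (<fs_name>:<rank>)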

::

    mds deactivate <role>

Deactivate an MDS, causing it to flush its entire journal to
backing RADOS objects and close all open client sessions. Deactivating an MDS
is primarily intended for bringing down a rank after reducing the number of
active MDS daemons (``max_mds``). Once the rank is deactivated, the MDS daemon
will rejoin the cluster as a standby.

``<role>`` can take one of three forms:

::

    <fs_name>:<rank>
    <fs_id>:<rank>
    <rank>

Use ``mds deactivate`` in conjunction with adjustments to ``max_mds`` to
shrink an MDS cluster. See :doc:`/cephfs/multimds`.

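For instance, assuming a filesystem named ``cephfs`` whose id is 1, its
rank 1 could be addressed in any of these equivalent ways:

::

    ceph mds deactivate cephfs:1   # <fs_name>:<rank>
    ceph mds deactivate 1:1        # <fs_id>:<rank>
    ceph mds deactivate 1          # <rank> alone
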
::

    tell mds.<daemon name>

::

    mds metadata <gid/name/role>

::

    mds repaired <role>


Global settings
---------------

::

    fs dump

::

    fs flag set <flag name> <flag val> [<confirmation string>]

Currently the only valid flag name is ``enable_multiple``.

Some flags require you to confirm your intentions with "--yes-i-really-mean-it"
or a similar string that they will prompt you with. Consider these actions
carefully before proceeding; they are placed on especially dangerous activities.

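For example, enabling creation of multiple filesystems requires a
confirmation string (``--yes-i-really-mean-it`` below is the usual one,
as with other dangerous operations):

::

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
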

Advanced
--------

These commands are not required in normal operation, and exist
for use in exceptional circumstances. Incorrect use of these
commands may cause serious problems, such as an inaccessible
filesystem.

::

    mds compat rm_compat

::

    mds compat rm_incompat

::

    mds compat show

::

    mds getmap

::

    mds set_state

::

    mds rmfailed

Legacy
------

The ``ceph mds set`` command is the deprecated version of ``ceph fs set``,
from before there was more than one filesystem per cluster. It operates
on whichever filesystem is marked as the default (see ``ceph fs
set-default``).

::

    mds stat
    mds dump             # replaced by "fs get"
    mds stop             # replaced by "mds deactivate"
    mds set_max_mds      # replaced by "fs set max_mds"
    mds set              # replaced by "fs set"
    mds cluster_down     # replaced by "fs set cluster_down"
    mds cluster_up       # replaced by "fs set cluster_up"
    mds newfs            # replaced by "fs new"
    mds add_data_pool    # replaced by "fs add_data_pool"
    mds remove_data_pool # replaced by "fs remove_data_pool"