.. _fs-volumes-and-subvolumes:

FS volumes and subvolumes
=========================

A single source of truth for CephFS exports is implemented in the volumes
module of the :term:`Ceph Manager` daemon (ceph-mgr). The OpenStack shared
file system service (manila_), the Ceph Container Storage Interface (CSI_),
and storage administrators, among others, can use the common CLI provided by
the ceph-mgr volumes module to manage CephFS exports.

The ceph-mgr volumes module implements the following file system export
abstractions:

* FS volumes, an abstraction for CephFS file systems

* FS subvolumes, an abstraction for independent CephFS directory trees

* FS subvolume groups, an abstraction for a directory level higher than FS
  subvolumes to effect policies (e.g., :doc:`/cephfs/file-layouts`) across a
  set of subvolumes

Some possible use-cases for the export abstractions:

* FS subvolumes used as manila shares or CSI volumes

* FS subvolume groups used as manila share groups

Requirements
------------

* Nautilus (14.2.x) or a later version of Ceph

* Cephx client user (see :doc:`/rados/operations/user-management`) with
  the following minimum capabilities::

    mon 'allow r'
    mgr 'allow rw'

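
For example, such a client user can be created with the standard cephx
machinery (the client name 'client.fs_volumes' below is only illustrative)::

    $ ceph auth get-or-create client.fs_volumes mon 'allow r' mgr 'allow rw'
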

FS Volumes
----------

Create a volume using::

    $ ceph fs volume create <vol_name> [<placement>]

This creates a CephFS file system and its data and metadata pools. It also
tries to create MDSes for the file system using the enabled ceph-mgr
orchestrator module (see :doc:`/mgr/orchestrator`), e.g., rook.

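
For example, to create a volume named 'cephfs_vol' (an illustrative name used
in the examples that follow)::

    $ ceph fs volume create cephfs_vol
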

Remove a volume using::

    $ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]

This removes a file system and its data and metadata pools. It also tries to
remove MDSes using the enabled ceph-mgr orchestrator module.

List volumes using::

    $ ceph fs volume ls

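
The volumes are listed as JSON. With a single volume named 'cephfs_vol', the
output might look like this (illustrative)::

    $ ceph fs volume ls
    [
        {
            "name": "cephfs_vol"
        }
    ]
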

FS Subvolume groups
-------------------

Create a subvolume group using::

    $ ceph fs subvolumegroup create <vol_name> <group_name> [--pool_layout <data_pool_name> --uid <uid> --gid <gid> --mode <octal_mode>]

The command succeeds even if the subvolume group already exists.

When creating a subvolume group you can specify its data pool layout (see
:doc:`/cephfs/file-layouts`), uid, gid, and file mode in octal numerals. By
default, the subvolume group is created with octal file mode '755', uid '0',
gid '0', and the data pool layout of its parent directory.

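
For example, to create a subvolume group named 'csi_group' (an illustrative
name) with an explicit file mode::

    $ ceph fs subvolumegroup create cephfs_vol csi_group --mode 777
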

Remove a subvolume group using::

    $ ceph fs subvolumegroup rm <vol_name> <group_name> [--force]

The removal of a subvolume group fails if the group is not empty or does not
exist. The '--force' flag allows the remove command to succeed when the
subvolume group does not exist.

Fetch the absolute path of a subvolume group using::

    $ ceph fs subvolumegroup getpath <vol_name> <group_name>

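
Subvolume groups are created under a canonical location inside the volume, so
for a group named 'csi_group' the reported path would typically look like
this (illustrative)::

    $ ceph fs subvolumegroup getpath cephfs_vol csi_group
    /volumes/csi_group
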

List subvolume groups using::

    $ ceph fs subvolumegroup ls <vol_name>

Create a snapshot (see :doc:`/cephfs/experimental-features`) of a
subvolume group using::

    $ ceph fs subvolumegroup snapshot create <vol_name> <group_name> <snap_name>

This implicitly snapshots all the subvolumes under the subvolume group.

Remove a snapshot of a subvolume group using::

    $ ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]

Using the '--force' flag allows the command to succeed even if the snapshot
does not exist.

List snapshots of a subvolume group using::

    $ ceph fs subvolumegroup snapshot ls <vol_name> <group_name>


FS Subvolumes
-------------

Create a subvolume using::

    $ ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes> --group_name <subvol_group_name> --pool_layout <data_pool_name> --uid <uid> --gid <gid> --mode <octal_mode> --namespace-isolated]

The command succeeds even if the subvolume already exists.

When creating a subvolume you can specify its subvolume group, data pool
layout, uid, gid, file mode in octal numerals, and size in bytes. The size
of the subvolume is specified by setting a quota on it (see
:doc:`/cephfs/quota`). The subvolume can be created in a separate RADOS
namespace by specifying the '--namespace-isolated' option. By default a
subvolume is created within the default subvolume group, with octal file
mode '755', the uid and gid of its subvolume group, the data pool layout of
its parent directory, and no size limit.

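
For example, to create a 1 GiB (1073741824 bytes) subvolume named 'subvol1'
in the group 'csi_group' (both names are illustrative), isolated in its own
RADOS namespace::

    $ ceph fs subvolume create cephfs_vol subvol1 --size 1073741824 --group_name csi_group --namespace-isolated
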

Remove a subvolume using::

    $ ceph fs subvolume rm <vol_name> <subvol_name> [--group_name <subvol_group_name> --force]

The command removes the subvolume and its contents. It does this in two
steps. First, it moves the subvolume to a trash folder, and then
asynchronously purges its contents.

The removal of a subvolume fails if it has snapshots or does not exist. The
'--force' flag allows the remove command to succeed when the subvolume does
not exist.

Resize a subvolume using::

    $ ceph fs subvolume resize <vol_name> <subvol_name> <new_size> [--group_name <subvol_group_name>] [--no_shrink]

The command resizes the subvolume quota using the size specified by
'new_size'. The '--no_shrink' flag prevents the subvolume from shrinking
below the currently used size.

The subvolume can be made unlimited in size by passing 'inf' or 'infinite'
as the new_size.

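
For example, to grow 'subvol1' to 2 GiB (2147483648 bytes) while refusing to
shrink it, and to subsequently remove its size limit altogether::

    $ ceph fs subvolume resize cephfs_vol subvol1 2147483648 --group_name csi_group --no_shrink
    $ ceph fs subvolume resize cephfs_vol subvol1 inf --group_name csi_group
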

Fetch the absolute path of a subvolume using::

    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]

Fetch the metadata of a subvolume using::

    $ ceph fs subvolume info <vol_name> <subvol_name> [--group_name <subvol_group_name>]

The output format is JSON and contains the following fields.

* atime: access time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* mtime: modification time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* ctime: change time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* uid: uid of the subvolume path
* gid: gid of the subvolume path
* mode: mode of the subvolume path
* mon_addrs: list of monitor addresses
* bytes_pcent: quota used in percentage if quota is set, else displays "undefined"
* bytes_quota: quota size in bytes if quota is set, else displays "infinite"
* bytes_used: current used size of the subvolume in bytes
* created_at: time of creation of the subvolume in the format "YYYY-MM-DD HH:MM:SS"
* data_pool: data pool the subvolume belongs to
* path: absolute path of the subvolume
* type: subvolume type, indicating whether it is a clone or a subvolume
* pool_namespace: RADOS namespace of the subvolume
* features: features supported by the subvolume

The subvolume "features" are based on the internal version of the subvolume
and are a list containing a subset of the following,

* "snapshot-clone": supports cloning using a subvolume's snapshot as the source
* "snapshot-autoprotect": supports automatically protecting snapshots that are active clone sources from deletion

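
As an illustrative example (all values below are made up; the trailing
component of 'path' is an internal directory managed by the volumes module),
the metadata of 'subvol1' might look like this::

    $ ceph fs subvolume info cephfs_vol subvol1 --group_name csi_group
    {
        "atime": "2020-10-06 13:25:27",
        "bytes_pcent": "0.00",
        "bytes_quota": 1073741824,
        "bytes_used": 0,
        "created_at": "2020-10-06 13:25:27",
        "ctime": "2020-10-06 13:25:27",
        "data_pool": "cephfs.cephfs_vol.data",
        "features": [
            "snapshot-clone",
            "snapshot-autoprotect"
        ],
        "gid": 0,
        "mode": 16877,
        "mon_addrs": [
            "192.168.1.7:6789"
        ],
        "mtime": "2020-10-06 13:25:27",
        "path": "/volumes/csi_group/subvol1/d96d9c4d-6e46-4b5c-a1b9-0e9b5d3f2c3a",
        "pool_namespace": "fsvolumens_subvol1",
        "type": "subvolume",
        "uid": 0
    }
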

List subvolumes using::

    $ ceph fs subvolume ls <vol_name> [--group_name <subvol_group_name>]

Create a snapshot of a subvolume using::

    $ ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

Remove a snapshot of a subvolume using::

    $ ceph fs subvolume snapshot rm <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name> --force]

Using the '--force' flag allows the command to succeed even if the snapshot
does not exist.

List snapshots of a subvolume using::

    $ ceph fs subvolume snapshot ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]

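
Snapshots are listed as a JSON array of names. For a subvolume with a single
snapshot named 'snap1', the output might look like this (illustrative)::

    $ ceph fs subvolume snapshot ls cephfs_vol subvol1 --group_name csi_group
    [
        {
            "name": "snap1"
        }
    ]
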

Fetch the metadata of a snapshot using::

    $ ceph fs subvolume snapshot info <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

The output format is JSON and contains the following fields.

* created_at: time of creation of the snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff"
* data_pool: data pool the snapshot belongs to
* has_pending_clones: "yes" if snapshot clone is in progress, otherwise "no"
* size: snapshot size in bytes

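
An illustrative example (all values are made up)::

    $ ceph fs subvolume snapshot info cephfs_vol subvol1 snap1 --group_name csi_group
    {
        "created_at": "2020-10-06 13:40:05:201186",
        "data_pool": "cephfs.cephfs_vol.data",
        "has_pending_clones": "no",
        "size": 0
    }
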

Cloning Snapshots
-----------------

Subvolumes can be created by cloning subvolume snapshots. Cloning is an
asynchronous operation involving copying data from a snapshot to a subvolume.
Due to this bulk copying, cloning is currently inefficient for very large
data sets.

.. note:: Removing a snapshot fails if there are pending or in-progress clone operations using that snapshot as a source.

Protecting snapshots prior to cloning was a prerequisite in the Nautilus
release, and the commands to protect/unprotect snapshots were introduced for
this purpose. This prerequisite, and hence the commands to protect/unprotect,
is being deprecated in mainline CephFS, and may be removed from a future
release.

The commands being deprecated are::

    $ ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
    $ ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

.. note:: Using the above commands does not result in an error, but they serve no useful purpose.

.. note:: Use the 'subvolume info' command to fetch subvolume metadata regarding supported "features", to help decide if protecting/unprotecting snapshots is required, based on the availability of the "snapshot-autoprotect" feature.

To initiate a clone operation use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name>

If the snapshot's source subvolume is a part of a non-default group, the
group name needs to be specified::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --group_name <subvol_group_name>

Cloned subvolumes can be a part of a different group than the source snapshot
(by default, cloned subvolumes are created in the default group). To clone to
a particular group use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --target_group_name <subvol_group_name>

Similar to specifying a pool layout when creating a subvolume, a pool layout
can be specified when creating a cloned subvolume. To create a cloned
subvolume with a specific pool layout use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --pool_layout <pool_layout>

To check the status of a clone operation use::

    $ ceph fs clone status <vol_name> <clone_name> [--group_name <group_name>]

A clone can be in one of the following states:

#. `pending` : Clone operation has not started
#. `in-progress` : Clone operation is in progress
#. `complete` : Clone operation has successfully finished
#. `failed` : Clone operation has failed

Sample output from an `in-progress` clone operation::

    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone status cephfs clone1
    {
      "status": {
        "state": "in-progress",
        "source": {
          "volume": "cephfs",
          "subvolume": "subvol1",
          "snapshot": "snap1"
        }
      }
    }

.. note:: Since `subvol1` is in the default group, the `source` section in the `clone status` output does not include the group name.

.. note:: Cloned subvolumes are accessible only after the clone operation has successfully completed.

For a successful clone operation, `clone status` would look like so::

    $ ceph fs clone status cephfs clone1
    {
      "status": {
        "state": "complete"
      }
    }

or would report the `failed` state if the clone is unsuccessful.

If a clone operation fails, the partial clone needs to be deleted and the
clone operation needs to be retried. To delete a partial clone use::

    $ ceph fs subvolume rm <vol_name> <clone_name> [--group_name <group_name>] --force

.. note:: Cloning only synchronizes directories, regular files and symbolic links. Also, inode timestamps (access and
          modification times) are synchronized up to seconds granularity.

An `in-progress` or a `pending` clone operation can be canceled. To cancel a
clone operation use the `clone cancel` command::

    $ ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]

On successful cancellation, the cloned subvolume is moved to the `canceled`
state::

    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone cancel cephfs clone1
    $ ceph fs clone status cephfs clone1
    {
      "status": {
        "state": "canceled",
        "source": {
          "volume": "cephfs",
          "subvolume": "subvol1",
          "snapshot": "snap1"
        }
      }
    }

.. note:: The canceled clone can be deleted by using the --force option with the `fs subvolume rm` command.

.. _manila: https://github.com/openstack/manila
.. _CSI: https://github.com/ceph/ceph-csi