.. _fs-volumes-and-subvolumes:

FS volumes and subvolumes
=========================

A single source of truth for CephFS exports is implemented in the volumes
module of the :term:`Ceph Manager` daemon (ceph-mgr). The OpenStack shared
file system service (manila_), the Ceph Container Storage Interface (CSI_),
and storage administrators, among others, can use the common CLI provided by
the ceph-mgr volumes module to manage CephFS exports.

The ceph-mgr volumes module implements the following file system export
abstractions:

* FS volumes, an abstraction for CephFS file systems

* FS subvolumes, an abstraction for independent CephFS directory trees

* FS subvolume groups, an abstraction for a directory level higher than FS
  subvolumes to effect policies (e.g., :doc:`/cephfs/file-layouts`) across a
  set of subvolumes

Some possible use-cases for the export abstractions:

* FS subvolumes used as manila shares or CSI volumes

* FS subvolume groups used as manila share groups

Requirements
------------

* Nautilus (14.2.x) or a later version of Ceph

* Cephx client user (see :doc:`/rados/operations/user-management`) with
  the following minimum capabilities::

    mon 'allow r'
    mgr 'allow rw'

FS Volumes
----------

Create a volume using::

    $ ceph fs volume create <vol_name> [<placement>]

This creates a CephFS file system and its data and metadata pools. It can also
try to create MDSes for the file system using the enabled ceph-mgr orchestrator
module (see :doc:`/mgr/orchestrator`), e.g. rook.

<vol_name> is the volume name (an arbitrary string), and

<placement> is an optional string signifying which hosts should have NFS Ganesha
daemon containers running on them and, optionally, the total number of NFS
Ganesha daemons in the cluster (should you want to have more than one NFS
Ganesha daemon running per node). For example, the following placement string
means "deploy NFS Ganesha daemons on nodes host1 and host2 (one daemon per
host)"::

    "host1,host2"

and this placement specification says to deploy two NFS Ganesha daemons each
on nodes host1 and host2 (for a total of four NFS Ganesha daemons in the
cluster)::

    "4 host1,host2"

For more details on placement specifications refer to the `orchestrator doc
<https://docs.ceph.com/docs/master/mgr/orchestrator/#placement-specification>`_,
but keep in mind that specifying placement via a YAML file is not supported.

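For example, a minimal sketch using hypothetical volume and host names: the
first command creates a volume with no placement string, and the second also
deploys two NFS Ganesha daemons each on host1 and host2::

    $ ceph fs volume create vol1
    $ ceph fs volume create vol2 "4 host1,host2"
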
Remove a volume using::

    $ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]

This removes a file system and its data and metadata pools. It also tries to
remove MDSes using the enabled ceph-mgr orchestrator module.

List volumes using::

    $ ceph fs volume ls

FS Subvolume groups
-------------------

Create a subvolume group using::

    $ ceph fs subvolumegroup create <vol_name> <group_name> [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>]

The command succeeds even if the subvolume group already exists.

When creating a subvolume group you can specify its data pool layout (see
:doc:`/cephfs/file-layouts`), uid, gid, and file mode in octal numerals. By
default, the subvolume group is created with octal file mode '755', uid '0',
gid '0', and the data pool layout of its parent directory.

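For example, the following sketch, using hypothetical volume and group names,
creates a subvolume group with a non-default mode and ownership::

    $ ceph fs subvolumegroup create vol1 group1 --mode 750 --uid 1000 --gid 1000
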
Remove a subvolume group using::

    $ ceph fs subvolumegroup rm <vol_name> <group_name> [--force]

The removal of a subvolume group fails if the group is not empty or does not
exist. The '--force' flag allows the remove command to succeed even if the
subvolume group does not exist.

Fetch the absolute path of a subvolume group using::

    $ ceph fs subvolumegroup getpath <vol_name> <group_name>

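For example, with the hypothetical group created above (subvolume groups live
under the file system's top-level ``/volumes`` directory; the output shown is
illustrative)::

    $ ceph fs subvolumegroup getpath vol1 group1
    /volumes/group1
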
List subvolume groups using::

    $ ceph fs subvolumegroup ls <vol_name>

.. note:: The subvolume group snapshot feature is no longer supported in
   mainline CephFS (existing group snapshots can still be listed and deleted).

Remove a snapshot of a subvolume group using::

    $ ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]

Using the '--force' flag allows the command to succeed even if the snapshot
does not exist.

List snapshots of a subvolume group using::

    $ ceph fs subvolumegroup snapshot ls <vol_name> <group_name>

FS Subvolumes
-------------

Create a subvolume using::

    $ ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes>] [--group_name <subvol_group_name>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>] [--namespace-isolated]

The command succeeds even if the subvolume already exists.

When creating a subvolume you can specify its subvolume group, data pool layout,
uid, gid, file mode in octal numerals, and size in bytes. The size of the
subvolume is specified by setting a quota on it (see :doc:`/cephfs/quota`). The
subvolume can be created in a separate RADOS namespace by specifying the
--namespace-isolated option. By default a subvolume is created within the
default subvolume group, with octal file mode '755', the uid and gid of its
subvolume group, the data pool layout of its parent directory, and no size
limit.

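For example, the following sketch, using hypothetical names, creates a 10 GiB
subvolume in group1 backed by an isolated RADOS namespace::

    $ ceph fs subvolume create vol1 subvol1 --size 10737418240 --group_name group1 --namespace-isolated
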
Remove a subvolume using::

    $ ceph fs subvolume rm <vol_name> <subvol_name> [--group_name <subvol_group_name>] [--force] [--retain-snapshots]

The command removes the subvolume and its contents. It does this in two steps.
First, it moves the subvolume to a trash folder, and then asynchronously purges
its contents.

The removal of a subvolume fails if it has snapshots or does not exist. The
'--force' flag allows the remove command to succeed even if the subvolume does
not exist.

A subvolume can be removed while retaining existing snapshots of the subvolume
using the '--retain-snapshots' option. If snapshots are retained, the subvolume
is considered empty for all operations not involving the retained snapshots.

.. note:: Snapshot-retained subvolumes can be recreated using 'ceph fs subvolume create'.

.. note:: Retained snapshots can be used as a clone source to recreate the subvolume, or to clone to a newer subvolume.

Resize a subvolume using::

    $ ceph fs subvolume resize <vol_name> <subvol_name> <new_size> [--group_name <subvol_group_name>] [--no_shrink]

The command resizes the subvolume quota using the size specified by 'new_size'.
The '--no_shrink' flag prevents the subvolume from shrinking below its
currently used size.

The subvolume can be resized to an infinite size by passing 'inf' or 'infinite'
as the new_size.

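For example, continuing the hypothetical names used above, the following grows
the quota to 20 GiB while refusing to shrink, and then removes the size limit::

    $ ceph fs subvolume resize vol1 subvol1 21474836480 --group_name group1 --no_shrink
    $ ceph fs subvolume resize vol1 subvol1 inf --group_name group1
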
Fetch the absolute path of a subvolume using::

    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]

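For example (the output is illustrative; the trailing UUID-named directory is
generated internally by the volumes module)::

    $ ceph fs subvolume getpath vol1 subvol1 --group_name group1
    /volumes/group1/subvol1/d6afdca6-25f1-4a94-bd05-8e4f09e37a43
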
Fetch the metadata of a subvolume using::

    $ ceph fs subvolume info <vol_name> <subvol_name> [--group_name <subvol_group_name>]

The output is in JSON format and contains the following fields (an illustrative
example follows the list).

* atime: access time of subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* mtime: modification time of subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* ctime: change time of subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* uid: uid of subvolume path
* gid: gid of subvolume path
* mode: mode of subvolume path
* mon_addrs: list of monitor addresses
* bytes_pcent: quota used in percentage if quota is set, else displays "undefined"
* bytes_quota: quota size in bytes if quota is set, else displays "infinite"
* bytes_used: current used size of the subvolume in bytes
* created_at: time of creation of the subvolume in the format "YYYY-MM-DD HH:MM:SS"
* data_pool: data pool to which the subvolume belongs
* path: absolute path of the subvolume
* type: subvolume type, indicating whether it is a clone or a subvolume
* pool_namespace: RADOS namespace of the subvolume
* features: features supported by the subvolume
* state: current state of the subvolume

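A trimmed, illustrative sketch, continuing the hypothetical names used above
(every field value shown here is made up; in this sketch "mode" appears as a
raw decimal st_mode value)::

    $ ceph fs subvolume info vol1 subvol1 --group_name group1
    {
        "atime": "2021-04-07 09:41:00",
        "bytes_pcent": "0.00",
        "bytes_quota": 10737418240,
        "bytes_used": 0,
        "created_at": "2021-04-07 09:41:00",
        "ctime": "2021-04-07 09:41:00",
        "data_pool": "cephfs_data",
        "features": [
            "snapshot-clone",
            "snapshot-autoprotect",
            "snapshot-retention"
        ],
        "gid": 0,
        "mode": 16877,
        "mon_addrs": [
            "192.168.1.7:6789"
        ],
        "mtime": "2021-04-07 09:41:00",
        "path": "/volumes/group1/subvol1/d6afdca6-25f1-4a94-bd05-8e4f09e37a43",
        "pool_namespace": "fsvolumens_subvol1",
        "state": "complete",
        "type": "subvolume",
        "uid": 0
    }
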
If a subvolume has been removed while retaining its snapshots, the output
contains only the following fields.

* type: subvolume type, indicating whether it is a clone or a subvolume
* features: features supported by the subvolume
* state: current state of the subvolume

The subvolume "features" are based on the internal version of the subvolume and
comprise a list containing a subset of the following features:

* "snapshot-clone": supports cloning using a subvolume's snapshot as the source
* "snapshot-autoprotect": supports automatically protecting snapshots that are active clone sources from deletion
* "snapshot-retention": supports removing subvolume contents while retaining any existing snapshots

The subvolume "state" is based on the current state of the subvolume and
contains one of the following values.

* "complete": subvolume is ready for all operations
* "snapshot-retained": subvolume is removed but its snapshots are retained

List subvolumes using::

    $ ceph fs subvolume ls <vol_name> [--group_name <subvol_group_name>]

.. note:: Subvolumes that have been removed but have retained snapshots are also listed.

Create a snapshot of a subvolume using::

    $ ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

Remove a snapshot of a subvolume using::

    $ ceph fs subvolume snapshot rm <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>] [--force]

Using the '--force' flag allows the command to succeed even if the snapshot
does not exist.

.. note:: If the last snapshot within a snapshot-retained subvolume is removed, the subvolume is also removed.

List snapshots of a subvolume using::

    $ ceph fs subvolume snapshot ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]

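For example, the following sketch, using hypothetical names and illustrative
output, creates a snapshot and then lists it::

    $ ceph fs subvolume snapshot create vol1 subvol1 snap1 --group_name group1
    $ ceph fs subvolume snapshot ls vol1 subvol1 --group_name group1
    [
        {
            "name": "snap1"
        }
    ]
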
Fetch the metadata of a snapshot using::

    $ ceph fs subvolume snapshot info <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

The output is in JSON format and contains the following fields (an illustrative
example follows the list).

* created_at: time of creation of the snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff"
* data_pool: data pool to which the snapshot belongs
* has_pending_clones: "yes" if a snapshot clone is in progress, otherwise "no"
* size: snapshot size in bytes

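A sketch with hypothetical names and illustrative values::

    $ ceph fs subvolume snapshot info vol1 subvol1 snap1 --group_name group1
    {
        "created_at": "2021-04-07 09:45:21:402878",
        "data_pool": "cephfs_data",
        "has_pending_clones": "no",
        "size": 1024
    }
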
Cloning Snapshots
-----------------

Subvolumes can be created by cloning subvolume snapshots. Cloning is an
asynchronous operation involving copying data from a snapshot to a subvolume.
Because of this bulk copying, cloning is currently inefficient for very large
files and directories.

.. note:: Removing a snapshot (source subvolume) would fail if there are pending or in-progress clone operations.

Protecting snapshots prior to cloning was a prerequisite in the Nautilus
release, and the commands to protect/unprotect snapshots were introduced for
this purpose. This prerequisite, and hence the commands to protect/unprotect,
is being deprecated in mainline CephFS and may be removed from a future release.

The commands being deprecated are::

    $ ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
    $ ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

.. note:: Using the above commands does not result in an error, but they serve no useful purpose.

.. note:: Use the 'subvolume info' command to fetch subvolume metadata regarding supported "features", to help decide whether protecting/unprotecting snapshots is required, based on the availability of the "snapshot-autoprotect" feature.

To initiate a clone operation use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name>

If a snapshot (source subvolume) is part of a non-default group, the group name
needs to be specified::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --group_name <subvol_group_name>

Cloned subvolumes can be part of a different group than the source snapshot (by
default, cloned subvolumes are created in the default group). To clone to a
particular group use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --target_group_name <subvol_group_name>

Similar to specifying a pool layout when creating a subvolume, a pool layout
can be specified when creating a cloned subvolume. To create a cloned subvolume
with a specific pool layout use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --pool_layout <pool_layout>

Configure the maximum number of concurrent clone operations using the following
command (the default is 4)::

    $ ceph config set mgr mgr/volumes/max_concurrent_clones <value>

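For example, to allow up to eight clone operations to run concurrently::

    $ ceph config set mgr mgr/volumes/max_concurrent_clones 8
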
To check the status of a clone operation use::

    $ ceph fs clone status <vol_name> <clone_name> [--group_name <group_name>]

A clone can be in one of the following states:

#. `pending` : Clone operation has not started
#. `in-progress` : Clone operation is in progress
#. `complete` : Clone operation has successfully finished
#. `failed` : Clone operation has failed

Sample output from an `in-progress` clone operation::

    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone status cephfs clone1
    {
      "status" : {
        "state": "in-progress",
        "source": {
          "volume": "cephfs",
          "subvolume": "subvol1",
          "snapshot": "snap1"
        }
      }
    }

(NOTE: Since `subvol1` is in the default group, the `source` section in `clone status` does not include the group name.)

.. note:: Cloned subvolumes are accessible only after the clone operation has successfully completed.

For a successful clone operation, `clone status` would look like so::

    $ ceph fs clone status cephfs clone1
    {
      "status" : {
        "state": "complete"
      }
    }

The state is reported as `failed` when the clone is unsuccessful.

If a clone operation fails, the partial clone needs to be deleted and the clone
operation needs to be retriggered. To delete a partial clone use::

    $ ceph fs subvolume rm <vol_name> <clone_name> [--group_name <group_name>] --force

.. note:: Cloning synchronizes only directories, regular files and symbolic links. Inode timestamps (access and
   modification times) are synchronized up to second granularity.

An `in-progress` or a `pending` clone operation can be canceled. To cancel a
clone operation use the `clone cancel` command::

    $ ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]

On successful cancellation, the cloned subvolume is moved to the `canceled`
state::

    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone cancel cephfs clone1
    $ ceph fs clone status cephfs clone1
    {
      "status" : {
        "state": "canceled",
        "source": {
          "volume": "cephfs",
          "subvolume": "subvol1",
          "snapshot": "snap1"
        }
      }
    }

.. note:: The canceled clone can be deleted by using the '--force' option with the `fs subvolume rm` command.

.. _manila: https://github.com/openstack/manila
.. _CSI: https://github.com/ceph/ceph-csi