.. _fs-volumes-and-subvolumes:

FS volumes and subvolumes
=========================

A single source of truth for CephFS exports is implemented in the volumes
module of the :term:`Ceph Manager` daemon (ceph-mgr). The OpenStack shared
file system service (manila_), the Ceph Container Storage Interface (CSI_),
storage administrators, and others can use the common CLI provided by the
ceph-mgr volumes module to manage CephFS exports.

The ceph-mgr volumes module implements the following file system export
abstractions:

* FS volumes, an abstraction for CephFS file systems

* FS subvolumes, an abstraction for independent CephFS directory trees

* FS subvolume groups, an abstraction for a directory level higher than FS
  subvolumes, used to effect policies (e.g., :doc:`/cephfs/file-layouts`)
  across a set of subvolumes

Some possible use-cases for the export abstractions:

* FS subvolumes used as manila shares or CSI volumes

* FS subvolume groups used as manila share groups

Requirements
------------

* Nautilus (14.2.x) or a later version of Ceph

* Cephx client user (see :doc:`/rados/operations/user-management`) with
  the following minimum capabilities::

    mon 'allow r'
    mgr 'allow rw'

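Such a user can be created with the `ceph auth get-or-create` command; the
client name `client.fs_admin` below is only illustrative::

    $ ceph auth get-or-create client.fs_admin mon 'allow r' mgr 'allow rw'
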
FS Volumes
----------

Create a volume using::

    $ ceph fs volume create <vol_name> [<placement>]

This creates a CephFS file system and its data and metadata pools. It also
tries to create MDSes for the file system using the enabled ceph-mgr
orchestrator module (see :doc:`/mgr/orchestrator`), e.g., rook.

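For example, to create a volume with the illustrative name `cephfs`, letting
the orchestrator pick the placement::

    $ ceph fs volume create cephfs
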
Remove a volume using::

    $ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]

This removes a file system and its data and metadata pools. It also tries to
remove MDSes using the enabled ceph-mgr orchestrator module.

List volumes using::

    $ ceph fs volume ls

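The listing is JSON; an illustrative example (the exact formatting may
differ)::

    $ ceph fs volume ls
    [
        {
            "name": "cephfs"
        }
    ]
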
FS Subvolume groups
-------------------

Create a subvolume group using::

    $ ceph fs subvolumegroup create <vol_name> <group_name> [--pool_layout <data_pool_name> --uid <uid> --gid <gid> --mode <octal_mode>]

The command succeeds even if the subvolume group already exists.

When creating a subvolume group you can specify its data pool layout (see
:doc:`/cephfs/file-layouts`), uid, gid, and file mode in octal numerals. By
default, the subvolume group is created with an octal file mode '755', uid '0',
gid '0', and the data pool layout of its parent directory.

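For example, to create a group with a custom mode and ownership (the names
`cephfs` and `group1` here are only illustrative)::

    $ ceph fs subvolumegroup create cephfs group1 --mode 750 --uid 1000 --gid 1000
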
Remove a subvolume group using::

    $ ceph fs subvolumegroup rm <vol_name> <group_name> [--force]

The removal of a subvolume group fails if the group is not empty or does not
exist. The '--force' flag allows the command to succeed even if the subvolume
group does not exist.

Fetch the absolute path of a subvolume group using::

    $ ceph fs subvolumegroup getpath <vol_name> <group_name>

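The returned path lives under the volume's `/volumes` tree; illustratively,
for the group created above::

    $ ceph fs subvolumegroup getpath cephfs group1
    /volumes/group1
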
List subvolume groups using::

    $ ceph fs subvolumegroup ls <vol_name>

Create a snapshot (see :doc:`/cephfs/experimental-features`) of a
subvolume group using::

    $ ceph fs subvolumegroup snapshot create <vol_name> <group_name> <snap_name>

This implicitly snapshots all the subvolumes under the subvolume group.

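For example, to snapshot every subvolume under the illustrative `group1` in
one step::

    $ ceph fs subvolumegroup snapshot create cephfs group1 groupsnap1
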
Remove a snapshot of a subvolume group using::

    $ ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]

The '--force' flag allows the command to succeed even if the snapshot does not
exist.

List snapshots of a subvolume group using::

    $ ceph fs subvolumegroup snapshot ls <vol_name> <group_name>

FS Subvolumes
-------------

Create a subvolume using::

    $ ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes> --group_name <subvol_group_name> --pool_layout <data_pool_name> --uid <uid> --gid <gid> --mode <octal_mode>]

The command succeeds even if the subvolume already exists.

When creating a subvolume you can specify its subvolume group, data pool
layout, uid, gid, file mode in octal numerals, and size in bytes. The size of
the subvolume is specified by setting a quota on it (see :doc:`/cephfs/quota`).
By default a subvolume is created within the default subvolume group, with an
octal file mode '755', the uid and gid of its subvolume group, the data pool
layout of its parent directory, and no size limit.

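For example, to create a 1 GiB subvolume (the name `subvol0` is only
illustrative) in the illustrative group `group1`::

    $ ceph fs subvolume create cephfs subvol0 --size 1073741824 --group_name group1
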
Remove a subvolume using::

    $ ceph fs subvolume rm <vol_name> <subvol_name> [--group_name <subvol_group_name> --force]

The command removes the subvolume and its contents. It does this in two steps:
first, it moves the subvolume to a trash folder, and then asynchronously purges
its contents.

The removal of a subvolume fails if it has snapshots or does not exist. The
'--force' flag allows the command to succeed even if the subvolume does not
exist.

Resize a subvolume using::

    $ ceph fs subvolume resize <vol_name> <subvol_name> <new_size> [--group_name <subvol_group_name>] [--no_shrink]

The command resizes the subvolume quota using the size specified by 'new_size'.
The '--no_shrink' flag prevents the subvolume from shrinking below its
currently used size.

The subvolume can be resized to an infinite size by passing 'inf' or 'infinite'
as 'new_size'.

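For example, to grow the illustrative subvolume above to 2 GiB while guarding
against accidental shrinking::

    $ ceph fs subvolume resize cephfs subvol0 2147483648 --group_name group1 --no_shrink
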
Fetch the absolute path of a subvolume using::

    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]

Fetch the metadata of a subvolume using::

    $ ceph fs subvolume info <vol_name> <subvol_name> [--group_name <subvol_group_name>]

The output format is JSON and contains the following fields:

* atime: access time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* mtime: modification time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* ctime: change time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* uid: uid of the subvolume path
* gid: gid of the subvolume path
* mode: mode of the subvolume path
* mon_addrs: list of monitor addresses
* bytes_pcent: quota used in percentage if quota is set, else displays "undefined"
* bytes_quota: quota size in bytes if quota is set, else displays "infinite"
* bytes_used: current used size of the subvolume in bytes
* created_at: time of creation of the subvolume in the format "YYYY-MM-DD HH:MM:SS"
* data_pool: data pool to which the subvolume belongs
* path: absolute path of the subvolume
* type: subvolume type, indicating whether it is a clone or a subvolume

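An illustrative example of the output, using the fields above (all values
here are hypothetical)::

    $ ceph fs subvolume info cephfs subvol0 --group_name group1
    {
        "atime": "2020-05-01 10:24:11",
        "mtime": "2020-05-01 10:24:11",
        "ctime": "2020-05-01 10:24:11",
        "uid": 0,
        "gid": 0,
        "mode": 16877,
        "mon_addrs": [
            "192.168.1.7:6789"
        ],
        "bytes_pcent": "0.00",
        "bytes_quota": 1073741824,
        "bytes_used": 0,
        "created_at": "2020-05-01 10:24:11",
        "data_pool": "cephfs_data",
        "path": "/volumes/group1/subvol0",
        "type": "subvolume"
    }
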
List subvolumes using::

    $ ceph fs subvolume ls <vol_name> [--group_name <subvol_group_name>]

Create a snapshot of a subvolume using::

    $ ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

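For example, to create the snapshot used in the cloning examples below (a
subvolume `subvol1` in the default group)::

    $ ceph fs subvolume snapshot create cephfs subvol1 snap1
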
Remove a snapshot of a subvolume using::

    $ ceph fs subvolume snapshot rm <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name> --force]

The '--force' flag allows the command to succeed even if the snapshot does not
exist.

List snapshots of a subvolume using::

    $ ceph fs subvolume snapshot ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]

Cloning Snapshots
-----------------

Subvolumes can be created by cloning subvolume snapshots. Cloning is an
asynchronous operation that copies data from a snapshot to a subvolume. Due to
this bulk copying, cloning is currently inefficient for very large data sets.

Before starting a clone operation, the snapshot should be protected. Protecting
a snapshot ensures that it cannot be deleted while a clone operation is in
progress. Snapshots can be protected using::

    $ ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

To initiate a clone operation use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name>

If the source snapshot's subvolume is part of a non-default group, the group
name needs to be specified::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --group_name <subvol_group_name>

Cloned subvolumes can be part of a different group than the source snapshot
(by default, cloned subvolumes are created in the default group). To clone to
a particular group use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --target_group_name <subvol_group_name>

Similar to specifying a pool layout when creating a subvolume, a pool layout
can be specified when creating a cloned subvolume. To create a cloned subvolume
with a specific pool layout use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --pool_layout <pool_layout>

To check the status of a clone operation use::

    $ ceph fs clone status <vol_name> <clone_name> [--group_name <group_name>]

A clone can be in one of the following states:

#. `pending` : Clone operation has not started
#. `in-progress` : Clone operation is in progress
#. `complete` : Clone operation has successfully finished
#. `failed` : Clone operation has failed

Sample output from an `in-progress` clone operation::

    $ ceph fs subvolume snapshot protect cephfs subvol1 snap1
    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone status cephfs clone1
    {
      "status": {
        "state": "in-progress",
        "source": {
          "volume": "cephfs",
          "subvolume": "subvol1",
          "snapshot": "snap1"
        }
      }
    }

.. note:: Since `subvol1` is in the default group, the `source` section in
   `clone status` does not include a group name.

.. note:: Cloned subvolumes are accessible only after the clone operation has
   successfully completed.

For a successful clone operation, `clone status` looks like this::

    $ ceph fs clone status cephfs clone1
    {
      "status": {
        "state": "complete"
      }
    }

The state is reported as `failed` when the clone is unsuccessful.

If a clone operation fails, the partial clone needs to be deleted and the
clone operation needs to be retriggered. To delete a partial clone use::

    $ ceph fs subvolume rm <vol_name> <clone_name> [--group_name <group_name>] --force

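Continuing the illustrative example above, a failed clone would be cleaned up
and retried like so::

    $ ceph fs subvolume rm cephfs clone1 --force
    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
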
When no clone operations are in progress or scheduled, the snapshot can be
unprotected. To unprotect a snapshot use::

    $ ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

Note that unprotecting a snapshot fails if there are pending or in-progress
clone operations. Also note that only unprotected snapshots can be removed.
This guarantees that a snapshot cannot be deleted while clones are pending
(or in progress).

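For example, once all clones of the illustrative `snap1` have completed, it
can be unprotected and then removed::

    $ ceph fs subvolume snapshot unprotect cephfs subvol1 snap1
    $ ceph fs subvolume snapshot rm cephfs subvol1 snap1
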
.. note:: Cloning only synchronizes directories, regular files, and symbolic
   links. Inode timestamps (access and modification times) are synchronized up
   to a granularity of seconds.

An `in-progress` or a `pending` clone operation can be canceled. To cancel a
clone operation use the `clone cancel` command::

    $ ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]

On successful cancellation, the cloned subvolume is moved to the `canceled`
state::

    $ ceph fs subvolume snapshot protect cephfs subvol1 snap1
    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone cancel cephfs clone1
    $ ceph fs clone status cephfs clone1
    {
      "status": {
        "state": "canceled",
        "source": {
          "volume": "cephfs",
          "subvolume": "subvol1",
          "snapshot": "snap1"
        }
      }
    }

.. note:: The canceled clone can be deleted by using the '--force' option of
   the `fs subvolume rm` command.

.. _manila: https://github.com/openstack/manila
.. _CSI: https://github.com/ceph/ceph-csi