.. _fs-volumes-and-subvolumes:

FS volumes and subvolumes
=========================

A single source of truth for CephFS exports is implemented in the volumes
module of the :term:`Ceph Manager` daemon (ceph-mgr). The OpenStack shared
file system service (manila_), the Ceph Container Storage Interface (CSI_),
and storage administrators, among others, can use the common CLI provided by
the ceph-mgr volumes module to manage CephFS exports.

The ceph-mgr volumes module implements the following file system export
abstractions:

* FS volumes, an abstraction for CephFS file systems

* FS subvolumes, an abstraction for independent CephFS directory trees

* FS subvolume groups, an abstraction for a directory level higher than FS
  subvolumes, used to apply policies (e.g., :doc:`/cephfs/file-layouts`)
  across a set of subvolumes

Some possible use cases for these export abstractions:

* FS subvolumes used as manila shares or CSI volumes

* FS subvolume groups used as manila share groups

Requirements
------------

* Nautilus (14.2.x) or a later version of Ceph

* A cephx client user (see :doc:`/rados/operations/user-management`) with at
  least the following capabilities::

    mon 'allow r'
    mgr 'allow rw'

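A client with these capabilities can be created with ``ceph auth
get-or-create``. As an illustration, here is a small Python sketch that
assembles that command line from the capability mapping above; the client
name ``fsadmin`` is a placeholder, and the helper function is purely
illustrative:

```python
import shlex

# Minimum capabilities required by the ceph-mgr volumes module
# (from the Requirements list above).
MIN_CAPS = {"mon": "allow r", "mgr": "allow rw"}

def auth_get_or_create(client_name, caps):
    """Assemble the `ceph auth get-or-create` argv for a client
    with the given capability mapping."""
    argv = ["ceph", "auth", "get-or-create", f"client.{client_name}"]
    for entity, cap in caps.items():
        argv += [entity, cap]
    return argv

# shlex.join quotes the capability strings that contain spaces:
print(shlex.join(auth_get_or_create("fsadmin", MIN_CAPS)))
# ceph auth get-or-create client.fsadmin mon 'allow r' mgr 'allow rw'
```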
FS Volumes
----------

Create a volume using::

    $ ceph fs volume create <vol_name> [<placement>]

This creates a CephFS file system and its data and metadata pools. It also
tries to create MDS daemons for the file system using the enabled ceph-mgr
orchestrator module (see :doc:`/mgr/orchestrator`), e.g., rook.

Remove a volume using::

    $ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]

This removes the file system and its data and metadata pools. It also tries
to remove MDS daemons using the enabled ceph-mgr orchestrator module.

List volumes using::

    $ ceph fs volume ls
FS Subvolume groups
-------------------

Create a subvolume group using::

    $ ceph fs subvolumegroup create <vol_name> <group_name> [--pool_layout <data_pool_name> --uid <uid> --gid <gid> --mode <octal_mode>]

The command succeeds even if the subvolume group already exists.

When creating a subvolume group you can specify its data pool layout (see
:doc:`/cephfs/file-layouts`), uid, gid, and file mode in octal numerals. By
default, a subvolume group is created with octal file mode '755', uid '0',
gid '0', and the data pool layout of its parent directory.
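The optional flags only need to be passed when overriding these defaults. A
minimal Python sketch of how a caller might assemble the command line,
appending only the options that were actually set (the function name and
structure are illustrative, not part of the module's API):

```python
def subvolumegroup_create_cmd(vol_name, group_name, pool_layout=None,
                              uid=None, gid=None, mode=None):
    """Build the `ceph fs subvolumegroup create` argv, adding only the
    options the caller overrides; the volumes module itself falls back to
    mode '755', uid 0, gid 0 and the parent's data pool layout."""
    argv = ["ceph", "fs", "subvolumegroup", "create", vol_name, group_name]
    for flag, value in (("--pool_layout", pool_layout), ("--uid", uid),
                        ("--gid", gid), ("--mode", mode)):
        if value is not None:
            argv += [flag, str(value)]
    return argv

# Defaults only: no optional flags are emitted.
print(" ".join(subvolumegroup_create_cmd("cephfs", "grp1")))
# ceph fs subvolumegroup create cephfs grp1
```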
Remove a subvolume group using::

    $ ceph fs subvolumegroup rm <vol_name> <group_name> [--force]

Removing a subvolume group fails if the group is not empty or does not
exist. The '--force' flag allows the remove command to succeed even if the
subvolume group does not exist.

Fetch the absolute path of a subvolume group using::

    $ ceph fs subvolumegroup getpath <vol_name> <group_name>

List subvolume groups using::

    $ ceph fs subvolumegroup ls <vol_name>

Create a snapshot (see :doc:`/cephfs/experimental-features`) of a
subvolume group using::

    $ ceph fs subvolumegroup snapshot create <vol_name> <group_name> <snap_name>

This implicitly snapshots all the subvolumes under the subvolume group.

Remove a snapshot of a subvolume group using::

    $ ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]

Using the '--force' flag allows the command to succeed even if the snapshot
does not exist.

List snapshots of a subvolume group using::

    $ ceph fs subvolumegroup snapshot ls <vol_name> <group_name>

FS Subvolumes
-------------

Create a subvolume using::

    $ ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes> --group_name <subvol_group_name> --pool_layout <data_pool_name> --uid <uid> --gid <gid> --mode <octal_mode>]

The command succeeds even if the subvolume already exists.

When creating a subvolume you can specify its subvolume group, data pool
layout, uid, gid, file mode in octal numerals, and size in bytes. The size
of the subvolume is enforced by setting a quota on it (see
:doc:`/cephfs/quota`). By default, a subvolume is created within the default
subvolume group, with octal file mode '755', the uid and gid of its
subvolume group, the data pool layout of its parent directory, and no size
limit.

Remove a subvolume using::

    $ ceph fs subvolume rm <vol_name> <subvol_name> [--group_name <subvol_group_name> --force]

The command removes the subvolume and its contents in two steps: first it
moves the subvolume to a trash folder, and then it purges the contents
asynchronously.
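The trash-then-purge pattern can be sketched in a few lines of Python. This
illustrates the general technique on a local directory tree; it is not the
volumes module's implementation:

```python
import shutil
import threading
from pathlib import Path

def remove_two_step(subvol: Path, trash: Path) -> threading.Thread:
    """Step 1: move the directory into a trash folder (a quick rename).
    Step 2: purge the trashed contents asynchronously."""
    trash.mkdir(parents=True, exist_ok=True)
    target = trash / subvol.name
    subvol.rename(target)                      # fast, atomic within one filesystem
    purger = threading.Thread(target=shutil.rmtree, args=(target,))
    purger.start()
    return purger                              # caller may join() or ignore
```

The rename makes the subvolume disappear from its original path immediately,
while the expensive recursive delete runs in the background.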
Removing a subvolume fails if the subvolume has snapshots or does not exist.
The '--force' flag allows the remove command to succeed even if the
subvolume does not exist.

Resize a subvolume using::

    $ ceph fs subvolume resize <vol_name> <subvol_name> <new_size> [--group_name <subvol_group_name>] [--no_shrink]

The command resizes the subvolume quota to the size specified by 'new_size'.
The '--no_shrink' flag prevents the subvolume from shrinking below its
currently used size.

A subvolume can be resized to an unlimited size by passing 'inf' or
'infinite' as the new_size.

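Since the size is enforced as a quota, the special values can be thought of
as mapping onto the quota value. A sketch of how the <new_size> argument
could be interpreted (illustrative only; it relies on the CephFS convention
that a quota of 0 bytes means "no limit"):

```python
def parse_new_size(new_size) -> int:
    """Map the <new_size> argument to a quota value in bytes.
    'inf'/'infinite' lift the limit entirely; CephFS treats a
    quota of 0 as 'no limit'."""
    if isinstance(new_size, str) and new_size.lower() in ("inf", "infinite"):
        return 0
    size = int(new_size)
    if size <= 0:
        raise ValueError("new_size must be a positive byte count")
    return size
```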
Fetch the absolute path of a subvolume using::

    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]

List subvolumes using::

    $ ceph fs subvolume ls <vol_name> [--group_name <subvol_group_name>]

Create a snapshot of a subvolume using::

    $ ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

Remove a snapshot of a subvolume using::

    $ ceph fs subvolume snapshot rm <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name> --force]

Using the '--force' flag allows the command to succeed even if the snapshot
does not exist.

List snapshots of a subvolume using::

    $ ceph fs subvolume snapshot ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]
Cloning Snapshots
-----------------

Subvolumes can be created by cloning subvolume snapshots. Cloning is an
asynchronous operation that copies data from a snapshot to a subvolume.
Because of this bulk-copy nature, cloning is currently inefficient for very
large data sets.

Before starting a clone operation, the snapshot must be protected.
Protecting a snapshot ensures that it cannot be deleted while a clone
operation is in progress. Protect a snapshot using::

    $ ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

To initiate a clone operation use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name>

If the snapshot (source subvolume) is part of a non-default group, the group
name must be specified::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --group_name <subvol_group_name>

Cloned subvolumes can be placed in a different group than the source
snapshot (by default, cloned subvolumes are created in the default group).
To clone into a particular group use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --target_group_name <subvol_group_name>

Just as a pool layout can be specified when creating a subvolume, it can
also be specified when creating a cloned subvolume. To create a cloned
subvolume with a specific pool layout use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --pool_layout <pool_layout>

To check the status of a clone operation use::

    $ ceph fs clone status <vol_name> <clone_name> [--group_name <group_name>]

A clone can be in one of the following states:

#. `pending`     : Clone operation has not started
#. `in-progress` : Clone operation is in progress
#. `complete`    : Clone operation has successfully finished
#. `failed`      : Clone operation has failed

Sample output from an `in-progress` clone operation::

    $ ceph fs subvolume snapshot protect cephfs subvol1 snap1
    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone status cephfs clone1
    {
      "status": {
        "state": "in-progress",
        "source": {
          "volume": "cephfs",
          "subvolume": "subvol1",
          "snapshot": "snap1"
        }
      }
    }

(NOTE: since `subvol1` is in the default group, the `source` section of
`clone status` does not include a group name)

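A caller polling `clone status` can parse this JSON and stop once a terminal
state is reached. A small Python sketch, using the state list from earlier
in this section (the helper function is illustrative):

```python
import json

# States after which the clone will not change on its own.
TERMINAL_STATES = {"complete", "failed", "canceled"}

def clone_state(status_output: str) -> str:
    """Extract the clone state from `ceph fs clone status` JSON output."""
    return json.loads(status_output)["status"]["state"]

sample = '''
{
  "status": {
    "state": "in-progress",
    "source": {"volume": "cephfs", "subvolume": "subvol1", "snapshot": "snap1"}
  }
}
'''
state = clone_state(sample)
print(state, state in TERMINAL_STATES)
# in-progress False
```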
.. note:: Cloned subvolumes are accessible only after the clone operation has successfully completed.

After a successful clone operation, `clone status` looks like this::

    $ ceph fs clone status cephfs clone1
    {
      "status": {
        "state": "complete"
      }
    }

If the clone is unsuccessful, the state is `failed`.

If a clone operation fails, the partial clone must be deleted and the clone
operation retried. To delete a partial clone use::

    $ ceph fs subvolume rm <vol_name> <clone_name> [--group_name <group_name>] --force

When no clone operations are in progress or scheduled, the snapshot can be
unprotected. To unprotect a snapshot use::

    $ ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

Note that unprotecting a snapshot fails if there are pending or in-progress
clone operations, and that only unprotected snapshots can be removed. This
guarantees that a snapshot cannot be deleted while clones are pending or in
progress.

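These rules amount to a small invariant: a snapshot must be protected before
cloning, and cannot be unprotected (and hence cannot be removed) while
clones are pending or in progress. A toy Python model of that bookkeeping,
not the module's actual implementation:

```python
class ProtectedSnapshot:
    """Toy model of the snapshot protection rules described above."""

    def __init__(self):
        self.protected = False
        self.active_clones = 0      # pending or in-progress clones

    def protect(self):
        self.protected = True

    def start_clone(self):
        if not self.protected:
            raise RuntimeError("snapshot must be protected before cloning")
        self.active_clones += 1

    def finish_clone(self):         # clone completed, failed, or canceled
        self.active_clones -= 1

    def unprotect(self):
        if self.active_clones > 0:
            raise RuntimeError("clone operations pending or in progress")
        self.protected = False      # only now may the snapshot be removed
```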
.. note:: Cloning only synchronizes directories, regular files and symbolic
          links. Also, inode timestamps (access and modification times) are
          synchronized up to seconds granularity.

An `in-progress` or `pending` clone operation can be canceled. To cancel a
clone operation use the `clone cancel` command::

    $ ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]

On successful cancellation, the cloned subvolume is moved to the `canceled`
state::

    $ ceph fs subvolume snapshot protect cephfs subvol1 snap1
    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone cancel cephfs clone1
    $ ceph fs clone status cephfs clone1
    {
      "status": {
        "state": "canceled",
        "source": {
          "volume": "cephfs",
          "subvolume": "subvol1",
          "snapshot": "snap1"
        }
      }
    }

.. note:: A canceled clone can be deleted by using the --force option with the `fs subvolume rm` command.

.. _manila: https://github.com/openstack/manila
.. _CSI: https://github.com/ceph/ceph-csi