.. _fs-volumes-and-subvolumes:

FS volumes and subvolumes
=========================

A single source of truth for CephFS exports is implemented in the volumes
module of the :term:`Ceph Manager` daemon (ceph-mgr). The OpenStack shared
file system service (manila_), the Ceph Container Storage Interface (CSI_),
and storage administrators, among others, can use the common CLI provided by
the ceph-mgr volumes module to manage CephFS exports.

The ceph-mgr volumes module implements the following file system export
abstractions:

* FS volumes, an abstraction for CephFS file systems

* FS subvolumes, an abstraction for independent CephFS directory trees

* FS subvolume groups, an abstraction for a directory level higher than FS
  subvolumes to effect policies (e.g., :doc:`/cephfs/file-layouts`) across a
  set of subvolumes

Some possible use-cases for the export abstractions:

* FS subvolumes used as manila shares or CSI volumes

* FS subvolume groups used as manila share groups

Requirements
------------

* Nautilus (14.2.x) or a later version of Ceph

* Cephx client user (see :doc:`/rados/operations/user-management`) with
  the following minimum capabilities::

      mon 'allow r'
      mgr 'allow rw'

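
For example, a client user with these capabilities can be created with the
standard cephx tooling (the user name ``client.fs_mgr_client`` below is only an
illustrative choice)::

    $ ceph auth get-or-create client.fs_mgr_client mon 'allow r' mgr 'allow rw'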

FS Volumes
----------

Create a volume using::

    $ ceph fs volume create <vol_name> [<placement>]

This creates a CephFS file system and its data and metadata pools. It can also
try to create MDS daemons for the file system using the enabled ceph-mgr
orchestrator module (see :doc:`/mgr/orchestrator`), e.g. rook.

<vol_name> is the volume name (an arbitrary string), and

<placement> is an optional string signifying which hosts should have NFS Ganesha
daemon containers running on them and, optionally, the total number of NFS
Ganesha daemons in the cluster (should you want to have more than one NFS
Ganesha daemon running per node). For example, the following placement string
means "deploy NFS Ganesha daemons on nodes host1 and host2 (one daemon per
host)":

    "host1,host2"

and this placement specification says to deploy two NFS Ganesha daemons on each
of the nodes host1 and host2 (for a total of four NFS Ganesha daemons in the
cluster):

    "4 host1,host2"

For more details on placement specifications refer to
:ref:`orchestrator-cli-service-spec`, but keep in mind that specifying
placement via a YAML file is not supported.

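
For example, assuming hosts named host1 and host2 are managed by the
orchestrator, a volume could be created with an explicit placement as follows
(the volume name "myvol" is an illustrative placeholder)::

    $ ceph fs volume create myvol "2 host1,host2"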

Remove a volume using::

    $ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]

This removes a file system and its data and metadata pools. It also tries to
remove MDS daemons using the enabled ceph-mgr orchestrator module.

List volumes using::

    $ ceph fs volume ls

Rename a volume using::

    $ ceph fs volume rename <vol_name> <new_vol_name> [--yes-i-really-mean-it]


Renaming a volume can be an expensive operation. It does the following:

- renames the orchestrator-managed MDS service to match the <new_vol_name>.
  This involves launching an MDS service with <new_vol_name> and bringing down
  the MDS service with <vol_name>.
- renames the file system matching <vol_name> to <new_vol_name>
- changes the application tags on the data and metadata pools of the file
  system to <new_vol_name>
- renames the metadata and data pools of the file system.

The CephX IDs authorized to <vol_name> need to be reauthorized to
<new_vol_name>. Any ongoing operations of the clients using these IDs may be
disrupted. Mirroring is expected to be disabled on the volume.

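
As an illustration, one way to reauthorize a client against the renamed volume
is the standard `fs authorize` command (the client name, path, and access level
below are assumptions made for the example)::

    $ ceph fs authorize <new_vol_name> client.user1 / rw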

Fetch the information of a CephFS volume using::

    $ ceph fs volume info vol_name
    {
        "mon_addrs": [
            "192.168.1.7:40977"
        ],
        "pending_subvolume_deletions": 0,
        "pools": {
            "data": [
                {
                    "avail": 106288709632,
                    "name": "cephfs.vol_name.data",
                    "used": 4096
                }
            ],
            "metadata": [
                {
                    "avail": 106288709632,
                    "name": "cephfs.vol_name.meta",
                    "used": 155648
                }
            ]
        },
        "used_size": 0
    }


The output format is JSON and contains the following fields.

* pools: Attributes of data and metadata pools

  * avail: The amount of free space available in bytes
  * used: The amount of storage consumed in bytes
  * name: Name of the pool

* mon_addrs: List of monitor addresses
* used_size: Current used size of the CephFS volume in bytes
* pending_subvolume_deletions: Number of subvolumes pending deletion

FS Subvolume groups
-------------------

Create a subvolume group using::

    $ ceph fs subvolumegroup create <vol_name> <group_name> [--size <size_in_bytes>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>]

The command succeeds even if the subvolume group already exists.

When creating a subvolume group you can specify its data pool layout (see
:doc:`/cephfs/file-layouts`), uid, gid, file mode in octal numerals, and size
in bytes. The size of the subvolume group is specified by setting a quota on it
(see :doc:`/cephfs/quota`). By default, the subvolume group is created with an
octal file mode '755', uid '0', gid '0', and the data pool layout of its parent
directory.

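
For example, to create a subvolume group with a 100 GiB quota and a more
restrictive mode (the volume and group names below are illustrative)::

    $ ceph fs subvolumegroup create myvol mygroup --size 107374182400 --mode 750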

Remove a subvolume group using::

    $ ceph fs subvolumegroup rm <vol_name> <group_name> [--force]

Removing a subvolume group fails if the group is not empty or does not exist.
The '--force' flag allows the remove command to succeed when the subvolume
group does not exist.


Fetch the absolute path of a subvolume group using::

    $ ceph fs subvolumegroup getpath <vol_name> <group_name>

List subvolume groups using::

    $ ceph fs subvolumegroup ls <vol_name>

.. note:: The subvolume group snapshot feature is no longer supported in
          mainline CephFS (existing group snapshots can still be listed and
          deleted).


Fetch the metadata of a subvolume group using::

    $ ceph fs subvolumegroup info <vol_name> <group_name>

The output format is JSON and contains the following fields.

* atime: access time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
* mtime: modification time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
* ctime: change time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
* uid: uid of the subvolume group path
* gid: gid of the subvolume group path
* mode: mode of the subvolume group path
* mon_addrs: list of monitor addresses
* bytes_pcent: quota used in percentage if quota is set, else displays "undefined"
* bytes_quota: quota size in bytes if quota is set, else displays "infinite"
* bytes_used: current used size of the subvolume group in bytes
* created_at: creation time of the subvolume group in the format "YYYY-MM-DD HH:MM:SS"
* data_pool: data pool to which the subvolume group belongs


Check the presence of any subvolume group using::

    $ ceph fs subvolumegroup exist <vol_name>

The 'exist' command returns one of the following strings:

* "subvolumegroup exists": if any subvolumegroup is present
* "no subvolumegroup exists": if no subvolumegroup is present

.. note:: This command checks for the presence of custom groups and not the
          default one. To validate the emptiness of the volume, a
          subvolumegroup existence check alone is not sufficient. Subvolume
          existence also needs to be checked, as there might be subvolumes in
          the default group.

Resize a subvolume group using::

    $ ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]

The command resizes the subvolume group quota using the size specified by
'new_size'. The '--no_shrink' flag prevents the subvolume group from shrinking
below its current used size.

The subvolume group can be resized to an infinite size by passing 'inf' or
'infinite' as the new_size.

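
For example, to grow a group's quota to 200 GiB without allowing it to shrink,
and then to remove the quota entirely (names are illustrative)::

    $ ceph fs subvolumegroup resize myvol mygroup 214748364800 --no_shrink
    $ ceph fs subvolumegroup resize myvol mygroup inf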

Remove a snapshot of a subvolume group using::

    $ ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]

Supplying the '--force' flag allows the command to succeed when it would
otherwise fail because the snapshot does not exist.

List snapshots of a subvolume group using::

    $ ceph fs subvolumegroup snapshot ls <vol_name> <group_name>


FS Subvolumes
-------------

Create a subvolume using::

    $ ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes>] [--group_name <subvol_group_name>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>] [--namespace-isolated]

The command succeeds even if the subvolume already exists.

When creating a subvolume you can specify its subvolume group, data pool
layout, uid, gid, file mode in octal numerals, and size in bytes. The size of
the subvolume is specified by setting a quota on it (see :doc:`/cephfs/quota`).
The subvolume can be created in a separate RADOS namespace by specifying the
--namespace-isolated option. By default, a subvolume is created within the
default subvolume group, with an octal file mode '755', the uid and gid of its
subvolume group, the data pool layout of its parent directory, and no size
limit.

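
For example, to create a 10 GiB subvolume in its own RADOS namespace inside an
existing group (all names below are illustrative)::

    $ ceph fs subvolume create myvol mysubvol --size 10737418240 --group_name mygroup --namespace-isolated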

Remove a subvolume using::

    $ ceph fs subvolume rm <vol_name> <subvol_name> [--group_name <subvol_group_name>] [--force] [--retain-snapshots]

The command removes the subvolume and its contents. It does this in two steps.
First, it moves the subvolume to a trash folder, and then asynchronously purges
its contents.

The removal of a subvolume fails if it has snapshots or does not exist. The
'--force' flag allows the remove command to succeed when the subvolume does
not exist.

A subvolume can be removed while retaining its existing snapshots by using the
'--retain-snapshots' option. If snapshots are retained, the subvolume is
considered empty for all operations not involving the retained snapshots.

.. note:: A subvolume removed with retained snapshots can be recreated using
          'ceph fs subvolume create'.

.. note:: Retained snapshots can be used as a clone source to recreate the
          subvolume, or to clone it to a newer subvolume.


Resize a subvolume using::

    $ ceph fs subvolume resize <vol_name> <subvol_name> <new_size> [--group_name <subvol_group_name>] [--no_shrink]

The command resizes the subvolume quota using the size specified by 'new_size'.
The '--no_shrink' flag prevents the subvolume from shrinking below its current
used size.

The subvolume can be resized to an infinite size by passing 'inf' or 'infinite'
as the new_size.


Authorize cephx auth IDs for read or read-write access to fs subvolumes::

    $ ceph fs subvolume authorize <vol_name> <sub_name> <auth_id> [--group_name=<group_name>] [--access_level=<access_level>]

The 'access_level' option takes 'r' or 'rw' as its value.

Deauthorize cephx auth IDs, revoking their read or read-write access to fs
subvolumes::

    $ ceph fs subvolume deauthorize <vol_name> <sub_name> <auth_id> [--group_name=<group_name>]

List cephx auth IDs authorized to access an fs subvolume::

    $ ceph fs subvolume authorized_list <vol_name> <sub_name> [--group_name=<group_name>]

Evict fs clients based on the auth ID and the subvolume mounted::

    $ ceph fs subvolume evict <vol_name> <sub_name> <auth_id> [--group_name=<group_name>]

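
For example, to grant the auth ID 'guest1' read-write access to a subvolume in
a non-default group and later revoke it (all names below are illustrative)::

    $ ceph fs subvolume authorize myvol mysubvol guest1 --group_name=mygroup --access_level=rw
    $ ceph fs subvolume deauthorize myvol mysubvol guest1 --group_name=mygroup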

Fetch the absolute path of a subvolume using::

    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]

Fetch the information of a subvolume using::

    $ ceph fs subvolume info <vol_name> <subvol_name> [--group_name <subvol_group_name>]


The output format is JSON and contains the following fields.

* atime: access time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* mtime: modification time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* ctime: change time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS"
* uid: uid of the subvolume path
* gid: gid of the subvolume path
* mode: mode of the subvolume path
* mon_addrs: list of monitor addresses
* bytes_pcent: quota used in percentage if quota is set, else displays "undefined"
* bytes_quota: quota size in bytes if quota is set, else displays "infinite"
* bytes_used: current used size of the subvolume in bytes
* created_at: creation time of the subvolume in the format "YYYY-MM-DD HH:MM:SS"
* data_pool: data pool to which the subvolume belongs
* path: absolute path of the subvolume
* type: subvolume type, indicating whether it is a clone or a subvolume
* pool_namespace: RADOS namespace of the subvolume
* features: features supported by the subvolume
* state: current state of the subvolume

If a subvolume has been removed with its snapshots retained, the output
contains only the following fields.

* type: subvolume type, indicating whether it is a clone or a subvolume
* features: features supported by the subvolume
* state: current state of the subvolume

The subvolume "features" are based on the internal version of the subvolume
and are a list containing a subset of the following features:

* "snapshot-clone": supports cloning using a subvolume's snapshot as the source
* "snapshot-autoprotect": supports automatically protecting snapshots that are active clone sources from deletion
* "snapshot-retention": supports removing subvolume contents while retaining any existing snapshots

The subvolume "state" is based on the current state of the subvolume and
contains one of the following values.

* "complete": subvolume is ready for all operations
* "snapshot-retained": subvolume is removed but its snapshots are retained

List subvolumes using::

    $ ceph fs subvolume ls <vol_name> [--group_name <subvol_group_name>]

.. note:: Subvolumes that are removed but have retained snapshots are also
          listed.

Check the presence of any subvolume using::

    $ ceph fs subvolume exist <vol_name> [--group_name <subvol_group_name>]

The 'exist' command returns one of the following strings:

* "subvolume exists": if any subvolume of the given group_name is present
* "no subvolume exists": if no subvolume of the given group_name is present


Set custom metadata on the subvolume as a key-value pair using::

    $ ceph fs subvolume metadata set <vol_name> <subvol_name> <key_name> <value> [--group_name <subvol_group_name>]

.. note:: If the key_name already exists then the old value is replaced by the
          new value.

.. note:: key_name and value should be strings of ASCII characters (as
          specified in Python's string.printable). key_name is case-insensitive
          and is always stored in lower case.

.. note:: Custom metadata on a subvolume is not preserved when snapshotting the
          subvolume, and hence is also not preserved when cloning the subvolume
          snapshot.

Get custom metadata set on the subvolume using the metadata key::

    $ ceph fs subvolume metadata get <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>]

List custom metadata (key-value pairs) set on the subvolume using::

    $ ceph fs subvolume metadata ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]

Remove custom metadata set on the subvolume using the metadata key::

    $ ceph fs subvolume metadata rm <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>] [--force]

Supplying the '--force' flag allows the command to succeed when it would
otherwise fail because the metadata key does not exist.

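
For example, a short metadata round trip on a subvolume (the key and value
below are arbitrary illustrations)::

    $ ceph fs subvolume metadata set myvol mysubvol owner teamA
    $ ceph fs subvolume metadata get myvol mysubvol owner
    $ ceph fs subvolume metadata ls myvol mysubvol
    $ ceph fs subvolume metadata rm myvol mysubvol owner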

Create a snapshot of a subvolume using::

    $ ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

Remove a snapshot of a subvolume using::

    $ ceph fs subvolume snapshot rm <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>] [--force]

Supplying the '--force' flag allows the command to succeed when it would
otherwise fail because the snapshot does not exist.

.. note:: If the last snapshot within a snapshot-retained subvolume is removed,
          the subvolume is also removed.

List snapshots of a subvolume using::

    $ ceph fs subvolume snapshot ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]

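
For example, to snapshot a subvolume that lives in a non-default group and then
list its snapshots (all names below are illustrative)::

    $ ceph fs subvolume snapshot create myvol mysubvol snap_2024 --group_name mygroup
    $ ceph fs subvolume snapshot ls myvol mysubvol --group_name mygroup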

Fetch the information of a snapshot using::

    $ ceph fs subvolume snapshot info <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

The output format is JSON and contains the following fields.

* created_at: creation time of the snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff"
* data_pool: data pool to which the snapshot belongs
* has_pending_clones: "yes" if a snapshot clone is in progress, otherwise "no"
* pending_clones: list of in-progress or pending clones and their target groups, if any exist; otherwise this field is not shown
* orphan_clones_count: count of orphan clones, if the snapshot has orphan clones; otherwise this field is not shown


Sample output when snapshot clones are in progress or pending::

    $ ceph fs subvolume snapshot info cephfs subvol snap
    {
        "created_at": "2022-06-14 13:54:58.618769",
        "data_pool": "cephfs.cephfs.data",
        "has_pending_clones": "yes",
        "pending_clones": [
            {
                "name": "clone_1",
                "target_group": "target_subvol_group"
            },
            {
                "name": "clone_2"
            },
            {
                "name": "clone_3",
                "target_group": "target_subvol_group"
            }
        ]
    }

Sample output when no snapshot clone is in progress or pending::

    $ ceph fs subvolume snapshot info cephfs subvol snap
    {
        "created_at": "2022-06-14 13:54:58.618769",
        "data_pool": "cephfs.cephfs.data",
        "has_pending_clones": "no"
    }


Set custom metadata on the snapshot as a key-value pair using::

    $ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]

.. note:: If the key_name already exists then the old value is replaced by the
          new value.

.. note:: The key_name and value should be strings of ASCII characters (as
          specified in Python's string.printable). The key_name is
          case-insensitive and is always stored in lower case.

.. note:: Custom metadata on a snapshot is not preserved when snapshotting the
          subvolume, and hence is also not preserved when cloning the subvolume
          snapshot.

Get custom metadata set on the snapshot using the metadata key::

    $ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]

List custom metadata (key-value pairs) set on the snapshot using::

    $ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

Remove custom metadata set on the snapshot using the metadata key::

    $ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]

Supplying the '--force' flag allows the command to succeed when it would
otherwise fail because the metadata key does not exist.


Cloning Snapshots
-----------------

Subvolumes can be created by cloning subvolume snapshots. Cloning is an
asynchronous operation that copies data from a snapshot to a subvolume. Because
of this bulk copying, cloning is currently inefficient for very large data
sets.

.. note:: Removing a snapshot (clone source) fails if there are pending or
          in-progress clone operations on it.

Protecting snapshots prior to cloning was a prerequisite in the Nautilus
release, and the commands to protect/unprotect snapshots were introduced for
this purpose. This prerequisite, and hence the commands to protect/unprotect,
is being deprecated in mainline CephFS and may be removed from a future
release.

The commands being deprecated are::

    $ ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
    $ ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

.. note:: Using the above commands does not result in an error, but they serve
          no useful function.

.. note:: Use the 'subvolume info' command to fetch subvolume metadata
          regarding supported "features" to help decide if protecting and
          unprotecting snapshots is required, based on the availability of the
          "snapshot-autoprotect" feature.


To initiate a clone operation use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name>

If a snapshot (source subvolume) is part of a non-default group, the group name
needs to be specified::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --group_name <subvol_group_name>

Cloned subvolumes can be a part of a different group than the source snapshot
(by default, cloned subvolumes are created in the default group). To clone to a
particular group use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --target_group_name <subvol_group_name>

Similar to specifying a pool layout when creating a subvolume, a pool layout
can be specified when creating a cloned subvolume. To create a cloned subvolume
with a specific pool layout use::

    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --pool_layout <pool_layout>

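
For example, to clone a snapshot of a subvolume in a non-default group into a
different target group (all names below are illustrative)::

    $ ceph fs subvolume snapshot clone myvol mysubvol snap_2024 mysubvol_copy --group_name mygroup --target_group_name backupgroup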

Configure the maximum number of concurrent clones. The default is 4::

    $ ceph config set mgr mgr/volumes/max_concurrent_clones <value>


To check the status of a clone operation use::

    $ ceph fs clone status <vol_name> <clone_name> [--group_name <group_name>]

A clone can be in one of the following states:

#. `pending` : Clone operation has not started
#. `in-progress` : Clone operation is in progress
#. `complete` : Clone operation has successfully finished
#. `failed` : Clone operation has failed
#. `canceled` : Clone operation has been canceled by the user

The reason for a clone failure is shown with the following fields:

#. `errno` : error number
#. `error_msg` : failure error string


Sample output of an `in-progress` clone operation::

    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone status cephfs clone1
    {
        "status": {
            "state": "in-progress",
            "source": {
                "volume": "cephfs",
                "subvolume": "subvol1",
                "snapshot": "snap1"
            }
        }
    }

.. note:: The `failure` section is shown only if the clone is in the failed or
          canceled state.


Sample output of a `failed` clone operation::

    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone status cephfs clone1
    {
        "status": {
            "state": "failed",
            "source": {
                "volume": "cephfs",
                "subvolume": "subvol1",
                "snapshot": "snap1",
                "size": "104857600"
            },
            "failure": {
                "errno": "122",
                "errstr": "Disk quota exceeded"
            }
        }
    }

.. note:: Because `subvol1` is in the default group, the `source` section in
          `clone status` does not include the group name.


.. note:: Cloned subvolumes are accessible only after the clone operation has
          successfully completed.

After a successful clone operation, `clone status` will look like this::

    $ ceph fs clone status cephfs clone1
    {
        "status": {
            "state": "complete"
        }
    }

If the clone is unsuccessful, the state is shown as `failed`.

If a clone operation fails, the partial clone needs to be deleted and the clone
operation needs to be retriggered. To delete a partial clone use::

    $ ceph fs subvolume rm <vol_name> <clone_name> [--group_name <group_name>] --force

.. note:: Cloning only synchronizes directories, regular files and symbolic
          links. Inode timestamps (access and modification times) are
          synchronized up to a granularity of seconds.


An `in-progress` or a `pending` clone operation can be canceled. To cancel a
clone operation use the `clone cancel` command::

    $ ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]

On successful cancellation, the cloned subvolume is moved to the `canceled`
state::

    $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
    $ ceph fs clone cancel cephfs clone1
    $ ceph fs clone status cephfs clone1
    {
        "status": {
            "state": "canceled",
            "source": {
                "volume": "cephfs",
                "subvolume": "subvol1",
                "snapshot": "snap1"
            }
        }
    }

.. note:: The canceled clone can be deleted by supplying the --force option to
          the `fs subvolume rm` command.


.. _subvol-pinning:

Pinning Subvolumes and Subvolume Groups
---------------------------------------

Subvolumes and subvolume groups can be automatically pinned to ranks according
to policies. This can help distribute load across MDS ranks in predictable and
stable ways. Review :ref:`cephfs-pinning` and :ref:`cephfs-ephemeral-pinning`
for details on how pinning works.

Pinning is configured by::

    $ ceph fs subvolumegroup pin <vol_name> <group_name> <pin_type> <pin_setting>

or for subvolumes::

    $ ceph fs subvolume pin <vol_name> <subvol_name> <pin_type> <pin_setting>


Typically you will want to set subvolume group pins. The ``pin_type`` may be
one of ``export``, ``distributed``, or ``random``. The ``pin_setting``
corresponds to the extended attribute "value" as described in the pinning
documentation referenced above.

For example, setting a distributed pinning strategy on a subvolume group::

    $ ceph fs subvolumegroup pin cephfilesystem-a csi distributed 1

will enable the distributed subtree partitioning policy for the "csi" subvolume
group. This will cause every subvolume within the group to be automatically
pinned to one of the available ranks on the file system.

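
Similarly, an individual subvolume can be pinned to a specific rank with an
export pin, for example pinning the subvolume "mysubvol" (an illustrative name,
assumed to be in the default group) to rank 1::

    $ ceph fs subvolume pin cephfilesystem-a mysubvol export 1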

.. _manila: https://github.com/openstack/manila
.. _CSI: https://github.com/ceph/ceph-csi