Experimental Features
=====================

CephFS includes a number of experimental features which are not fully stabilized
or qualified for users to turn on in real deployments. We generally do our best
to clearly demarcate these and fence them off so they cannot be used by mistake.

Some of these features are closer to being done than others, though. We describe
each of them with an approximation of how risky they are, and briefly describe
what is required to enable them. Note that doing so will *irrevocably* flag maps
in the monitor as having once enabled the feature, to improve debugging and
support processes.

Inline data
-----------
By default, all CephFS file data is stored in RADOS objects. The inline data
feature enables small files (generally <2KB) to be stored in the inode
and served out of the MDS. This may improve small-file performance but increases
load on the MDS. It is not sufficiently tested to support at this time, although
failures within it are unlikely to make non-inlined data inaccessible.

Inline data has always been off by default and requires setting
the ``inline_data`` flag.

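As an illustrative sketch, the flag is set through the usual ``ceph fs set``
interface (assuming a filesystem named ``cephfs``; the exact flag syntax and
any required confirmation options may vary by release):

::

    ceph fs set cephfs inline_data true
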
Mantle: Programmable Metadata Load Balancer
-------------------------------------------

Mantle is a programmable metadata balancer built into the MDS. The idea is to
protect the mechanisms for balancing load (migration, replication,
fragmentation) but stub out the balancing policies using Lua. For details, see
:doc:`/cephfs/mantle`.

Snapshots
---------
Like multiple active MDSes, CephFS is designed from the ground up to support
snapshotting of arbitrary directories. There are no known bugs at the time of
writing, but there is insufficient testing to provide stability guarantees, and
every expansion of testing has generally revealed new issues. If you do enable
snapshots and experience failure, manual intervention will be needed.

Snapshots are known not to work properly with multiple filesystems (below) in
some cases. Specifically, if you share a pool across multiple filesystems and
delete a snapshot in one filesystem, expect to lose snapshotted file data in
any other filesystem using snapshots. See the :doc:`/dev/cephfs-snapshots`
page for more information.

For somewhat obscure implementation reasons, the kernel client only supports up
to 400 snapshots (http://tracker.ceph.com/issues/21420).

Snapshotting was blocked off with the ``allow_new_snaps`` flag prior to Mimic.

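On pre-Mimic clusters, enabling snapshots therefore looks roughly like the
following (assuming a filesystem named ``cephfs``):

::

    ceph fs set cephfs allow_new_snaps true
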
Multiple filesystems within a Ceph cluster
------------------------------------------
Code was merged prior to the Jewel release which enables administrators
to create multiple independent CephFS filesystems within a single Ceph cluster.
These independent filesystems have their own set of active MDSes, cluster maps,
and data. But the feature required extensive changes to data structures which
are not yet fully qualified, and it has security implications which are not all
apparent or resolved.

There are no known bugs, but any failures which do result from having multiple
active filesystems in your cluster will require manual intervention and, so far,
will not have been experienced by anybody else -- knowledgeable help will be
extremely limited. You also probably do not have the security or isolation
guarantees you want or think you have upon doing so.

Note that snapshots and multiple filesystems are *not* tested in combination
and may not work together; see above.

Multiple filesystems have been available since the Jewel release candidates
but must be turned on via the ``enable_multiple`` flag until declared stable.

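Concretely, ``enable_multiple`` is a cluster-wide flag; enabling it looks like
the following (the ``--yes-i-mean-it`` confirmation reflects the feature's
experimental status):

::

    ceph fs flag set enable_multiple true --yes-i-mean-it
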
LazyIO
------
LazyIO relaxes POSIX semantics. Buffered reads/writes are allowed even when a
file is opened by multiple applications on multiple clients. Applications are
responsible for managing cache coherency themselves.

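With the userspace client, LazyIO can be toggled per file descriptor through
libcephfs. The following is a minimal C sketch, not a definitive recipe: it
assumes a reachable cluster with a default ``ceph.conf``, a hypothetical file
path ``/shared.dat``, and elides all error handling.

::

    #include <cephfs/libcephfs.h>
    #include <fcntl.h>

    struct ceph_mount_info *cmount;
    ceph_create(&cmount, NULL);           /* create a mount handle (default id) */
    ceph_conf_read_file(cmount, NULL);    /* read the default ceph.conf */
    ceph_mount(cmount, "/");              /* mount the filesystem root */

    int fd = ceph_open(cmount, "/shared.dat", O_RDWR | O_CREAT, 0644);
    ceph_lazyio(cmount, fd, 1);           /* relax coherency for this fd */
    /* ... buffered reads/writes; the application manages coherency ... */
    ceph_close(cmount, fd);
    ceph_unmount(cmount);
    ceph_release(cmount);
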
Previously experimental features
================================

Directory Fragmentation
-----------------------

Directory fragmentation was considered experimental prior to the *Luminous*
release (12.2.x). It is now enabled by default on new filesystems. To enable
directory fragmentation on filesystems created with older versions of Ceph,
set the ``allow_dirfrags`` flag on the filesystem:

::

    ceph fs set <filesystem name> allow_dirfrags 1

Multiple active metadata servers
--------------------------------

Prior to the *Luminous* (12.2.x) release, running multiple active metadata
servers within a single filesystem was considered experimental. Creating
multiple active metadata servers is now permitted by default on new
filesystems.

Filesystems created with older versions of Ceph still require explicitly
enabling multiple active metadata servers as follows:

::

    ceph fs set <filesystem name> allow_multimds 1

Note that the default size of the active MDS cluster (``max_mds``) is
still initially set to 1.

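To actually run more than one active metadata server, ``max_mds`` must then be
raised as well; for example (assuming a filesystem named ``cephfs``):

::

    ceph fs set cephfs max_mds 2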