Experimental Features
=====================

CephFS includes a number of experimental features which are not fully stabilized
or qualified for users to turn on in real deployments. We generally do our best
to clearly demarcate these and fence them off so they can't be used by mistake.

Some of these features are closer to being done than others, though. We describe
each of them with an approximation of how risky it is and briefly describe
what is required to enable it. Note that enabling a feature will *irrevocably*
flag the maps in the monitor as having once had that feature enabled, in order
to aid debugging and support processes.

Inline data
-----------
By default, all CephFS file data is stored in RADOS objects. The inline data
feature enables small files (generally <2KB) to be stored in the inode
and served out of the MDS. This may improve small-file performance but increases
load on the MDS. It is not sufficiently tested to be supported at this time,
although failures within it are unlikely to make non-inlined data inaccessible.

Inline data has always been off by default and requires setting
the "inline_data" flag.

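As a rough sketch only (``<fs name>`` is a placeholder, and some releases
require an additional confirmation option with this command), enabling the
flag looks like:

::

    ceph fs set <fs name> inline_data true
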
Multi-MDS filesystem clusters
-----------------------------
CephFS has been designed from the ground up to support fragmenting the metadata
hierarchy across multiple active metadata servers, to allow horizontal scaling
to arbitrary throughput requirements. Unfortunately, doing so requires a lot
more working code than having a single MDS which is authoritative over the
entire filesystem namespace.

Multiple active MDSes are generally stable under trivial workloads, but often
break in the presence of any failure, and do not have enough testing to offer
any stability guarantees. If a filesystem with multiple active MDSes does
experience failure, it will require (generally extensive) manual intervention.
There are serious known bugs.

Multi-MDS filesystems have always required explicitly increasing the "max_mds"
value and, since Jewel, have been further protected with the "allow_multimds"
flag.

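A minimal sketch of turning this on (``<fs name>`` is a placeholder, and the
confirmation option reflects the Jewel-era syntax), running two active MDS
daemons looks like:

::

    ceph fs set <fs name> allow_multimds true --yes-i-really-mean-it
    ceph fs set <fs name> max_mds 2
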
Mantle: Programmable Metadata Load Balancer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Mantle is a programmable metadata balancer built into the MDS. The idea is to
protect the mechanisms for balancing load (migration, replication,
fragmentation) but stub out the balancing policies using Lua. For details, see
:doc:`/cephfs/mantle`.

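As a sketch, assuming a Lua policy script (for instance the sample
``greedyspill.lua`` balancer from the Mantle documentation) has been made
available to the MDS as described there, switching policies is a single command:

::

    ceph fs set <fs name> balancer greedyspill.lua
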
Snapshots
---------
Like multiple active MDSes, CephFS is designed from the ground up to support
snapshotting of arbitrary directories. There are no known bugs at the time of
writing, but there is insufficient testing to provide stability guarantees and
every expansion of testing has generally revealed new issues. If you do enable
snapshots and experience failure, manual intervention will be needed.

Snapshots are known not to work properly with multiple filesystems (below) in
some cases. Specifically, if you share a pool for multiple FSes and delete
a snapshot in one FS, expect to lose snapshotted file data in any other FS using
snapshots. See the :doc:`/dev/cephfs-snapshots` page for more information.

Snapshots are known not to work with multi-MDS filesystems.

Snapshotting was blocked off with the "allow_new_snaps" flag prior to Firefly.

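A minimal sketch of enabling snapshots and taking one (``<fs name>``, the
mount point, and the snapshot name are all placeholders):

::

    ceph fs set <fs name> allow_new_snaps true --yes-i-really-mean-it
    mkdir /mnt/cephfs/some/dir/.snap/my-snapshot
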
Multiple filesystems within a Ceph cluster
------------------------------------------
Code was merged prior to the Jewel release which enables administrators
to create multiple independent CephFS filesystems within a single Ceph cluster.
These independent filesystems have their own set of active MDSes, cluster maps,
and data. But the feature required extensive changes to data structures which
are not yet fully qualified, and has security implications which have not all
been identified or resolved.

There are no known bugs, but any failures which do result from having multiple
active filesystems in your cluster will require manual intervention and, so far,
will not have been experienced by anybody else -- knowledgeable help will be
extremely limited. You also probably do not have the security or isolation
guarantees you want, or think you have, when running multiple filesystems.

Note that snapshots and multiple filesystems are *not* tested in combination
and may not work together; see above.

Multiple filesystems were available starting in the Jewel release candidates
but were protected behind the "enable_multiple" flag before the final release.

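A rough sketch of allowing and creating a second filesystem (``<fs name>`` and
the pool names are placeholders):

::

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph fs new <fs name> <metadata pool> <data pool>

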
Previously experimental features
================================

Directory Fragmentation
-----------------------

Directory fragmentation was considered experimental prior to the *Luminous*
(12.2.x) release. It is now enabled by default on new filesystems. To enable
directory fragmentation on filesystems created with older versions of Ceph, set
the ``allow_dirfrags`` flag on the filesystem:

::

    ceph fs set <filesystem name> allow_dirfrags 1