Experimental Features
=====================

CephFS includes a number of experimental features which are not fully stabilized
or qualified for users to turn on in real deployments. We generally do our best
to clearly demarcate these and fence them off so they can't be used by mistake.

Some of these features are closer to being done than others, though. We describe
each of them with an approximation of how risky they are and briefly describe
what is required to enable them. Note that enabling a feature will *irrevocably*
flag the maps in the monitor as having once had it enabled, in order to aid
debugging and support processes.
Inline data
-----------
By default, all CephFS file data is stored in RADOS objects. The inline data
feature enables small files (generally <2KB) to be stored in the inode and
served out of the MDS. This may improve small-file performance but increases
load on the MDS. It is not sufficiently tested to be supported at this time,
although failures within it are unlikely to make non-inlined data inaccessible.

Inline data has always been off by default and requires setting
the "inline_data" flag.

Mantle: Programmable Metadata Load Balancer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Mantle is a programmable metadata balancer built into the MDS. The idea is to
protect the mechanisms for balancing load (migration, replication,
fragmentation) but stub out the balancing policies using Lua. For details, see
:doc:`/cephfs/mantle`.
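
As an illustrative sketch only (assuming the ``balancer`` filesystem setting and
the sample ``greedyspill.lua`` policy described in the Mantle documentation),
activating a Lua policy might look like:

::

    ceph fs set <filesystem name> balancer greedyspill.lua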

Snapshots
---------
Like multiple active MDSes, CephFS is designed from the ground up to support
snapshotting of arbitrary directories. There are no known bugs at the time of
writing, but there is insufficient testing to provide stability guarantees and
every expansion of testing has generally revealed new issues. If you do enable
snapshots and experience failure, manual intervention will be needed.

Snapshots are known not to work properly with multiple filesystems (below) in
some cases. Specifically, if you share a pool for multiple FSes and delete
a snapshot in one FS, expect to lose snapshotted file data in any other FS using
snapshots. See the :doc:`/dev/cephfs-snapshots` page for more information.

Snapshots are known not to work with multi-MDS filesystems.

Snapshotting was blocked off with the "allow_new_snaps" flag prior to Firefly.
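
As a sketch, and assuming the flag is enabled per filesystem with ``ceph fs set``
like the other flags in this document (some releases also require an explicit
confirmation switch), enabling snapshots would look something like:

::

    ceph fs set <filesystem name> allow_new_snaps true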

Multiple filesystems within a Ceph cluster
------------------------------------------
Code merged prior to the Jewel release enables administrators to create
multiple independent CephFS filesystems within a single Ceph cluster.
These independent filesystems have their own set of active MDSes, cluster maps,
and data. But the feature required extensive changes to data structures which
are not yet fully qualified, and it has security implications which are not yet
fully understood or resolved.

There are no known bugs, but any failures which do result from having multiple
active filesystems in your cluster will require manual intervention and, so far,
will not have been experienced by anybody else; knowledgeable help will be
extremely limited. You also probably do not have the security or isolation
guarantees you want or think you have.

Note that snapshots and multiple filesystems are *not* tested in combination
and may not work together; see above.

Multiple filesystems were available starting in the Jewel release candidates
but were protected behind the "enable_multiple" flag before the final release.
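
As a sketch, on releases where the flag is still required, enabling it is a
cluster-wide operation along the lines of:

::

    ceph fs flag set enable_multiple true --yes-i-really-mean-it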

Previously experimental features
================================

Directory Fragmentation
-----------------------

Directory fragmentation was considered experimental prior to the *Luminous*
release (12.2.x). It is now enabled by default on new filesystems. To enable
directory fragmentation on filesystems created with older versions of Ceph, set
the ``allow_dirfrags`` flag on the filesystem:

::

    ceph fs set <filesystem name> allow_dirfrags

Multiple active metadata servers
--------------------------------

Prior to the *Luminous* (12.2.x) release, running multiple active metadata
servers within a single filesystem was considered experimental. Creating
multiple active metadata servers is now permitted by default on new
filesystems.

Filesystems created with older versions of Ceph still require explicitly
enabling multiple active metadata servers as follows:

::

    ceph fs set <filesystem name> allow_multimds

Note that the default size of the active mds cluster (``max_mds``) is
still set to 1 initially.
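
As a sketch, raising the size of the active cluster so that two metadata
daemons become active would look something like:

::

    ceph fs set <filesystem name> max_mds 2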