.. _cephfs-multifs:

Multiple Ceph File Systems
==========================


Beginning with the Pacific release, multiple file system support is stable
and ready to use. This functionality allows configuring separate file systems
with full data separation on separate pools.

Existing clusters must set a flag to enable multiple file systems::

    ceph fs flag set enable_multiple true

New Ceph clusters automatically set this.
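
Whether the flag is already set can be checked by inspecting the FSMap; the
exact output formatting varies by release, but the header of ``ceph fs dump``
reports the ``enable_multiple`` state::

    # Show the FSMap header; look for "enable_multiple: 1"
    ceph fs dump | head

    # If it is not already on, enable multiple file system support
    ceph fs flag set enable_multiple true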


Creating a new Ceph File System
-------------------------------

The new ``volumes`` plugin interface (see: :doc:`/cephfs/fs-volumes`) automates
most of the work of configuring a new file system. The "volume" concept is
simply a new file system. This can be done via::

    ceph fs volume create <fs_name>

Ceph will create the new pools and automate the deployment of new MDS to
support the new file system. The deployment technology used, e.g. cephadm, will
also configure the MDS affinity (see: :ref:`mds-join-fs`) of new MDS daemons to
operate the new file system.


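
For example, creating a second file system and then verifying the result might
look like the following; the file system name ``fs2`` is illustrative::

    # Create a new volume (file system) named fs2; Ceph creates the
    # data and metadata pools and schedules MDS daemons for it
    ceph fs volume create fs2

    # Confirm the new file system and its pools are present
    ceph fs ls

    # Inspect the MDS ranks and pool usage for fs2
    ceph fs status fs2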
Securing access
---------------

The ``fs authorize`` command allows configuring the client's access to a
particular file system. See also :ref:`fs-authorize-multifs`. The client will
only have visibility of authorized file systems, and the Monitors/MDS will
reject access to clients without authorization.


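
For instance, a client keyring restricted to a single file system can be
created as follows; ``fs2`` and ``client.app2`` are illustrative names::

    # Grant client.app2 read/write access to the root of fs2 only
    ceph fs authorize fs2 client.app2 / rw

    # Review the resulting capabilities
    ceph auth get client.app2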
Other Notes
-----------

Multiple file systems do not share pools. This is particularly important for
snapshots but also because no measures are in place to prevent duplicate
inodes. The Ceph commands prevent this dangerous configuration.

Each file system has its own set of MDS ranks. Consequently, each new file
system requires more MDS daemons to operate and increases operational costs.
This can be useful for increasing metadata throughput by application or user
base, but it also adds to the cost of creating a file system. Generally, a
single file system with subtree pinning is a better choice for isolating load
between applications.
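
As a sketch of that alternative, a directory subtree within a single file
system can be pinned to a specific MDS rank via an extended attribute; the
mount point and rank below are illustrative::

    # Pin the subtree /mnt/cephfs/app2 to MDS rank 1 so its metadata
    # load is isolated from subtrees served by rank 0
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/app2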