The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use-cases like shared home
directories, HPC scratch space, and distributed workflow shared storage.
CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher-throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. For this reason, workloads may linearly scale with the size of the
underlying RADOS object store; that is, there is no gateway or broker mediating
data I/O for clients.
Access to data is coordinated through the cluster of MDS which serve as
authorities for the state of the distributed metadata cache cooperatively
maintained by clients and MDS. Mutations to metadata are aggregated by each MDS
into a series of efficient writes to a journal on RADOS; no metadata state is
stored locally by the MDS. This model allows for coherent and rapid
collaboration between clients within the context of a POSIX file system.
.. image:: cephfs-architecture.svg
CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use-case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).
Getting Started with CephFS
^^^^^^^^^^^^^^^^^^^^^^^^^^^
For most deployments of Ceph, setting up a CephFS file system is as simple as:

.. code:: bash

    ceph fs volume create <fs name>
The Ceph `Orchestrator`_ will automatically create and configure MDS for
your file system if the back-end deployment technology supports it (see
`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
as needed`_.
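As an illustrative sketch (the file system name ``mycephfs`` is hypothetical,
and all of these commands require a running Ceph cluster with an admin
keyring), creating a volume and verifying that MDS daemons came up might look
like:

.. code:: bash

    # Create the file system; on orchestrator-managed deployments this
    # also deploys the MDS daemons that serve it.
    ceph fs volume create mycephfs

    # Confirm that the file system exists and its MDS ranks are active.
    ceph fs status mycephfs

    # List the MDS daemons managed by the orchestrator.
    ceph orch ps --daemon-type mds

``ceph fs status`` should report one MDS in the ``active`` state (plus any
standbys) once the orchestrator has finished deploying daemons.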
Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, a command-line shell utility is available
for interactive access or scripting via the `cephfs-shell`_.
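As a sketch of the two common mount paths (the mount point ``/mnt/mycephfs``
and the client name ``admin`` are illustrative; see the pages referenced above
for keyring and configuration prerequisites):

.. code:: bash

    mkdir -p /mnt/mycephfs

    # Kernel driver: the mount.ceph helper resolves monitor addresses
    # from the local ceph.conf.
    mount -t ceph :/ /mnt/mycephfs -o name=admin

    # Alternatively, mount with the userspace FUSE client.
    ceph-fuse /mnt/mycephfs

Both approaches require the client host to have network access to the cluster
monitors and a valid client keyring.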
.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: cephfs-shell
Create a CephFS file system <createfs>
Administrative commands <administration>
Creating Multiple File Systems <multifs>
Provision/Add/Remove MDS(s) <add-remove-mds>
MDS failover and standby configuration <standby>
MDS Cache Configuration <cache-configuration>
MDS Configuration Settings <mds-config-ref>
Manual: ceph-mds <../../man/8/ceph-mds>
Application best practices <app-best-practices>
FS volume and subvolumes <fs-volumes>
Health messages <health-messages>
Upgrading old file systems <upgrading>
CephFS Top Utility <cephfs-top>
Scheduled Snapshots <snap-schedule>
CephFS Snapshot Mirroring <cephfs-mirroring>
Client Configuration Settings <client-config-ref>
Client Authentication <client-auth>
Mount CephFS: Prerequisites <mount-prerequisites>
Mount CephFS using Kernel Driver <mount-using-kernel-driver>
Mount CephFS using FUSE <mount-using-fuse>
Mount CephFS on Windows <ceph-dokan>
Use the CephFS Shell <cephfs-shell>
Supported Features of Kernel Driver <kernel-features>
Manual: ceph-fuse <../../man/8/ceph-fuse>
Manual: mount.ceph <../../man/8/mount.ceph>
Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>
MDS States <mds-states>
POSIX compatibility <posix>
MDS Journaling <mds-journaling>
File layouts <file-layouts>
Distributed Metadata Cache <mdcache>
Dynamic Metadata Management in CephFS <dynamic-metadata-management>
CephFS IO Path <cephfs-io-path>
Directory fragmentation <dirfrags>
Multiple active MDS daemons <multimds>
Troubleshooting and Disaster Recovery
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Client eviction <eviction>
Scrubbing the File System <scrub>
Handling full file systems <full>
Metadata repair <disaster-recovery-experts>
Troubleshooting <troubleshooting>
Disaster recovery <disaster-recovery>
cephfs-journal-tool <cephfs-journal-tool>
Journaler Configuration <journaler>
Client's Capabilities <capabilities>
Java and Python bindings <api/index>
Experimental Features <experimental-features>