The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use-cases like shared home
directories, HPC scratch space, and distributed workflow shared storage.

CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher-throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. For this reason, workloads may linearly scale with the size of the
underlying RADOS object store; that is, there is no gateway or broker
mediating data I/O for clients.

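
The separation of metadata and data pools is visible in the manual file
system creation path. A minimal sketch, assuming a running cluster and using
example pool and file system names:

```shell
# Create separate RADOS pools for metadata and data (names are examples).
ceph osd pool create cephfs_metadata
ceph osd pool create cephfs_data

# Bind them together as a file system; the MDS cluster serves metadata
# from the first pool, while clients write file data directly to the second.
ceph fs new myfs cephfs_metadata cephfs_data
```

Most deployments should instead use ``ceph fs volume create``, described
below, which creates and configures these pools automatically.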
Access to data is coordinated through the cluster of MDS which serve as
authorities for the state of the distributed metadata cache cooperatively
maintained by clients and MDS. Mutations to metadata are aggregated by each MDS
into a series of efficient writes to a journal on RADOS; no metadata state is
stored locally by the MDS. This model allows for coherent and rapid
collaboration between clients within the context of a POSIX file system.

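
Because each MDS journals its metadata mutations to RADOS, the journal can be
examined directly. A hedged example, assuming a file system named ``cephfs``
and inspecting the journal of rank 0:

```shell
# Check the integrity of the metadata journal for rank 0 of file system "cephfs".
cephfs-journal-tool --rank=cephfs:0 journal inspect
```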
.. image:: cephfs-architecture.svg

CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use-case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).

Getting Started with CephFS
^^^^^^^^^^^^^^^^^^^^^^^^^^^

For most deployments of Ceph, setting up a CephFS file system is as simple as:

.. code-block:: bash

   ceph fs volume create <fs name>

The Ceph `Orchestrator`_ will automatically create and configure MDS for
your file system if the back-end deployment technology supports it (see
`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
as needed`_.

Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, a command-line shell utility is available
for interactive access or scripting via the `cephfs-shell`_.

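
Once the prerequisites are satisfied, mounting can be sketched as follows;
the CephX user (``admin``), file system name (``myfs``), and mount point are
example values:

```shell
# Mount via the kernel driver (device string: <user>@<fsid>.<fs_name>=<path>;
# the fsid may be omitted after the "@" and will be resolved from ceph.conf).
mkdir -p /mnt/mycephfs
mount -t ceph admin@.myfs=/ /mnt/mycephfs

# Or mount via FUSE instead:
ceph-fuse /mnt/mycephfs
```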
.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: cephfs-shell

Administration
^^^^^^^^^^^^^^

.. toctree::
   :maxdepth: 1

   Create a CephFS file system <createfs>
   Administrative commands <administration>
   Creating Multiple File Systems <multifs>
   Provision/Add/Remove MDS(s) <add-remove-mds>
   MDS failover and standby configuration <standby>
   MDS Cache Configuration <cache-configuration>
   MDS Configuration Settings <mds-config-ref>
   Manual: ceph-mds <../../man/8/ceph-mds>
   Export over NFS with volume nfs interface <fs-nfs-exports>
   Application best practices <app-best-practices>
   FS volume and subvolumes <fs-volumes>
   Health messages <health-messages>
   Upgrading old file systems <upgrading>
   CephFS Top Utility <cephfs-top>
   Scheduled Snapshots <snap-schedule>
   CephFS Snapshot Mirroring <cephfs-mirroring>

Mounting CephFS
^^^^^^^^^^^^^^^

.. toctree::
   :maxdepth: 1

   Client Configuration Settings <client-config-ref>
   Client Authentication <client-auth>
   Mount CephFS: Prerequisites <mount-prerequisites>
   Mount CephFS using Kernel Driver <mount-using-kernel-driver>
   Mount CephFS using FUSE <mount-using-fuse>
   Mount CephFS on Windows <ceph-dokan>
   Use the CephFS Shell <cephfs-shell>
   Supported Features of Kernel Driver <kernel-features>
   Manual: ceph-fuse <../../man/8/ceph-fuse>
   Manual: mount.ceph <../../man/8/mount.ceph>
   Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>

CephFS Concepts
^^^^^^^^^^^^^^^

.. toctree::
   :maxdepth: 1

   MDS States <mds-states>
   POSIX compatibility <posix>
   MDS Journaling <mds-journaling>
   File layouts <file-layouts>
   Distributed Metadata Cache <mdcache>
   Dynamic Metadata Management in CephFS <dynamic-metadata-management>
   CephFS IO Path <cephfs-io-path>
   Directory fragmentation <dirfrags>
   Multiple active MDS daemons <multimds>

Troubleshooting and Disaster Recovery
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. toctree::
   :maxdepth: 1

   Client eviction <eviction>
   Scrubbing the File System <scrub>
   Handling full file systems <full>
   Metadata repair <disaster-recovery-experts>
   Troubleshooting <troubleshooting>
   Disaster recovery <disaster-recovery>
   cephfs-journal-tool <cephfs-journal-tool>

Additional Details
^^^^^^^^^^^^^^^^^^

.. toctree::
   :maxdepth: 1

   Journaler Configuration <journaler>
   Client's Capabilities <capabilities>
   Java and Python bindings <api/index>
   Experimental Features <experimental-features>