-.. _ceph-filesystem:
+.. _ceph-file-system:
=================
- Ceph Filesystem
+ Ceph File System
=================
-The Ceph Filesystem (CephFS) is a POSIX-compliant filesystem that uses
-a Ceph Storage Cluster to store its data. The Ceph filesystem uses the same Ceph
-Storage Cluster system as Ceph Block Devices, Ceph Object Storage with its S3
-and Swift APIs, or native bindings (librados).
+The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
+top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
+a state-of-the-art, multi-use, highly available, and performant file store for
+a variety of applications, including traditional use-cases like shared home
+directories, HPC scratch space, and distributed workflow shared storage.
-.. note:: If you are evaluating CephFS for the first time, please review
- the best practices for deployment: :doc:`/cephfs/best-practices`
+CephFS achieves these goals through the use of some novel architectural
+choices. Notably, file metadata is stored in a separate RADOS pool from file
+data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
+which may scale to support higher throughput metadata workloads. Clients of
+the file system have direct access to RADOS for reading and writing file data
+blocks. For this reason, workloads may linearly scale with the size of the
+underlying RADOS object store; that is, there is no gateway or broker mediating
+data I/O for clients.
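+
+For illustration, the split between metadata and data pools becomes visible
+once a file system exists. This is only a sketch: the pool names below are
+examples, and actual names depend on how the file system was created.
+
+.. code:: bash
+
+  # list file systems and the RADOS pools backing them
+  ceph fs ls
+  # name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ]
+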
-.. ditaa::
- +-----------------------+ +------------------------+
- | | | CephFS FUSE |
- | | +------------------------+
- | |
- | | +------------------------+
- | CephFS Kernel Object | | CephFS Library |
- | | +------------------------+
- | |
- | | +------------------------+
- | | | librados |
- +-----------------------+ +------------------------+
+Access to data is coordinated through the cluster of MDS, which serve as
+authorities for the state of the distributed metadata cache that clients and
+MDS cooperatively maintain. Mutations to metadata are aggregated by each MDS
+into a series of efficient writes to a journal on RADOS; no metadata state is
+stored locally by the MDS. This model allows for coherent and rapid
+collaboration between clients within the context of a POSIX file system.
- +---------------+ +---------------+ +---------------+
- | OSDs | | MDSs | | Monitors |
- +---------------+ +---------------+ +---------------+
+.. image:: cephfs-architecture.svg
+CephFS is the subject of numerous academic papers for its novel designs and
+contributions to file system research. It is the oldest storage interface in
+Ceph and was once the primary use-case for RADOS. Now it is joined by two
+other storage interfaces to form a modern unified storage system: RBD (Ceph
+Block Devices) and RGW (Ceph Object Storage Gateway).
-Using CephFS
-============
-Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server` in
-your Ceph Storage Cluster.
+Getting Started with CephFS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+For most deployments of Ceph, setting up a CephFS file system is as simple as:
+
+.. code:: bash
+
+ ceph fs volume create <fs name>
+
+The Ceph `Orchestrator`_ will automatically create and configure MDS for
+your file system if the back-end deployment technology supports it (see
+`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
+as needed`_.
+
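+If the orchestrator deployed the MDS for you, one quick way to verify is to
+check the file system's status. This is only a sketch; exact output varies by
+release, and ``<fs name>`` is the same placeholder used above.
+
+.. code:: bash
+
+  # list MDS services managed by the orchestrator
+  ceph orch ls mds
+  # show the active and standby MDS daemons for the file system
+  ceph fs status <fs name>
+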
+Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
+Prerequisites`_ page. Additionally, the `cephfs-shell`_ command-line utility
+is available for interactive access or scripting.
+
+.. _Orchestrator: ../mgr/orchestrator
+.. _deploy MDS manually as needed: add-remove-mds
+.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
+.. _Mount CephFS\: Prerequisites: mount-prerequisites
+.. _cephfs-shell: cephfs-shell
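+
+As a quick illustration of what mounting looks like (a sketch only: ``foo``
+and ``/mnt/mycephfs`` are placeholder names, and the prerequisites page covers
+creating the client user and distributing its keyring):
+
+.. code:: bash
+
+  # kernel driver, resolving monitors from the local ceph.conf
+  mount -t ceph :/ /mnt/mycephfs -o name=foo
+
+  # or the FUSE client
+  ceph-fuse --id foo /mnt/mycephfs
+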
.. raw:: html
- <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
- <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Metadata Server</h3>
+ <!---
+
+Administration
+^^^^^^^^^^^^^^
-To run the Ceph Filesystem, you must have a running Ceph Storage Cluster with at
-least one :term:`Ceph Metadata Server` running.
+.. raw:: html
+ --->
.. toctree::
- :maxdepth: 1
+ :maxdepth: 1
+ :hidden:
- Add/Remove MDS(s) <add-remove-mds>
- MDS states <mds-states>
- MDS failover and standby configuration <standby>
- MDS Configuration Settings <mds-config-ref>
- Client Configuration Settings <client-config-ref>
- Journaler Configuration <journaler>
- Manpage ceph-mds <../../man/8/ceph-mds>
+ Create a CephFS file system <createfs>
+ Administrative commands <administration>
+ Creating Multiple File Systems <multifs>
+ Provision/Add/Remove MDS(s) <add-remove-mds>
+ MDS failover and standby configuration <standby>
+ MDS Cache Configuration <cache-configuration>
+ MDS Configuration Settings <mds-config-ref>
+ Manual: ceph-mds <../../man/8/ceph-mds>
+ Export over NFS <nfs>
+ Export over NFS with volume nfs interface <fs-nfs-exports>
+ Application best practices <app-best-practices>
+ FS volume and subvolumes <fs-volumes>
+ CephFS Quotas <quota>
+ Health messages <health-messages>
+ Upgrading old file systems <upgrading>
+ CephFS Top Utility <cephfs-top>
+ Scheduled Snapshots <snap-schedule>
+ CephFS Snapshot Mirroring <cephfs-mirroring>
+
+.. raw:: html
-.. raw:: html
+ <!---
- </td><td><h3>Step 2: Mount CephFS</h3>
+Mounting CephFS
+^^^^^^^^^^^^^^^
-Once you have a healthy Ceph Storage Cluster with at least
-one Ceph Metadata Server, you may create and mount your Ceph Filesystem.
-Ensure that your client has network connectivity and the proper
-authentication keyring.
+.. raw:: html
+
+ --->
.. toctree::
- :maxdepth: 1
+ :maxdepth: 1
+ :hidden:
+
+ Client Configuration Settings <client-config-ref>
+ Client Authentication <client-auth>
+ Mount CephFS: Prerequisites <mount-prerequisites>
+ Mount CephFS using Kernel Driver <mount-using-kernel-driver>
+ Mount CephFS using FUSE <mount-using-fuse>
+ Mount CephFS on Windows <ceph-dokan>
+ Use the CephFS Shell <cephfs-shell>
+ Supported Features of Kernel Driver <kernel-features>
+ Manual: ceph-fuse <../../man/8/ceph-fuse>
+ Manual: mount.ceph <../../man/8/mount.ceph>
+ Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>
+
- Create CephFS <createfs>
- Mount CephFS <kernel>
- Mount CephFS as FUSE <fuse>
- Mount CephFS in fstab <fstab>
- Use the CephFS Shell <cephfs-shell>
- Supported Features of Kernel Driver <kernel-features>
- Manpage ceph-fuse <../../man/8/ceph-fuse>
- Manpage mount.ceph <../../man/8/mount.ceph>
- Manpage mount.fuse.ceph <../../man/8/mount.fuse.ceph>
+.. raw:: html
+
+ <!---
+CephFS Concepts
+^^^^^^^^^^^^^^^
-.. raw:: html
+.. raw:: html
- </td><td><h3>Additional Details</h3>
+ --->
.. toctree::
- :maxdepth: 1
+ :maxdepth: 1
+ :hidden:
- Deployment best practices <best-practices>
MDS States <mds-states>
- Administrative commands <administration>
- Understanding MDS Cache Size Limits <cache-size-limits>
POSIX compatibility <posix>
- Experimental Features <experimental-features>
- CephFS Quotas <quota>
- Using Ceph with Hadoop <hadoop>
- cephfs-journal-tool <cephfs-journal-tool>
+ MDS Journaling <mds-journaling>
File layouts <file-layouts>
- Client eviction <eviction>
- Handling full filesystems <full>
- Health messages <health-messages>
- Troubleshooting <troubleshooting>
- Disaster recovery <disaster-recovery>
- Client authentication <client-auth>
- Upgrading old filesystems <upgrading>
- Configuring directory fragmentation <dirfrags>
- Configuring multiple active MDS daemons <multimds>
- Export over NFS <nfs>
- Application best practices <app-best-practices>
- Scrub <scrub>
+ Distributed Metadata Cache <mdcache>
+ Dynamic Metadata Management in CephFS <dynamic-metadata-management>
+ CephFS IO Path <cephfs-io-path>
+ LazyIO <lazyio>
+ Directory fragmentation <dirfrags>
+ Multiple active MDS daemons <multimds>
+
+
+.. raw:: html
+
+ <!---
+
+Troubleshooting and Disaster Recovery
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. raw:: html
+
+ --->
.. toctree::
:hidden:
- Advanced: Metadata repair <disaster-recovery-experts>
+ Client eviction <eviction>
+ Scrubbing the File System <scrub>
+ Handling full file systems <full>
+ Metadata repair <disaster-recovery-experts>
+ Troubleshooting <troubleshooting>
+ Disaster recovery <disaster-recovery>
+ cephfs-journal-tool <cephfs-journal-tool>
+
.. raw:: html
- </td></tr></tbody></table>
+ <!---
-For developers
-==============
+Developer Guides
+^^^^^^^^^^^^^^^^
+
+.. raw:: html
+
+ --->
.. toctree::
- :maxdepth: 1
+ :maxdepth: 1
+ :hidden:
+ Journaler Configuration <journaler>
Client's Capabilities <capabilities>
- libcephfs <../../api/libcephfs-java/>
+ Java and Python bindings <api/index>
Mantle <mantle>
+
+.. raw:: html
+
+ <!---
+
+Additional Details
+^^^^^^^^^^^^^^^^^^
+
+.. raw:: html
+
+ --->
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+
+ Experimental Features <experimental-features>