.. _ceph-file-system:

=================
 Ceph File System
=================

The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use cases such as shared home
directories, HPC scratch space, and distributed workflow shared storage.

CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher-throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. For this reason, workloads may scale linearly with the size of the
underlying RADOS object store; that is, there is no gateway or broker
mediating data I/O for clients.

Access to data is coordinated through the cluster of MDS, which serve as
authorities for the state of the distributed metadata cache cooperatively
maintained by clients and MDS. Mutations to metadata are aggregated by each MDS
into a series of efficient writes to a journal on RADOS; no metadata state is
stored locally by the MDS. This model allows for coherent and rapid
collaboration between clients within the context of a POSIX file system.

.. image:: cephfs-architecture.svg

CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).


Getting Started with CephFS
^^^^^^^^^^^^^^^^^^^^^^^^^^^

For most deployments of Ceph, setting up a CephFS file system is as simple as:

.. code:: bash

    ceph fs volume create <fs name>

The Ceph `Orchestrator`_ will automatically create and configure MDS for
your file system if the back-end deployment technology supports it (see the
`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
as needed`_.

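For example, after creating a volume you can confirm that MDS daemons were
deployed and that the file system is healthy. This is an illustrative sketch:
the file system name ``cephfs`` is an assumption, not a required name.

.. code:: bash

    # Create a file system named "cephfs" (example name).
    ceph fs volume create cephfs

    # Verify that the MDS are up and the file system rank is active.
    ceph fs status cephfs

    # Check overall cluster health.
    ceph -s
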
Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, a command-line shell utility,
`cephfs-shell`_, is available for interactive access or scripting.

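As a brief sketch, a client mount via the kernel driver or FUSE might look
like the following. The mount point and the client user name ``foo`` are
assumptions; see the linked pages for prerequisites such as distributing the
cluster configuration and the client keyring.

.. code:: bash

    # Mount with the kernel driver (reads monitor addresses from ceph.conf).
    sudo mkdir -p /mnt/cephfs
    sudo mount -t ceph :/ /mnt/cephfs -o name=foo

    # Alternatively, mount with the FUSE client.
    sudo ceph-fuse /mnt/cephfs
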
.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: cephfs-shell


.. raw:: html

    <!---

Administration
^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Create a CephFS file system <createfs>
    Administrative commands <administration>
    Provision/Add/Remove MDS(s) <add-remove-mds>
    MDS failover and standby configuration <standby>
    MDS Cache Size Limits <cache-size-limits>
    MDS Configuration Settings <mds-config-ref>
    Manual: ceph-mds <../../man/8/ceph-mds>
    Export over NFS <nfs>
    Application best practices <app-best-practices>
    FS volume and subvolumes <fs-volumes>
    CephFS Quotas <quota>
    Health messages <health-messages>
    Upgrading old file systems <upgrading>

.. raw:: html

    <!---

Mounting CephFS
^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Client Configuration Settings <client-config-ref>
    Client Authentication <client-auth>
    Mount CephFS: Prerequisites <mount-prerequisites>
    Mount CephFS using Kernel Driver <mount-using-kernel-driver>
    Mount CephFS using FUSE <mount-using-fuse>
    Use the CephFS Shell <cephfs-shell>
    Supported Features of Kernel Driver <kernel-features>
    Manual: ceph-fuse <../../man/8/ceph-fuse>
    Manual: mount.ceph <../../man/8/mount.ceph>
    Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>

.. raw:: html

    <!---

CephFS Concepts
^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    MDS States <mds-states>
    POSIX compatibility <posix>
    MDS Journaling <mds-journaling>
    File layouts <file-layouts>
    Distributed Metadata Cache <mdcache>
    Dynamic Metadata Management in CephFS <dynamic-metadata-management>
    CephFS IO Path <cephfs-io-path>
    LazyIO <lazyio>
    Directory fragmentation <dirfrags>
    Multiple active MDS daemons <multimds>

.. raw:: html

    <!---

Troubleshooting and Disaster Recovery
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :hidden:

    Client eviction <eviction>
    Scrubbing the File System <scrub>
    Handling full file systems <full>
    Metadata repair <disaster-recovery-experts>
    Troubleshooting <troubleshooting>
    Disaster recovery <disaster-recovery>
    cephfs-journal-tool <cephfs-journal-tool>

.. raw:: html

    <!---

Developer Guides
^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Journaler Configuration <journaler>
    Client's Capabilities <capabilities>
    libcephfs for Java <../../api/libcephfs-java/>
    Mantle <mantle>

.. raw:: html

    <!---

Additional Details
^^^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Experimental Features <experimental-features>