.. _ceph-file-system:

=================
 Ceph File System
=================

The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use-cases like shared home
directories, HPC scratch space, and distributed workflow shared storage.

CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. For this reason, workloads may linearly scale with the size of the
underlying RADOS object store; that is, there is no gateway or broker mediating
data I/O for clients.
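
For illustration, this separation is visible from the command line: every
CephFS file system is backed by distinct metadata and data pools, which can
also be created by hand. A minimal sketch, where the pool and file system
names are placeholders:

.. code:: bash

    # Create the two backing RADOS pools and tie them together as a file
    # system. Names are examples only.
    ceph osd pool create cephfs_metadata
    ceph osd pool create cephfs_data
    ceph fs new mycephfs cephfs_metadata cephfs_data

    # List file systems together with their metadata and data pools.
    ceph fs ls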

Access to data is coordinated through the cluster of MDS, which serve as
authorities for the state of the distributed metadata cache that is
cooperatively maintained by clients and MDS. Each MDS aggregates metadata
mutations into a series of efficient writes to a journal on RADOS; no metadata
state is stored locally by the MDS. This model allows for coherent and rapid
collaboration between clients within the context of a POSIX file system.
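
Because no metadata state is kept locally, both the MDS ranks and the journal
they write can be examined directly from an admin node. A minimal sketch,
assuming a file system named ``mycephfs`` with a single active rank:

.. code:: bash

    # Show active and standby MDS daemons plus the pools backing the file system.
    ceph fs status mycephfs

    # Read-only sanity check of the rank 0 metadata journal stored in RADOS.
    cephfs-journal-tool --rank=mycephfs:0 journal inspect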

.. image:: cephfs-architecture.svg

CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use-case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).


Getting Started with CephFS
^^^^^^^^^^^^^^^^^^^^^^^^^^^

For most deployments of Ceph, setting up a CephFS file system is as simple as:

.. code:: bash

    ceph fs volume create <fs name>

The Ceph `Orchestrator`_ will automatically create and configure MDS for
your file system if the back-end deployment technology supports it (see
`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
as needed`_.
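
After the volume is created, it may be worth confirming that MDS daemons were
deployed and that one has become active for the new file system. A sketch,
assuming a cephadm-managed cluster; ``<fs name>`` is the same placeholder as
above and ``2`` is only an example daemon count:

.. code:: bash

    # Check that the file system exists and an MDS rank is active.
    ceph fs status <fs name>
    ceph mds stat

    # Optionally adjust how many MDS daemons the orchestrator runs.
    ceph orch apply mds <fs name> --placement=2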

Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, the `cephfs-shell`_ command-line utility
is available for interactive access or scripting.
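
As a quick sketch of what a client mount can look like once the prerequisites
are satisfied (the mount point, client name, and use of the default file
system are assumptions here):

.. code:: bash

    # Kernel driver: the mount.ceph helper reads monitor addresses and the
    # key from /etc/ceph on the client.
    sudo mkdir -p /mnt/mycephfs
    sudo mount -t ceph :/ /mnt/mycephfs -o name=admin

    # FUSE client alternative.
    sudo ceph-fuse /mnt/mycephfs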

.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: cephfs-shell


.. raw:: html

    <!---

Administration
^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Create a CephFS file system <createfs>
    Administrative commands <administration>
    Provision/Add/Remove MDS(s) <add-remove-mds>
    MDS failover and standby configuration <standby>
    MDS Cache Size Limits <cache-size-limits>
    MDS Configuration Settings <mds-config-ref>
    Manual: ceph-mds <../../man/8/ceph-mds>
    Export over NFS <nfs>
    Export over NFS with volume nfs interface <fs-nfs-exports>
    Application best practices <app-best-practices>
    FS volume and subvolumes <fs-volumes>
    CephFS Quotas <quota>
    Health messages <health-messages>
    Upgrading old file systems <upgrading>


.. raw:: html

    <!---

Mounting CephFS
^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Client Configuration Settings <client-config-ref>
    Client Authentication <client-auth>
    Mount CephFS: Prerequisites <mount-prerequisites>
    Mount CephFS using Kernel Driver <mount-using-kernel-driver>
    Mount CephFS using FUSE <mount-using-fuse>
    Use the CephFS Shell <cephfs-shell>
    Supported Features of Kernel Driver <kernel-features>
    Manual: ceph-fuse <../../man/8/ceph-fuse>
    Manual: mount.ceph <../../man/8/mount.ceph>
    Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>


.. raw:: html

    <!---

CephFS Concepts
^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    MDS States <mds-states>
    POSIX compatibility <posix>
    MDS Journaling <mds-journaling>
    File layouts <file-layouts>
    Distributed Metadata Cache <mdcache>
    Dynamic Metadata Management in CephFS <dynamic-metadata-management>
    CephFS IO Path <cephfs-io-path>
    LazyIO <lazyio>
    Directory fragmentation <dirfrags>
    Multiple active MDS daemons <multimds>


.. raw:: html

    <!---

Troubleshooting and Disaster Recovery
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :hidden:

    Client eviction <eviction>
    Scrubbing the File System <scrub>
    Handling full file systems <full>
    Metadata repair <disaster-recovery-experts>
    Troubleshooting <troubleshooting>
    Disaster recovery <disaster-recovery>
    cephfs-journal-tool <cephfs-journal-tool>


.. raw:: html

    <!---

Developer Guides
^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Journaler Configuration <journaler>
    Client's Capabilities <capabilities>
    libcephfs for Java <../../api/libcephfs-java/>
    Mantle <mantle>


.. raw:: html

    <!---

Additional Details
^^^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Experimental Features <experimental-features>