.. _ceph-file-system:

=================
 Ceph File System
=================

The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use-cases like shared home
directories, HPC scratch space, and distributed workflow shared storage.

CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher-throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. Because no gateway or broker mediates data I/O for clients, workloads
may scale linearly with the size of the underlying RADOS object store.

Access to data is coordinated through the cluster of MDS, which serve as
authorities for the state of the distributed metadata cache cooperatively
maintained by clients and MDS. Mutations to metadata are aggregated by each MDS
into a series of efficient writes to a journal on RADOS; no metadata state is
stored locally by the MDS. This model allows for coherent and rapid
collaboration between clients within the context of a POSIX file system.

.. image:: cephfs-architecture.svg
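
One way to see this separation of metadata and data in practice is to create a
file system from existing pools by hand, rather than with ``ceph fs volume
create``; the pool and file system names below are illustrative:

.. code:: bash

    # File data and file metadata live in separate RADOS pools
    ceph osd pool create cephfs_data
    ceph osd pool create cephfs_metadata

    # The MDS cluster serves metadata from cephfs_metadata, while
    # clients read and write file data directly to cephfs_data
    ceph fs new cephfs cephfs_metadata cephfs_data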

CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use-case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).


Getting Started with CephFS
^^^^^^^^^^^^^^^^^^^^^^^^^^^

For most deployments of Ceph, setting up a CephFS file system is as simple as:

.. code:: bash

    ceph fs volume create <fs name>

The Ceph `Orchestrator`_ will automatically create and configure MDS for
your file system if the back-end deployment technology supports it (see
the `Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
as needed`_.
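
Where the orchestrator is available, MDS placement can also be managed
explicitly. A minimal sketch (the placement count is illustrative):

.. code:: bash

    # Ask the orchestrator to schedule MDS daemons for the file system
    ceph orch apply mds <fs name> --placement=3

    # Confirm that the file system and its MDS are healthy
    ceph fs status <fs name>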

Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, a command-line shell utility,
`cephfs-shell`_, is available for interactive access or scripting.
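
As a quick orientation, a file system is typically mounted either with the
kernel driver or with ``ceph-fuse``; the mount point and client name below are
placeholders, and the linked pages cover the authentication prerequisites:

.. code:: bash

    # Kernel driver: mount the file system root as client.foo
    mkdir -p /mnt/cephfs
    mount -t ceph :/ /mnt/cephfs -o name=foo

    # Alternatively, mount with the FUSE client
    ceph-fuse --id foo /mnt/cephfs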

.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: cephfs-shell


.. raw:: html

    <!---

Administration
^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Create a CephFS file system <createfs>
    Administrative commands <administration>
    Creating Multiple File Systems <multifs>
    Provision/Add/Remove MDS(s) <add-remove-mds>
    MDS failover and standby configuration <standby>
    MDS Cache Configuration <cache-configuration>
    MDS Configuration Settings <mds-config-ref>
    Manual: ceph-mds <../../man/8/ceph-mds>
    Export over NFS <nfs>
    Application best practices <app-best-practices>
    FS volume and subvolumes <fs-volumes>
    CephFS Quotas <quota>
    Health messages <health-messages>
    Upgrading old file systems <upgrading>
    CephFS Top Utility <cephfs-top>
    Scheduled Snapshots <snap-schedule>
    CephFS Snapshot Mirroring <cephfs-mirroring>

.. raw:: html

    <!---

Mounting CephFS
^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Client Configuration Settings <client-config-ref>
    Client Authentication <client-auth>
    Mount CephFS: Prerequisites <mount-prerequisites>
    Mount CephFS using Kernel Driver <mount-using-kernel-driver>
    Mount CephFS using FUSE <mount-using-fuse>
    Mount CephFS on Windows <ceph-dokan>
    Use the CephFS Shell <cephfs-shell>
    Supported Features of Kernel Driver <kernel-features>
    Manual: ceph-fuse <../../man/8/ceph-fuse>
    Manual: mount.ceph <../../man/8/mount.ceph>
    Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>


.. raw:: html

    <!---

CephFS Concepts
^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    MDS States <mds-states>
    POSIX compatibility <posix>
    MDS Journaling <mds-journaling>
    File layouts <file-layouts>
    Distributed Metadata Cache <mdcache>
    Dynamic Metadata Management in CephFS <dynamic-metadata-management>
    CephFS IO Path <cephfs-io-path>
    LazyIO <lazyio>
    Directory fragmentation <dirfrags>
    Multiple active MDS daemons <multimds>

.. raw:: html

    <!---

Troubleshooting and Disaster Recovery
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :hidden:

    Client eviction <eviction>
    Scrubbing the File System <scrub>
    Handling full file systems <full>
    Metadata repair <disaster-recovery-experts>
    Troubleshooting <troubleshooting>
    Disaster recovery <disaster-recovery>
    cephfs-journal-tool <cephfs-journal-tool>

.. raw:: html

    <!---

Developer Guides
^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Journaler Configuration <journaler>
    Client's Capabilities <capabilities>
    Java and Python bindings <api/index>
    Mantle <mantle>

196
197 <!---
198
199 Additional Details
200 ^^^^^^^^^^^^^^^^^^
201
202 .. raw:: html
203
204 --->
205
206 .. toctree::
207 :maxdepth: 1
208 :hidden:
209
210 Experimental Features <experimental-features>