.. _ceph-file-system:

=================
 Ceph File System
=================

The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use-cases like shared home
directories, HPC scratch space, and distributed workflow shared storage.

CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher-throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. For this reason, workloads may scale linearly with the size of the
underlying RADOS object store; that is, there is no gateway or broker that
mediates data I/O for clients.

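This split is easiest to see when a file system is assembled by hand rather
than through ``ceph fs volume create``; as a minimal sketch (the pool names
and PG counts below are only examples):

.. code:: bash

    # Create separate RADOS pools for file data and file metadata.
    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64

    # Tie the pools together as one file system: the MDS store all
    # metadata in cephfs_metadata, while clients read and write file
    # data blocks directly in cephfs_data.
    ceph fs new mycephfs cephfs_metadata cephfs_data
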
Access to data is coordinated through the cluster of MDS, which serve as
authorities for the state of the distributed metadata cache that clients and
MDS cooperatively maintain. Each MDS aggregates mutations to metadata into a
series of efficient writes to a journal on RADOS; no metadata state is stored
locally by the MDS. This model allows for coherent and rapid collaboration
between clients within the context of a POSIX file system.

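As a rough illustration that MDS state lives in RADOS rather than on local
disk, the journal of rank 0 can be listed and checked directly (assuming a
metadata pool named ``cephfs_metadata`` and a file system named ``mycephfs``):

.. code:: bash

    # Journal objects for MDS rank 0 carry the 200.* name prefix and
    # live in the metadata pool like any other RADOS objects.
    rados -p cephfs_metadata ls | grep '^200\.'

    # Verify the integrity of rank 0's journal with the offline tool.
    cephfs-journal-tool --rank=mycephfs:0 journal inspect
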
.. image:: cephfs-architecture.svg

CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use-case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).


Getting Started with CephFS
^^^^^^^^^^^^^^^^^^^^^^^^^^^

For most deployments of Ceph, setting up a CephFS file system is as simple as:

.. code:: bash

    ceph fs volume create <fs name>

The Ceph `Orchestrator`_ will automatically create and configure MDS for
your file system if the back-end deployment technology supports it (see
`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
as needed`_.

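For example, with the cephadm back-end the MDS daemons can be placed
explicitly and the result checked afterwards (the placement count below is
illustrative):

.. code:: bash

    # Ask the orchestrator to run three MDS daemons for this file system.
    ceph orch apply mds <fs name> --placement=3

    # Confirm that an active MDS has picked up the file system.
    ceph fs status <fs name>
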
Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, a command-line shell utility,
`cephfs-shell`_, is available for interactive access and scripting.

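As a quick sketch of what a mount looks like once those prerequisites are in
place (a ``ceph.conf`` and a client keyring on the node; the mount point and
client name below are examples):

.. code:: bash

    sudo mkdir -p /mnt/mycephfs

    # Mount with the kernel driver ...
    sudo mount -t ceph :/ /mnt/mycephfs -o name=admin

    # ... or with the FUSE client.
    sudo ceph-fuse /mnt/mycephfs
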
.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: cephfs-shell


.. raw:: html

    <!---

Administration
^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Create a CephFS file system <createfs>
    Administrative commands <administration>
    Creating Multiple File Systems <multifs>
    Provision/Add/Remove MDS(s) <add-remove-mds>
    MDS failover and standby configuration <standby>
    MDS Cache Configuration <cache-configuration>
    MDS Configuration Settings <mds-config-ref>
    Manual: ceph-mds <../../man/8/ceph-mds>
    Export over NFS <nfs>
    Export over NFS with volume nfs interface <fs-nfs-exports>
    Application best practices <app-best-practices>
    FS volume and subvolumes <fs-volumes>
    CephFS Quotas <quota>
    Health messages <health-messages>
    Upgrading old file systems <upgrading>
    CephFS Top Utility <cephfs-top>
    Scheduled Snapshots <snap-schedule>
    CephFS Snapshot Mirroring <cephfs-mirroring>

.. raw:: html

    <!---

Mounting CephFS
^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Client Configuration Settings <client-config-ref>
    Client Authentication <client-auth>
    Mount CephFS: Prerequisites <mount-prerequisites>
    Mount CephFS using Kernel Driver <mount-using-kernel-driver>
    Mount CephFS using FUSE <mount-using-fuse>
    Mount CephFS on Windows <ceph-dokan>
    Use the CephFS Shell <cephfs-shell>
    Supported Features of Kernel Driver <kernel-features>
    Manual: ceph-fuse <../../man/8/ceph-fuse>
    Manual: mount.ceph <../../man/8/mount.ceph>
    Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>


.. raw:: html

    <!---

CephFS Concepts
^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    MDS States <mds-states>
    POSIX compatibility <posix>
    MDS Journaling <mds-journaling>
    File layouts <file-layouts>
    Distributed Metadata Cache <mdcache>
    Dynamic Metadata Management in CephFS <dynamic-metadata-management>
    CephFS IO Path <cephfs-io-path>
    LazyIO <lazyio>
    Directory fragmentation <dirfrags>
    Multiple active MDS daemons <multimds>


.. raw:: html

    <!---

Troubleshooting and Disaster Recovery
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :hidden:

    Client eviction <eviction>
    Scrubbing the File System <scrub>
    Handling full file systems <full>
    Metadata repair <disaster-recovery-experts>
    Troubleshooting <troubleshooting>
    Disaster recovery <disaster-recovery>
    cephfs-journal-tool <cephfs-journal-tool>


.. raw:: html

    <!---

Developer Guides
^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Journaler Configuration <journaler>
    Client's Capabilities <capabilities>
    Java and Python bindings <api/index>
    Mantle <mantle>


.. raw:: html

    <!---

Additional Details
^^^^^^^^^^^^^^^^^^

.. raw:: html

    --->

.. toctree::
    :maxdepth: 1
    :hidden:

    Experimental Features <experimental-features>