.. _ceph-file-system:

=================
 Ceph File System
=================

The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use-cases like shared home
directories, HPC scratch space, and distributed workflow shared storage.

CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. For this reason, workloads may scale linearly with the size of the
underlying RADOS object store; that is, there is no gateway or broker mediating
data I/O for clients.
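
This separation is visible when a file system is created by hand from two
pools, one for metadata and one for data. A minimal sketch (the pool and file
system names are only examples; the ``ceph fs volume create`` command shown
below does this for you):

.. prompt:: bash

   ceph osd pool create cephfs_metadata
   ceph osd pool create cephfs_data
   ceph fs new cephfs cephfs_metadata cephfs_data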

Access to data is coordinated through the cluster of MDS which serve as
authorities for the state of the distributed metadata cache cooperatively
maintained by clients and MDS. Mutations to metadata are aggregated by each MDS
into a series of efficient writes to a journal on RADOS; no metadata state is
stored locally by the MDS. This model allows for coherent and rapid
collaboration between clients within the context of a POSIX file system.

.. image:: cephfs-architecture.svg

CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use-case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).


Getting Started with CephFS
^^^^^^^^^^^^^^^^^^^^^^^^^^^

For most deployments of Ceph, setting up your first CephFS file system is as simple as:

.. prompt:: bash

   # Create a CephFS volume named (for example) "cephfs":
   ceph fs volume create cephfs

The Ceph `Orchestrator`_ will automatically create and configure MDS for
your file system if the back-end deployment technology supports it (see
`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
as needed`_. You can also `create other CephFS volumes`_.
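
If you need to manage MDS placement yourself, a minimal sketch (assuming a
cephadm-managed cluster; the daemon count below is only an example) looks like
this:

.. prompt:: bash

   # Ask the orchestrator to run two MDS daemons for the "cephfs" file system:
   ceph orch apply mds cephfs --placement=2
   # Verify that the file system and its MDS daemons are healthy:
   ceph fs status cephfs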

Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, the `cephfs-shell`_ command-line utility
is available for interactive access and scripting.
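
Once the prerequisites (a client ``ceph.conf`` and keyring on the node) are in
place, mounting usually comes down to one command. A minimal sketch, assuming a
recent ``mount.ceph`` helper, a CephX user named ``cephuser``, a file system
named ``cephfs``, and an example mountpoint:

.. prompt:: bash

   mkdir -p /mnt/cephfs
   # Kernel driver; monitors and keys are resolved from ceph.conf and the keyring:
   mount -t ceph cephuser@.cephfs=/ /mnt/cephfs
   # Or, with the FUSE client:
   ceph-fuse --id cephuser /mnt/cephfs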

.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _create other CephFS volumes: fs-volumes
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: ../man/8/cephfs-shell


.. raw:: html

   <!---

Administration
^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Create a CephFS file system <createfs>
   Administrative commands <administration>
   Creating Multiple File Systems <multifs>
   Provision/Add/Remove MDS(s) <add-remove-mds>
   MDS failover and standby configuration <standby>
   MDS Cache Configuration <cache-configuration>
   MDS Configuration Settings <mds-config-ref>
   Manual: ceph-mds <../../man/8/ceph-mds>
   Export over NFS <nfs>
   Application best practices <app-best-practices>
   FS volume and subvolumes <fs-volumes>
   CephFS Quotas <quota>
   Health messages <health-messages>
   Upgrading old file systems <upgrading>
   CephFS Top Utility <cephfs-top>
   Scheduled Snapshots <snap-schedule>
   CephFS Snapshot Mirroring <cephfs-mirroring>

.. raw:: html

   <!---

Mounting CephFS
^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Client Configuration Settings <client-config-ref>
   Client Authentication <client-auth>
   Mount CephFS: Prerequisites <mount-prerequisites>
   Mount CephFS using Kernel Driver <mount-using-kernel-driver>
   Mount CephFS using FUSE <mount-using-fuse>
   Mount CephFS on Windows <ceph-dokan>
   Use the CephFS Shell <../../man/8/cephfs-shell>
   Supported Features of Kernel Driver <kernel-features>
   Manual: ceph-fuse <../../man/8/ceph-fuse>
   Manual: mount.ceph <../../man/8/mount.ceph>
   Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>


.. raw:: html

   <!---

CephFS Concepts
^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   MDS States <mds-states>
   POSIX compatibility <posix>
   MDS Journaling <mds-journaling>
   File layouts <file-layouts>
   Distributed Metadata Cache <mdcache>
   Dynamic Metadata Management in CephFS <dynamic-metadata-management>
   CephFS IO Path <cephfs-io-path>
   LazyIO <lazyio>
   Directory fragmentation <dirfrags>
   Multiple active MDS daemons <multimds>


.. raw:: html

   <!---

Troubleshooting and Disaster Recovery
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :hidden:

   Client eviction <eviction>
   Scrubbing the File System <scrub>
   Handling full file systems <full>
   Metadata repair <disaster-recovery-experts>
   Troubleshooting <troubleshooting>
   Disaster recovery <disaster-recovery>
   cephfs-journal-tool <cephfs-journal-tool>
   Recovering file system after monitor store loss <recover-fs-after-mon-store-loss>


.. raw:: html

   <!---

Developer Guides
^^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Journaler Configuration <journaler>
   Client's Capabilities <capabilities>
   Java and Python bindings <api/index>
   Mantle <mantle>


.. raw:: html

   <!---

Additional Details
^^^^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Experimental Features <experimental-features>