.. _ceph-file-system:

=================
 Ceph File System
=================

The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use cases like shared home
directories, HPC scratch space, and distributed workflow shared storage.

CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher-throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. For this reason, workloads may scale linearly with the size of the
underlying RADOS object store; that is, there is no gateway or broker that
mediates data I/O for clients.
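
For example, on a running cluster you can confirm that a file system's metadata
and data live in separate RADOS pools. A minimal check, assuming a file system
created with ``ceph fs volume create cephfs`` (the pool names in the sample
output are only illustrative defaults):

.. prompt:: bash

   # List file systems together with their metadata and data pools.
   # Expected output is along the lines of:
   #   name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ]
   ceph fs ls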

Access to data is coordinated through the cluster of MDS, which serve as
authorities for the state of the distributed metadata cache that is
cooperatively maintained by clients and MDS. Each MDS aggregates mutations to
metadata into a series of efficient writes to a journal on RADOS; no metadata
state is stored locally by the MDS. This model allows for coherent and rapid
collaboration between clients within the context of a POSIX file system.
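
Because the MDS journal is just data in RADOS, it can be examined with standard
CephFS tooling. A minimal, read-only sketch for diagnosis, assuming a file
system named ``cephfs`` whose rank 0 MDS journal you want to look at:

.. prompt:: bash

   # Verify the integrity of the rank 0 journal of file system "cephfs":
   cephfs-journal-tool --rank=cephfs:0 journal inspect
   # Print the journal header (write/expire positions and layout):
   cephfs-journal-tool --rank=cephfs:0 header get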

.. image:: cephfs-architecture.svg

CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).


Getting Started with CephFS
^^^^^^^^^^^^^^^^^^^^^^^^^^^

For most deployments of Ceph, setting up your first CephFS file system is as simple as:

.. prompt:: bash

   # Create a CephFS volume named (for example) "cephfs":
   ceph fs volume create cephfs

The Ceph `Orchestrator`_ will automatically create and configure MDS for
your file system if the back-end deployment technology supports it (see
`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
as needed`_. You can also `create other CephFS volumes`_.
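
If you need to adjust MDS placement yourself on an orchestrator-managed (for
example, cephadm) cluster, that is also done through the orchestrator. A
minimal sketch, assuming the file system is named ``cephfs`` and that two MDS
daemons are wanted (the placement count is only an example):

.. prompt:: bash

   # Ask the orchestrator to run two MDS daemons for file system "cephfs":
   ceph orch apply mds cephfs --placement=2
   # Check that the MDS daemons are up and the file system is healthy:
   ceph fs status cephfs
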
Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, a command-line shell utility is available
for interactive access or scripting via `cephfs-shell`_.

.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _create other CephFS volumes: fs-volumes
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: ../man/8/cephfs-shell
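
As a preview of what mounting looks like (the authentication and configuration
prerequisites are covered in the pages linked above), a FUSE mount is roughly
the following; the CephX user name and mount point are only examples:

.. prompt:: bash

   # Mount the file system with the FUSE client as CephX user "foo"
   # (assumes ceph.conf and a keyring for client.foo are already in place):
   mkdir -p /mnt/mycephfs
   ceph-fuse --id foo /mnt/mycephfs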


.. raw:: html

   <!---

Administration
^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Create a CephFS file system <createfs>
   Administrative commands <administration>
   Creating Multiple File Systems <multifs>
   Provision/Add/Remove MDS(s) <add-remove-mds>
   MDS failover and standby configuration <standby>
   MDS Cache Configuration <cache-configuration>
   MDS Configuration Settings <mds-config-ref>
   Manual: ceph-mds <../../man/8/ceph-mds>
   Export over NFS <nfs>
   Application best practices <app-best-practices>
   FS volume and subvolumes <fs-volumes>
   CephFS Quotas <quota>
   Health messages <health-messages>
   Upgrading old file systems <upgrading>
   CephFS Top Utility <cephfs-top>
   Scheduled Snapshots <snap-schedule>
   CephFS Snapshot Mirroring <cephfs-mirroring>

.. raw:: html

   <!---

Mounting CephFS
^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Client Configuration Settings <client-config-ref>
   Client Authentication <client-auth>
   Mount CephFS: Prerequisites <mount-prerequisites>
   Mount CephFS using Kernel Driver <mount-using-kernel-driver>
   Mount CephFS using FUSE <mount-using-fuse>
   Mount CephFS on Windows <ceph-dokan>
   Use the CephFS Shell <../../man/8/cephfs-shell>
   Supported Features of Kernel Driver <kernel-features>
   Manual: ceph-fuse <../../man/8/ceph-fuse>
   Manual: mount.ceph <../../man/8/mount.ceph>
   Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>


.. raw:: html

   <!---

CephFS Concepts
^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   MDS States <mds-states>
   POSIX compatibility <posix>
   MDS Journaling <mds-journaling>
   File layouts <file-layouts>
   Distributed Metadata Cache <mdcache>
   Dynamic Metadata Management in CephFS <dynamic-metadata-management>
   CephFS IO Path <cephfs-io-path>
   LazyIO <lazyio>
   Directory fragmentation <dirfrags>
   Multiple active MDS daemons <multimds>


.. raw:: html

   <!---

Troubleshooting and Disaster Recovery
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :hidden:

   Client eviction <eviction>
   Scrubbing the File System <scrub>
   Handling full file systems <full>
   Metadata repair <disaster-recovery-experts>
   Troubleshooting <troubleshooting>
   Disaster recovery <disaster-recovery>
   cephfs-journal-tool <cephfs-journal-tool>
   Recovering file system after monitor store loss <recover-fs-after-mon-store-loss>


.. raw:: html

   <!---

Developer Guides
^^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Journaler Configuration <journaler>
   Client's Capabilities <capabilities>
   Java and Python bindings <api/index>
   Mantle <mantle>


.. raw:: html

   <!---

Additional Details
^^^^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Experimental Features <experimental-features>