=================
 Ceph Filesystem
=================

The :term:`Ceph Filesystem` (Ceph FS) is a POSIX-compliant filesystem that uses
a Ceph Storage Cluster to store its data. The Ceph Filesystem uses the same Ceph
Storage Cluster system as Ceph Block Devices, Ceph Object Storage with its S3
and Swift APIs, and the native bindings (librados).

.. note:: If you are evaluating CephFS for the first time, please review
          the best practices for deployment: :doc:`/cephfs/best-practices`

.. ditaa::
            +-----------------------+  +------------------------+
            |                       |  |      CephFS FUSE       |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |  CephFS Kernel Object |  |     CephFS Library     |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |                       |  |        librados        |
            +-----------------------+  +------------------------+

            +---------------+ +---------------+ +---------------+
            |      OSDs     | |      MDSs     | |    Monitors   |
            +---------------+ +---------------+ +---------------+

Using CephFS
============

Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server` in
your Ceph Storage Cluster.

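
You can check whether a metadata server is already present with the ``ceph``
CLI. A minimal sketch, run from a node that holds an admin keyring (the exact
output format varies between releases)::

    ceph mds stat    # summary of MDS daemons and their current states
    ceph fs ls       # any filesystems that have already been created
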
.. raw:: html

    <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
    <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Metadata Server</h3>

To run the Ceph Filesystem, you must have a running Ceph Storage Cluster with at
least one :term:`Ceph Metadata Server` running.

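If your cluster was deployed with ``ceph-deploy``, adding a metadata server is
a single command, shown here as a rough sketch (the hostname ``node1`` is an
assumption; substitute one of your own nodes)::

    ceph-deploy mds create node1

The pages below cover MDS deployment, standby and failover behaviour, and the
related configuration settings in more detail.
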
.. toctree::
    :maxdepth: 1

    Add/Remove MDS(s) <../../rados/deployment/ceph-deploy-mds>
    MDS failover and standby configuration <standby>
    MDS Configuration Settings <mds-config-ref>
    Client Configuration Settings <client-config-ref>
    Journaler Configuration <journaler>
    Manpage ceph-mds <../../man/8/ceph-mds>

.. raw:: html

    </td><td><h3>Step 2: Mount CephFS</h3>

Once you have a healthy Ceph Storage Cluster with at least
one Ceph Metadata Server, you may create and mount your Ceph Filesystem.
Ensure that your client has network connectivity and the proper
authentication keyring.
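
Creating a filesystem requires two RADOS pools, one for data and one for
metadata, followed by ``ceph fs new``. A minimal sketch; the pool names,
placement-group counts, and filesystem name used here are illustrative
assumptions rather than required values::

    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data
    ceph fs ls    # confirm the new filesystem is listed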

.. toctree::
    :maxdepth: 1

    Create CephFS <createfs>
    Mount CephFS <kernel>
    Mount CephFS as FUSE <fuse>
    Mount CephFS in fstab <fstab>
    Manpage ceph-fuse <../../man/8/ceph-fuse>
    Manpage mount.ceph <../../man/8/mount.ceph>
    Manpage mount.fuse.ceph <../../man/8/mount.fuse.ceph>

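
Once the filesystem exists it can be mounted with either the kernel client or
``ceph-fuse``. A rough sketch, assuming a monitor reachable at ``mon1``, the
default ``admin`` user, and its secret stored in ``/etc/ceph/admin.secret``
(all illustrative placeholders)::

    # kernel client
    sudo mkdir -p /mnt/cephfs
    sudo mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # FUSE client
    sudo ceph-fuse -m mon1:6789 /mnt/cephfs

A matching ``/etc/fstab`` entry for the kernel client could look like::

    mon1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 2
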
.. raw:: html

    </td><td><h3>Additional Details</h3>

.. toctree::
    :maxdepth: 1

    Deployment best practices <best-practices>
    Administrative commands <administration>
    POSIX compatibility <posix>
    Experimental Features <experimental-features>
    CephFS Quotas <quota>
    Using Ceph with Hadoop <hadoop>
    cephfs-journal-tool <cephfs-journal-tool>
    File layouts <file-layouts>
    Client eviction <eviction>
    Handling full filesystems <full>
    Health messages <health-messages>
    Troubleshooting <troubleshooting>
    Disaster recovery <disaster-recovery>
    Client authentication <client-auth>
    Upgrading old filesystems <upgrading>
    Configuring directory fragmentation <dirfrags>
    Configuring multiple active MDS daemons <multimds>

.. raw:: html

    </td></tr></tbody></table>

For developers
==============

.. toctree::
    :maxdepth: 1

    Client's Capabilities <capabilities>
    libcephfs <../../api/libcephfs-java/>
    Mantle <mantle>