.. _ceph-filesystem:

=================
 Ceph Filesystem
=================

The Ceph Filesystem (CephFS) is a POSIX-compliant filesystem that uses
a Ceph Storage Cluster to store its data. It uses the same Ceph Storage
Cluster that backs Ceph Block Devices, Ceph Object Storage (with its S3
and Swift APIs), and the native bindings (librados).

.. note:: If you are evaluating CephFS for the first time, please review
          the best practices for deployment: :doc:`/cephfs/best-practices`

.. ditaa::
            +-----------------------+  +------------------------+
            |                       |  |      CephFS FUSE       |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |  CephFS Kernel Object |  |     CephFS Library     |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |                       |  |        librados        |
            +-----------------------+  +------------------------+

            +---------------+ +---------------+ +---------------+
            |      OSDs     | |      MDSs     | |   Monitors    |
            +---------------+ +---------------+ +---------------+

Using CephFS
============

Using the Ceph Filesystem requires at least one :term:`Ceph Metadata
Server` in your Ceph Storage Cluster.
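
For example, you can check whether a metadata server is already present
with the ``ceph`` CLI (the output shown is representative of a healthy
single-MDS cluster, not exact)::

    ceph mds stat
    # e.g.: cephfs:1 {0=a=up:active}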


.. raw:: html

    <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
    <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Metadata Server</h3>

To run the Ceph Filesystem, you must have a running Ceph Storage Cluster
with at least one :term:`Ceph Metadata Server`.
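
As a minimal sketch, on a Nautilus-era cluster administered with
``ceph-deploy``, a metadata server can be provisioned on a host (the
hostname ``mds-host-1`` below is a placeholder)::

    ceph-deploy mds create mds-host-1

Clusters deployed by other means can start a ``ceph-mds`` daemon
directly; see the pages below for details.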

.. toctree::
    :maxdepth: 1

    Provision/Add/Remove MDS(s) <add-remove-mds>
    MDS failover and standby configuration <standby>
    MDS Configuration Settings <mds-config-ref>
    Client Configuration Settings <client-config-ref>
    Journaler Configuration <journaler>
    Manpage ceph-mds <../../man/8/ceph-mds>

.. raw:: html

    </td><td><h3>Step 2: Mount CephFS</h3>

Once you have a healthy Ceph Storage Cluster with at least one Ceph
Metadata Server, you may create and mount your Ceph Filesystem. Ensure
that your client has network connectivity and the proper authentication
keyring.
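
As a minimal sketch (the pool names, placement-group counts, monitor
address ``192.168.0.1`` and secret-file path below are placeholders), a
filesystem can be created and then mounted with the kernel driver::

    # Create the data and metadata pools, then the filesystem itself
    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data

    # Mount it on a client using the kernel driver
    sudo mkdir /mnt/mycephfs
    sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

``ceph-fuse`` provides the equivalent userspace mount; the pages below
cover each method in detail.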

.. toctree::
    :maxdepth: 1

    Create a CephFS file system <createfs>
    Mount CephFS <kernel>
    Mount CephFS as FUSE <fuse>
    Mount CephFS in fstab <fstab>
    Use the CephFS Shell <cephfs-shell>
    Supported Features of Kernel Driver <kernel-features>
    Manpage ceph-fuse <../../man/8/ceph-fuse>
    Manpage mount.ceph <../../man/8/mount.ceph>
    Manpage mount.fuse.ceph <../../man/8/mount.fuse.ceph>


.. raw:: html

    </td><td><h3>Additional Details</h3>

.. toctree::
    :maxdepth: 1

    Deployment best practices <best-practices>
    MDS States <mds-states>
    Administrative commands <administration>
    Understanding MDS Cache Size Limits <cache-size-limits>
    POSIX compatibility <posix>
    Experimental Features <experimental-features>
    CephFS Quotas <quota>
    Using Ceph with Hadoop <hadoop>
    cephfs-journal-tool <cephfs-journal-tool>
    File layouts <file-layouts>
    Client eviction <eviction>
    Handling full filesystems <full>
    Health messages <health-messages>
    Troubleshooting <troubleshooting>
    Disaster recovery <disaster-recovery>
    Client authentication <client-auth>
    Upgrading old filesystems <upgrading>
    Configuring directory fragmentation <dirfrags>
    Configuring multiple active MDS daemons <multimds>
    Export over NFS <nfs>
    Application best practices <app-best-practices>
    Scrub <scrub>
    LazyIO <lazyio>
    FS volume and subvolumes <fs-volumes>

.. toctree::
    :hidden:

    Advanced: Metadata repair <disaster-recovery-experts>

.. raw:: html

    </td></tr></tbody></table>

For developers
==============

.. toctree::
    :maxdepth: 1

    Client's Capabilities <capabilities>
    libcephfs <../../api/libcephfs-java/>
    Mantle <mantle>