.. _ceph-filesystem:

=================
 Ceph Filesystem
=================

The Ceph Filesystem (CephFS) is a POSIX-compliant filesystem that uses
a Ceph Storage Cluster to store its data. The Ceph filesystem uses the same
Ceph Storage Cluster as Ceph Block Devices, Ceph Object Storage with its S3
and Swift APIs, and the native bindings (librados).

.. note:: If you are evaluating CephFS for the first time, please review
          the best practices for deployment: :doc:`/cephfs/best-practices`

.. ditaa::

    +-----------------------+  +------------------------+
    |                       |  |      CephFS FUSE       |
    |                       |  +------------------------+
    |                       |
    |                       |  +------------------------+
    | CephFS Kernel Object  |  |     CephFS Library     |
    |                       |  +------------------------+
    |                       |
    |                       |  +------------------------+
    |                       |  |        librados        |
    +-----------------------+  +------------------------+

    +---------------+  +---------------+  +---------------+
    |      OSDs     |  |      MDSs     |  |    Monitors   |
    +---------------+  +---------------+  +---------------+

Using CephFS
============

Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server` in
your Ceph Storage Cluster.

.. raw:: html

    <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
    <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Metadata Server</h3>

To run the Ceph Filesystem, you must have a running Ceph Storage Cluster with
at least one :term:`Ceph Metadata Server` running.


.. toctree::
    :maxdepth: 1

    Provision/Add/Remove MDS(s) <add-remove-mds>
    MDS failover and standby configuration <standby>
    MDS Configuration Settings <mds-config-ref>
    Client Configuration Settings <client-config-ref>
    Journaler Configuration <journaler>
    Manpage ceph-mds <../../man/8/ceph-mds>
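
Once an MDS has been provisioned, a quick sanity check is to confirm that the
monitors can see it. This is a sketch using standard ``ceph`` CLI commands
against a live cluster; the exact output format varies by release::

    # Summarize MDS daemons known to the cluster and their states
    ceph mds stat

    # Show each CephFS file system with its active and standby MDS daemons
    ceph fs status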

.. raw:: html

    </td><td><h3>Step 2: Mount CephFS</h3>

Once you have a healthy Ceph Storage Cluster with at least
one Ceph Metadata Server, you may create and mount your Ceph Filesystem.
Ensure that your client has network connectivity and the proper
authentication keyring.

.. toctree::
    :maxdepth: 1

    Create a CephFS file system <createfs>
    Mount CephFS <kernel>
    Mount CephFS as FUSE <fuse>
    Mount CephFS in fstab <fstab>
    Use the CephFS Shell <cephfs-shell>
    Supported Features of Kernel Driver <kernel-features>
    Manpage ceph-fuse <../../man/8/ceph-fuse>
    Manpage mount.ceph <../../man/8/mount.ceph>
    Manpage mount.fuse.ceph <../../man/8/mount.fuse.ceph>
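
The create-and-mount flow covered by the pages above can be sketched as
follows. The pool names, the file system name ``cephfs``, the ``admin`` user,
and the mount point ``/mnt/cephfs`` are illustrative; the required mount
options depend on your release and keyring setup::

    # Create the data and metadata pools, then the file system itself
    ceph osd pool create cephfs_data
    ceph osd pool create cephfs_metadata
    ceph fs new cephfs cephfs_metadata cephfs_data

    # Mount with the kernel client ...
    sudo mount -t ceph :/ /mnt/cephfs -o name=admin

    # ... or with the FUSE client
    sudo ceph-fuse /mnt/cephfs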


.. raw:: html

    </td><td><h3>Additional Details</h3>

.. toctree::
    :maxdepth: 1

    Deployment best practices <best-practices>
    MDS States <mds-states>
    Administrative commands <administration>
    Understanding MDS Cache Size Limits <cache-size-limits>
    POSIX compatibility <posix>
    Experimental Features <experimental-features>
    CephFS Quotas <quota>
    Using Ceph with Hadoop <hadoop>
    cephfs-journal-tool <cephfs-journal-tool>
    File layouts <file-layouts>
    Client eviction <eviction>
    Handling full filesystems <full>
    Health messages <health-messages>
    Troubleshooting <troubleshooting>
    Disaster recovery <disaster-recovery>
    Client authentication <client-auth>
    Upgrading old filesystems <upgrading>
    Configuring directory fragmentation <dirfrags>
    Configuring multiple active MDS daemons <multimds>
    Export over NFS <nfs>
    Application best practices <app-best-practices>
    Scrub <scrub>
    LazyIO <lazyio>
    FS volume and subvolumes <fs-volumes>

.. toctree::
    :hidden:

    Advanced: Metadata repair <disaster-recovery-experts>

.. raw:: html

    </td></tr></tbody></table>

For developers
==============

.. toctree::
    :maxdepth: 1

    Client's Capabilities <capabilities>
    libcephfs <../../api/libcephfs-java/>
    Mantle <mantle>