===============
 Intro to Ceph
===============

Whether you want to provide :term:`Ceph Object Storage` and/or
:term:`Ceph Block Device` services to :term:`Cloud Platforms`, deploy
a :term:`Ceph Filesystem` or use Ceph for another purpose, all
:term:`Ceph Storage Cluster` deployments begin with setting up each
:term:`Ceph Node`, your network, and the Ceph Storage Cluster. A Ceph
Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and
Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also
required when running Ceph Filesystem clients.

.. ditaa::  +---------------+ +------------+ +------------+ +---------------+
            |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
            +---------------+ +------------+ +------------+ +---------------+

- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
  of the cluster state, including the monitor map, manager map, the
  OSD map, and the CRUSH map. These maps are critical cluster state
  required for Ceph daemons to coordinate with each other. Monitors
  are also responsible for managing authentication between daemons and
  clients. At least three monitors are normally required for
  redundancy and high availability. (A minimal client sketch that
  exercises this connection path is shown after this list.)

- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
  responsible for keeping track of runtime metrics and the current
  state of the Ceph cluster, including storage utilization, current
  performance metrics, and system load. The Ceph Manager daemons also
  host python-based modules to manage and expose Ceph cluster
  information, including a web-based :ref:`mgr-dashboard` and
  `REST API`_. At least two managers are normally required for high
  availability.

- **Ceph OSDs**: A :term:`Ceph OSD` (object storage daemon,
  ``ceph-osd``) stores data, handles data replication, recovery, and
  rebalancing, and provides some monitoring information to Ceph
  Monitors and Managers by checking other Ceph OSD Daemons for a
  heartbeat. At least 3 Ceph OSDs are normally required for redundancy
  and high availability.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
  metadata on behalf of the :term:`Ceph Filesystem` (i.e., Ceph Block
  Devices and Ceph Object Storage do not use MDS). Ceph Metadata
  Servers allow POSIX file system users to execute basic commands (like
  ``ls``, ``find``, etc.) without placing an enormous burden on the
  Ceph Storage Cluster.

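Below is a minimal client sketch using the Python ``rados`` bindings that
ship with Ceph. It is illustrative only: the ``/etc/ceph/ceph.conf`` path
and the presence of a usable client keyring are assumptions of this example,
not requirements of the API. Connecting contacts the monitors, performs
authentication, and retrieves the cluster maps described above.

.. code-block:: python

    import json

    import rados

    # Connecting reaches a monitor from the configured mon hosts, performs
    # cephx authentication, and pulls the current cluster maps.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path is an example
    cluster.connect()
    try:
        print("cluster fsid:", cluster.get_fsid())

        # Overall utilization, as tracked by the cluster.
        stats = cluster.get_cluster_stats()
        print("kB used: %d of %d" % (stats['kb_used'], stats['kb']))

        # Ask the monitors for cluster status (roughly `ceph status`).
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "status", "format": "json"}), b'')
        if ret == 0:
            print("health:", json.loads(out).get("health", {}).get("status"))
    finally:
        cluster.shutdown()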

Ceph stores data as objects within logical storage pools. Using the
:term:`CRUSH` algorithm, Ceph calculates which placement group should
contain the object, and further calculates which Ceph OSD Daemon
should store the placement group. The CRUSH algorithm enables the
Ceph Storage Cluster to scale, rebalance, and recover dynamically.
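
To make that two-step mapping concrete, here is a deliberately simplified
sketch. Real Ceph uses its rjenkins-based object hash and the CRUSH map
(device hierarchy, weights, and placement rules), not a CRC and a modulo
over a flat OSD list; the ``pg_num`` and OSD ids below are hypothetical.

.. code-block:: python

    import zlib

    def object_to_pg(object_name, pg_num):
        """Hash the object name and fold it into one of the pool's PGs.

        Stand-in for Ceph's object hash; illustrative only.
        """
        return zlib.crc32(object_name.encode()) % pg_num

    def pg_to_osds(pg_id, osd_ids, replicas=3):
        """Stand-in for CRUSH: choose `replicas` distinct OSDs for a PG."""
        start = pg_id % len(osd_ids)
        return [osd_ids[(start + i) % len(osd_ids)] for i in range(replicas)]

    # Hypothetical pool with 128 PGs and six OSDs.
    pg = object_to_pg("my-object", pg_num=128)
    acting_set = pg_to_osds(pg, osd_ids=[0, 1, 2, 3, 4, 5])
    print("object 'my-object' -> pg %d -> osds %s" % (pg, acting_set))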

.. _REST API: ../../mgr/restful

.. raw:: html

    <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
    <table cellpadding="10"><colgroup><col width="50%"><col width="50%"></colgroup><tbody valign="top"><tr><td><h3>Recommendations</h3>

To begin using Ceph in production, you should review our hardware
recommendations and operating system recommendations.

.. toctree::
   :maxdepth: 2

   Hardware Recommendations <hardware-recommendations>
   OS Recommendations <os-recommendations>


.. raw:: html

    </td><td><h3>Get Involved</h3>

You can get help, contribute documentation or source code, and report
bugs by getting involved in the Ceph community.

.. toctree::
   :maxdepth: 2

   get-involved
   documenting-ceph

.. raw:: html

    </td></tr></tbody></table>