===============
 Intro to Ceph
===============

Whether you want to provide :term:`Ceph Object Storage` and/or
:term:`Ceph Block Device` services to :term:`Cloud Platforms`, deploy
a :term:`Ceph File System`, or use Ceph for another purpose, all
:term:`Ceph Storage Cluster` deployments begin with setting up each
:term:`Ceph Node`, your network, and the Ceph Storage Cluster. A Ceph
Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and
Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also
required when running Ceph File System clients.

.. ditaa::

            +---------------+ +------------+ +------------+ +---------------+
            |     OSDs      | |  Monitors  | |  Managers  | |      MDSs     |
            +---------------+ +------------+ +------------+ +---------------+

- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
  of the cluster state, including the monitor map, manager map, the
  OSD map, the MDS map, and the CRUSH map. These maps are critical
  cluster state required for Ceph daemons to coordinate with each
  other. Monitors are also responsible for managing authentication
  between daemons and clients. At least three monitors are normally
  required for redundancy and high availability.

- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
  responsible for keeping track of runtime metrics and the current
  state of the Ceph cluster, including storage utilization, current
  performance metrics, and system load. The Ceph Manager daemons also
  host Python-based modules to manage and expose Ceph cluster
  information, including a web-based :ref:`mgr-dashboard` and
  `REST API`_. At least two managers are normally required for high
  availability.

- **Ceph OSDs**: A :term:`Ceph OSD` (object storage daemon,
  ``ceph-osd``) stores data and handles data replication, recovery,
  and rebalancing. It also provides some monitoring information to
  Ceph Monitors and Managers by checking other Ceph OSD Daemons for a
  heartbeat. At least three Ceph OSDs are normally required for
  redundancy and high availability.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
  metadata on behalf of the :term:`Ceph File System` (i.e., Ceph Block
  Devices and Ceph Object Storage do not use MDS). Ceph Metadata
  Servers allow POSIX file system users to execute basic commands
  (like ``ls``, ``find``, etc.) without placing an enormous burden on
  the Ceph Storage Cluster.

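All of these daemons report into the cluster maps held by the monitors,
and any client with valid credentials can query that state. The sketch
below is illustrative only: it assumes a reachable cluster, a readable
``/etc/ceph/ceph.conf`` with a client keyring, and the ``rados`` Python
binding. It asks the monitors for the same summary that ``ceph -s``
prints, covering monitors, managers, OSDs, and any MDS daemons; the
exact JSON field names can vary between releases.

.. code-block:: python

   import json

   import rados

   # Connect using the local ceph.conf and default client credentials
   # (these paths and credentials are assumptions, not requirements).
   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   try:
       # 'status' is the same monitor command that backs `ceph -s`.
       ret, outbuf, errs = cluster.mon_command(
           json.dumps({'prefix': 'status', 'format': 'json'}), b'')
       if ret != 0:
           raise RuntimeError(errs)
       status = json.loads(outbuf)
       # Field names below match recent releases; print(status) shows
       # exactly what your version returns.
       print('health:        ', status['health']['status'])
       print('mons in quorum:', ', '.join(status['quorum_names']))
       print('mgr available: ', status['mgrmap'].get('available'))
       print('osds up/in:    ', status['osdmap'].get('num_up_osds'),
             '/', status['osdmap'].get('num_in_osds'))
   finally:
       cluster.shutdown()

The same summary is available on the command line as ``ceph -s`` or
``ceph status``.
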
Ceph stores data as objects within logical storage pools. Using the
:term:`CRUSH` algorithm, Ceph calculates which placement group should
contain the object, and further calculates which Ceph OSD Daemon
should store the placement group. The CRUSH algorithm enables the
Ceph Storage Cluster to scale, rebalance, and recover dynamically.

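You can watch this calculation for a specific object name: the
``ceph osd map`` command reports which placement group an object name
maps to and which OSDs CRUSH selects for that placement group. The
sketch below issues the same command through the ``rados`` Python
binding; the pool name ``mypool`` and object name ``hello`` are
placeholders, the pool must already exist, and the same configuration
and keyring assumptions as above apply.

.. code-block:: python

   import json

   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   try:
       # Equivalent to `ceph osd map mypool hello` on the command line;
       # 'mypool' and 'hello' are placeholders for an existing pool and
       # an arbitrary object name.
       ret, outbuf, errs = cluster.mon_command(
           json.dumps({'prefix': 'osd map',
                       'pool': 'mypool',
                       'object': 'hello',
                       'format': 'json'}), b'')
       if ret != 0:
           raise RuntimeError(errs)
       mapping = json.loads(outbuf)
       # The reply names the placement group and the OSDs CRUSH selected
       # for it (field names may differ slightly between releases).
       print('placement group:', mapping.get('pgid'))
       print('up set:         ', mapping.get('up'))
       print('acting set:     ', mapping.get('acting'))
   finally:
       cluster.shutdown()

Nothing needs to be written for this query: the placement is computed
from the object name, the pool's parameters, and the current CRUSH
map, which is why clients can locate data without consulting a central
lookup table.
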
.. _REST API: ../../mgr/restful

.. container:: columns-2

   .. container:: column

      .. raw:: html

          <h3>Recommendations</h3>

      To begin using Ceph in production, you should review our hardware
      recommendations and operating system recommendations.

      .. toctree::
         :maxdepth: 2

         Hardware Recommendations <hardware-recommendations>
         OS Recommendations <os-recommendations>

   .. container:: column

      .. raw:: html

          <h3>Get Involved</h3>

      You can get help, contribute documentation or source code, and
      report bugs by getting involved in the Ceph community.

      .. toctree::
         :maxdepth: 2

         get-involved
         documenting-ceph