===============
 Intro to Ceph
===============

Ceph can be used to provide :term:`Ceph Object Storage` to :term:`Cloud
Platforms`, to provide :term:`Ceph Block Device` services to :term:`Cloud
Platforms`, and to deploy a :term:`Ceph File System`. All :term:`Ceph Storage
Cluster` deployments begin with setting up each :term:`Ceph Node` and then
setting up the network.

A Ceph Storage Cluster requires at least one Ceph Monitor, at least one Ceph
Manager, and at least as many Ceph OSDs as there are copies of an object
stored on the Ceph cluster (for example, if three copies of a given object
are stored on the Ceph cluster, then at least three OSDs must exist in that
Ceph cluster).

The Ceph Metadata Server is necessary to run Ceph File System clients.

.. note::

   It is a best practice to have a Ceph Manager for each Monitor, but it is not
   necessary.
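
A quick way to check whether a running cluster meets these minimums is to ask
Ceph for its overall status (a minimal illustration, assuming the ``ceph`` CLI
is installed and can reach the cluster; the exact output varies by release and
cluster):

.. code-block:: console

   $ ceph -s        # summary: monitor quorum, managers, OSDs, and data health
   $ ceph health    # just the HEALTH_OK / HEALTH_WARN / HEALTH_ERR verdict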

.. ditaa::

            +---------------+ +------------+ +------------+ +---------------+
            |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
            +---------------+ +------------+ +------------+ +---------------+

- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
  of the cluster state, including the monitor map, manager map, the
  OSD map, the MDS map, and the CRUSH map. These maps are critical
  cluster state required for Ceph daemons to coordinate with each other.
  Monitors are also responsible for managing authentication between
  daemons and clients. At least three monitors are normally required
  for redundancy and high availability.

- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
  responsible for keeping track of runtime metrics and the current
  state of the Ceph cluster, including storage utilization, current
  performance metrics, and system load. The Ceph Manager daemons also
  host Python-based modules to manage and expose Ceph cluster
  information, including a web-based :ref:`mgr-dashboard` and
  `REST API`_. At least two managers are normally required for high
  availability.

- **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`,
  ``ceph-osd``) stores data, handles data replication, recovery,
  rebalancing, and provides some monitoring information to Ceph
  Monitors and Managers by checking other Ceph OSD Daemons for a
  heartbeat. At least three Ceph OSDs are normally required for
  redundancy and high availability.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
  metadata on behalf of the :term:`Ceph File System` (i.e., Ceph Block
  Devices and Ceph Object Storage do not use MDS). Ceph Metadata
  Servers allow POSIX file system users to execute basic commands (like
  ``ls``, ``find``, etc.) without placing an enormous burden on the
  Ceph Storage Cluster.

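Each of these daemon types can be queried individually on a running cluster.
The commands below are a minimal sketch rather than a complete reference; a
cluster without a Ceph File System has no MDS to report:

.. code-block:: console

   $ ceph mon stat    # monitor quorum membership
   $ ceph mgr stat    # active and standby managers
   $ ceph osd stat    # how many OSDs are up and in
   $ ceph mds stat    # metadata server state (only meaningful with CephFS)
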
Ceph stores data as objects within logical storage pools. Using the
:term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
contain the object, and which OSD should store the placement group. The
CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
recover dynamically.
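
You can observe this mapping directly on a running cluster. As a minimal
illustration (``mypool`` and ``myobject`` are placeholder names, not anything
defined by this documentation), the ``ceph osd map`` command reports the
placement group and the OSDs that CRUSH selects for a given object:

.. code-block:: console

   $ ceph osd map mypool myobject    # prints the pool, PG id, and acting OSD set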

.. _REST API: ../../mgr/restful

.. container:: columns-2

   .. container:: column

      .. raw:: html

          <h3>Recommendations</h3>

      To begin using Ceph in production, you should review our hardware
      recommendations and operating system recommendations.

      .. toctree::
         :maxdepth: 2

         Hardware Recommendations <hardware-recommendations>
         OS Recommendations <os-recommendations>

   .. container:: column

      .. raw:: html

          <h3>Get Involved</h3>

      You can get help, contribute documentation or source code, and report
      bugs by getting involved in the Ceph community.

      .. toctree::
         :maxdepth: 2

         get-involved
         documenting-ceph