===============
 Intro to Ceph
===============

Ceph can be used to provide :term:`Ceph Object Storage` to :term:`Cloud
Platforms`, to provide :term:`Ceph Block Device` services to :term:`Cloud
Platforms`, and to deploy a :term:`Ceph File System`. All :term:`Ceph Storage
Cluster` deployments begin with setting up each :term:`Ceph Node` and then
setting up the network.

A Ceph Storage Cluster requires the following: at least one Ceph Monitor, at
least one Ceph Manager, and at least as many Ceph OSDs as there are copies of
an object stored on the Ceph cluster (for example, if three copies of a given
object are stored on the Ceph cluster, then at least three OSDs must exist in
that Ceph cluster).
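
For example, assuming the ``rados`` Python bindings and a cluster reachable
through ``/etc/ceph/ceph.conf``, a sketch like the one below can compare a
pool's replica count (``size``) against the cluster's OSD count. The pool
name ``mypool`` is hypothetical, and the exact JSON fields returned may vary
by release:

.. code-block:: python

   import json

   import rados  # Ceph's Python binding for librados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   try:
       # "ceph osd pool get <pool> size" reports the pool's replica count.
       _, out, _ = cluster.mon_command(
           json.dumps({'prefix': 'osd pool get', 'pool': 'mypool',
                       'var': 'size', 'format': 'json'}), b'')
       replicas = json.loads(out)['size']

       # "ceph osd stat" reports, among other things, how many OSDs exist.
       _, out, _ = cluster.mon_command(
           json.dumps({'prefix': 'osd stat', 'format': 'json'}), b'')
       num_osds = json.loads(out)['num_osds']

       assert num_osds >= replicas, "not enough OSDs for this pool's size"
   finally:
       cluster.shutdown()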

The Ceph Metadata Server is necessary to run Ceph File System clients.

.. note::

   It is a best practice to have a Ceph Manager for each Monitor, but it is
   not necessary.

.. ditaa::

   +---------------+ +------------+ +------------+ +---------------+
   |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
   +---------------+ +------------+ +------------+ +---------------+

- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
  of the cluster state, including the monitor map, manager map, the
  OSD map, the MDS map, and the CRUSH map. These maps are critical
  cluster state required for Ceph daemons to coordinate with each other.
  Monitors are also responsible for managing authentication between
  daemons and clients. At least three monitors are normally required
  for redundancy and high availability.

- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
  responsible for keeping track of runtime metrics and the current
  state of the Ceph cluster, including storage utilization, current
  performance metrics, and system load. The Ceph Manager daemons also
  host Python-based modules to manage and expose Ceph cluster
  information, including a web-based :ref:`mgr-dashboard` and
  `REST API`_ (a minimal module sketch follows this list). At least
  two managers are normally required for high availability.

- **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`,
  ``ceph-osd``) stores data, handles data replication, recovery, and
  rebalancing, and provides some monitoring information to Ceph
  Monitors and Managers by checking other Ceph OSD Daemons for a
  heartbeat. At least three Ceph OSDs are normally required for
  redundancy and high availability.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
  metadata on behalf of the :term:`Ceph File System` (Ceph Block
  Devices and Ceph Object Storage do not use MDS). Ceph Metadata
  Servers allow POSIX file system users to execute basic commands (like
  ``ls`` and ``find``) without placing an enormous burden on the
  Ceph Storage Cluster.
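
Because Manager modules are ordinary Python classes loaded by ``ceph-mgr``,
a minimal one is short. The sketch below is illustrative only: the module
name ``hello`` and its log message are hypothetical, while ``MgrModule``,
``self.get()``, and ``self.log`` follow the documented mgr module interface:

.. code-block:: python

   # hello/module.py -- a hypothetical, minimal ceph-mgr module
   from mgr_module import MgrModule


   class Hello(MgrModule):
       """Log the cluster's OSD count at startup (illustration only)."""

       def serve(self):
           # self.get() exposes cluster state, such as the OSD map, to
           # manager modules.
           osd_map = self.get('osd_map')
           self.log.info("cluster has %d OSDs", len(osd_map['osds']))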

Ceph stores data as objects within logical storage pools. Using the
:term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
contain the object, and which OSD should store the placement group. The
CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
recover dynamically.
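
To make the first step concrete: an object is assigned to a PG by hashing
its name and taking the result modulo the pool's PG count, and CRUSH then
maps that PG to OSDs. The sketch below illustrates only the hashing step;
Ceph uses its own stable hash (rjenkins), for which ``zlib.crc32`` stands
in here:

.. code-block:: python

   import zlib


   def object_to_pg(object_name: str, pg_num: int) -> int:
       """Illustrative object->PG mapping: hash the name, mod the PG count."""
       # Ceph's real placement uses the rjenkins hash and a "stable mod";
       # crc32 is a stand-in to show the shape of the calculation.
       return zlib.crc32(object_name.encode()) % pg_num


   # With 128 PGs in a pool, every object name lands deterministically in
   # one of PGs 0..127; CRUSH then maps that PG to a set of OSDs.
   print(object_to_pg("my-object", 128))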

.. _REST API: ../../mgr/restful

.. container:: columns-2

   .. container:: column

      .. raw:: html

         <h3>Recommendations</h3>

      To begin using Ceph in production, you should review our hardware
      recommendations and operating system recommendations.

      .. toctree::
         :maxdepth: 2

         Hardware Recommendations <hardware-recommendations>
         OS Recommendations <os-recommendations>

   .. container:: column

      .. raw:: html

         <h3>Get Involved</h3>

      You can get help, or contribute documentation, source code, or bug
      reports, by getting involved in the Ceph community.

      .. toctree::
         :maxdepth: 2

         get-involved
         documenting-ceph