===============
 Intro to Ceph
===============

Whether you want to provide :term:`Ceph Object Storage` and/or
:term:`Ceph Block Device` services to :term:`Cloud Platforms`, deploy
a :term:`Ceph Filesystem` or use Ceph for another purpose, all
:term:`Ceph Storage Cluster` deployments begin with setting up each
:term:`Ceph Node`, your network, and the Ceph Storage Cluster. A Ceph
Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and
Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also
required when running Ceph Filesystem clients.

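A cluster of this shape is typically described to its daemons and clients by a minimal ``ceph.conf`` that says where the monitors live; everything else about the cluster map is then learned from the monitors themselves. A hypothetical sketch (the ``fsid``, hostnames, and addresses below are placeholders, not values from any real deployment):

```ini
[global]
# Unique cluster identifier -- placeholder value
fsid = 00000000-0000-0000-0000-000000000000
# Placeholder monitor hostnames and addresses
mon_initial_members = node1, node2, node3
mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3
```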
.. ditaa::  +---------------+ +------------+ +------------+ +---------------+
            |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
            +---------------+ +------------+ +------------+ +---------------+

- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
  of the cluster state, including the monitor map, manager map, OSD
  map, and CRUSH map. These maps are critical cluster state required
  for Ceph daemons to coordinate with each other. Monitors are also
  responsible for managing authentication between daemons and
  clients. At least three monitors are normally required for
  redundancy and high availability.

- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
  responsible for keeping track of runtime metrics and the current
  state of the Ceph cluster, including storage utilization, current
  performance metrics, and system load. The Ceph Manager daemons also
  host Python-based plugins to manage and expose Ceph cluster
  information, including a web-based `dashboard`_ and `REST API`_. At
  least two managers are normally required for high availability.

- **Ceph OSDs**: A :term:`Ceph OSD` (object storage daemon,
  ``ceph-osd``) stores data, handles data replication, recovery, and
  rebalancing, and provides some monitoring information to Ceph
  Monitors and Managers by checking other Ceph OSD Daemons for a
  heartbeat. At least three Ceph OSDs are normally required for
  redundancy and high availability.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
  metadata on behalf of the :term:`Ceph Filesystem` (i.e., Ceph Block
  Devices and Ceph Object Storage do not use MDS). Ceph Metadata
  Servers allow POSIX file system users to execute basic commands
  (like ``ls``, ``find``, etc.) without placing an enormous burden on
  the Ceph Storage Cluster.

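The "at least three monitors" guidance above follows from how the monitor cluster reaches agreement: monitors can only operate while a strict majority of them (a quorum) is reachable, so an even monitor count adds no extra failure tolerance. A small sketch of that arithmetic (the function names here are illustrative, not part of Ceph's API):

```python
def quorum_size(num_monitors: int) -> int:
    """Smallest strict majority of the monitor cluster."""
    return num_monitors // 2 + 1

def tolerated_failures(num_monitors: int) -> int:
    """Monitors that can fail while a quorum can still form."""
    return num_monitors - quorum_size(num_monitors)

# 1 or 2 monitors tolerate no failures; 3 tolerate one; 5 tolerate two.
for n in (1, 2, 3, 4, 5):
    print(n, quorum_size(n), tolerated_failures(n))
```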
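The OSD heartbeat mechanism described above amounts to "report peers that have been silent too long." A toy sketch, assuming a simple fixed grace period (the 20-second value, the OSD names, and the function name are illustrative choices for this example, not Ceph's actual settings or API):

```python
def unresponsive_peers(last_heartbeat: dict, now: float, grace: float = 20.0) -> list:
    """Return peers whose last heartbeat reply is older than `grace` seconds."""
    return sorted(osd for osd, t in last_heartbeat.items() if now - t > grace)

# osd.1 last answered 25 seconds ago, so it would be reported as down.
heartbeats = {"osd.0": 100.0, "osd.1": 85.0, "osd.2": 99.0}
print(unresponsive_peers(heartbeats, now=110.0))
```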
Ceph stores data as objects within logical storage pools. Using the
:term:`CRUSH` algorithm, Ceph calculates which placement group should
contain the object, and further calculates which Ceph OSD Daemon
should store the placement group. The CRUSH algorithm enables the
Ceph Storage Cluster to scale, rebalance, and recover dynamically.

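The object → placement group → OSD calculation can be caricatured with plain hashing. This flat, rendezvous-hash stand-in is not the real CRUSH algorithm (which walks a weighted hierarchy of buckets and failure domains), but it shows the key property: any client can *compute* an object's location rather than looking it up in a central table.

```python
import hashlib

def pg_for_object(name: str, pg_num: int) -> int:
    """Hash an object name to a placement group id in [0, pg_num)."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % pg_num

def osds_for_pg(pg: int, osds: list, replicas: int = 3) -> list:
    """Pick `replicas` distinct OSDs for a PG by highest-random-weight hashing."""
    scored = sorted(
        osds,
        key=lambda osd: hashlib.md5(f"{pg}:{osd}".encode()).hexdigest(),
        reverse=True,
    )
    return scored[:replicas]

# Any client running the same calculation gets the same placement.
osds = [f"osd.{i}" for i in range(6)]
pg = pg_for_object("my-object", pg_num=128)
print(pg, osds_for_pg(pg, osds))
```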
.. _dashboard: ../../mgr/dashboard
.. _REST API: ../../mgr/restful

.. raw:: html

    <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
    <table cellpadding="10"><colgroup><col width="50%"><col width="50%"></colgroup><tbody valign="top"><tr><td><h3>Recommendations</h3>

To begin using Ceph in production, you should review our hardware
recommendations and operating system recommendations.

.. toctree::
   :maxdepth: 2

   Hardware Recommendations <hardware-recommendations>
   OS Recommendations <os-recommendations>

.. raw:: html

    </td><td><h3>Get Involved</h3>

You can get help, or contribute documentation, source code, or bug
reports, by getting involved in the Ceph community.

.. toctree::
   :maxdepth: 2

   get-involved
   documenting-ceph

.. raw:: html

    </td></tr></tbody></table>