===============
 Intro to Ceph
===============

Whether you want to provide :term:`Ceph Object Storage` and/or :term:`Ceph Block
Device` services to :term:`Cloud Platforms`, deploy a :term:`Ceph Filesystem` or
use Ceph for another purpose, all :term:`Ceph Storage Cluster` deployments begin
with setting up each :term:`Ceph Node`, your network and the Ceph Storage
Cluster. A Ceph Storage Cluster requires at least one Ceph Monitor and at least
two Ceph OSD Daemons. The Ceph Metadata Server is essential when running Ceph
Filesystem clients.

.. ditaa::  +---------------+  +---------------+  +---------------+
            |      OSDs     |  |    Monitor    |  |      MDS      |
            +---------------+  +---------------+  +---------------+

- **Ceph OSDs**: A :term:`Ceph OSD Daemon` (Ceph OSD) stores data, handles data
  replication, recovery, backfilling, rebalancing, and provides some monitoring
  information to Ceph Monitors by checking other Ceph OSD Daemons for a
  heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to
  achieve an ``active + clean`` state when the cluster makes two copies of your
  data (Ceph makes 3 copies by default, but you can adjust it; see the sketch
  after this list).

- **Monitors**: A :term:`Ceph Monitor` maintains maps of the cluster state,
  including the monitor map, the OSD map, the Placement Group (PG) map, and the
  CRUSH map. Ceph maintains a history (called an "epoch") of each state change
  in the Ceph Monitors, Ceph OSD Daemons, and PGs.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS) stores metadata on behalf of
  the :term:`Ceph Filesystem` (i.e., Ceph Block Devices and Ceph Object Storage
  do not use MDS). Ceph Metadata Servers make it feasible for POSIX file system
  users to execute basic commands like ``ls``, ``find``, etc. without placing
  an enormous burden on the Ceph Storage Cluster.

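The cluster state that these daemons maintain can be queried from any client
that holds admin credentials. Below is a minimal sketch using the ``rados``
Python bindings that ship with Ceph; it assumes a reachable cluster, a standard
``/etc/ceph/ceph.conf``, an admin keyring, and a pool named ``rbd`` (the pool
name is only an example):

.. code-block:: python

   import json
   import rados

   # Connect using the default config file and the client.admin keyring
   # (paths and credentials are assumptions; adjust to your deployment).
   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()

   # Ask the monitors for the current cluster status. The monitors answer
   # from the maps they maintain (monitor, OSD, PG, and CRUSH maps).
   ret, outbuf, outs = cluster.mon_command(
       json.dumps({"prefix": "status"}), b'')
   print(outbuf.decode())

   # Adjust the replica count of the example pool "rbd" to 2; this is the
   # librados equivalent of `ceph osd pool set rbd size 2`.
   ret, outbuf, outs = cluster.mon_command(
       json.dumps({"prefix": "osd pool set", "pool": "rbd",
                   "var": "size", "val": "2"}), b'')
   print(outs or "pool size updated")

   cluster.shutdown()
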
Ceph stores a client's data as objects within storage pools. Using the CRUSH
algorithm, Ceph calculates which placement group should contain the object,
and further calculates which Ceph OSD Daemon should store the placement group.
The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
recover dynamically.

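As an illustration of that flow, the following sketch (again using the
``rados`` Python bindings, and assuming a configured client and an existing
pool named ``data``, which is only an example name) writes an object into a
pool and reads it back. Note that the client never names an OSD: CRUSH maps
the object to a placement group and the placement group to OSDs.

.. code-block:: python

   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()

   # Open an I/O context on a pool; "data" is a hypothetical pool name.
   ioctx = cluster.open_ioctx('data')
   try:
       # The client only supplies a pool and an object name; CRUSH computes
       # where the object's placement group (and its replicas) live.
       ioctx.write_full('hello-object', b'hello from librados')
       print(ioctx.read('hello-object'))
   finally:
       ioctx.close()
       cluster.shutdown()
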
.. raw:: html

   <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
   <table cellpadding="10"><colgroup><col width="50%"><col width="50%"></colgroup><tbody valign="top"><tr><td><h3>Recommendations</h3>

To begin using Ceph in production, you should review our hardware
recommendations and operating system recommendations.

.. toctree::
   :maxdepth: 2

   Hardware Recommendations <hardware-recommendations>
   OS Recommendations <os-recommendations>


.. raw:: html

   </td><td><h3>Get Involved</h3>

You can avail yourself of help or contribute documentation, source
code, or bug reports by getting involved in the Ceph community.

.. toctree::
   :maxdepth: 2

   get-involved
   documenting-ceph

.. raw:: html

   </td></tr></tbody></table>