.. _rados-operations:

====================
 Cluster Operations
====================

.. raw:: html

   <table><colgroup><col width="50%"><col width="50%"></colgroup><tbody valign="top"><tr><td><h3>High-level Operations</h3>

High-level cluster operations consist primarily of starting, stopping, and
restarting a cluster with the ``ceph`` service; checking the cluster's health;
and monitoring an operating cluster.
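
As a minimal sketch (assuming a systemd-based host with an admin keyring;
``ceph.target`` is the systemd target that groups all Ceph daemons on a
host), checking health and watching the cluster might look like this:

.. code-block:: console

   $ ceph status         # one-shot summary of health, monitors, OSDs, and PGs
   $ ceph health detail  # expanded explanation of any health warnings
   $ ceph -w             # stream cluster events as they occur
   $ sudo systemctl restart ceph.target  # restart all Ceph daemons on this host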

.. toctree::
   :maxdepth: 1

   operating
   health-checks
   monitoring
   monitoring-osd-pg
   user-management
   pg-repair
   pgcalc/index

.. raw:: html

   </td><td><h3>Data Placement</h3>

Once you have your cluster up and running, you may begin working with data
placement. Ceph supports petabyte-scale data storage clusters, with storage
pools and placement groups that distribute data across the cluster using
Ceph's CRUSH algorithm.
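
As an illustrative sketch only (the pool name ``mypool`` and the PG count
below are placeholders, not recommendations), creating a replicated pool and
inspecting placement might look like this:

.. code-block:: console

   $ ceph osd pool create mypool 64  # create a pool with 64 placement groups
   $ ceph osd pool ls detail         # list pools with replication and PG settings
   $ ceph pg stat                    # summary of placement-group states
   $ ceph osd crush tree             # show the CRUSH hierarchy used for placement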

.. toctree::
   :maxdepth: 1

   data-placement
   pools
   erasure-code
   cache-tiering
   placement-groups
   upmap
   read-balancer
   balancer
   crush-map
   crush-map-edits
   stretch-mode
   change-mon-elections

.. raw:: html

   </td></tr><tr><td><h3>Low-level Operations</h3>

Low-level cluster operations consist of starting, stopping, and restarting a
particular daemon within a cluster; changing the settings of a particular
daemon or subsystem; and adding a daemon to the cluster or removing a daemon
from the cluster. The most common use cases for low-level operations include
growing or shrinking the Ceph cluster and replacing legacy or failed hardware
with new hardware.
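
For example (assuming a traditional package-based deployment where each OSD
runs as a ``ceph-osd@<id>`` systemd unit; containerized deployments name
their units differently), restarting a single daemon and changing one of its
settings might look like this:

.. code-block:: console

   $ sudo systemctl restart ceph-osd@1      # restart only OSD 1 on this host
   $ ceph config set osd.1 debug_osd 10     # persistently change one daemon's setting
   $ ceph tell osd.1 config get debug_osd   # confirm the running daemon sees the change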

.. toctree::
   :maxdepth: 1

   add-or-rm-osds
   add-or-rm-mons
   devices
   bluestore-migration
   Command Reference <control>

.. raw:: html

   </td><td><h3>Troubleshooting</h3>

Ceph is still on the leading edge, so you may encounter situations that
require you to evaluate your Ceph configuration and adjust your logging and
debugging settings in order to identify and remedy issues with your cluster.
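
As a hedged illustration (``osd.0`` is a placeholder; substitute the daemon
you are investigating), raising a debug level at runtime and then turning it
back down might look like this:

.. code-block:: console

   $ ceph tell osd.0 config set debug_osd 20/20  # temporarily raise OSD 0's log verbosity
   $ ceph tell osd.0 config set debug_osd 1/5    # turn verbosity back down afterwards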

.. toctree::
   :maxdepth: 1

   ../troubleshooting/community
   ../troubleshooting/troubleshooting-mon
   ../troubleshooting/troubleshooting-osd
   ../troubleshooting/troubleshooting-pg
   ../troubleshooting/log-and-debug
   ../troubleshooting/cpu-profiling
   ../troubleshooting/memory-profiling


.. raw:: html

   </td></tr></tbody></table>