===========
MON Service
===========

.. _deploy_additional_monitors:

Deploying additional monitors
=============================

A typical Ceph cluster has three or five monitor daemons that are spread
across different hosts. We recommend deploying five monitors if there are
five or more nodes in your cluster.

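If you want cephadm to maintain a different number of monitors, you can,
for example, request a count-based placement (the count of three here is
illustrative):

.. prompt:: bash #

   ceph orch apply mon --placement=3
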
.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation

Ceph deploys monitor daemons automatically as the cluster grows, and Ceph
scales back monitor daemons automatically as the cluster shrinks. The
smooth execution of this automatic growing and shrinking depends upon
proper subnet configuration.

The cephadm bootstrap procedure assigns the first monitor daemon in the
cluster to a particular subnet. ``cephadm`` designates that subnet as the
default subnet of the cluster. New monitor daemons will be assigned by
default to that subnet unless cephadm is instructed to do otherwise.

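The subnet in question is the one that contains the IP address passed to
``cephadm bootstrap``. For example, a cluster bootstrapped with the
following command (the address is illustrative) starts with its first
monitor, and therefore its default subnet, in ``10.1.2.0/24``:

.. prompt:: bash #

   cephadm bootstrap --mon-ip 10.1.2.10
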
If all of the Ceph monitor daemons in your cluster are in the same subnet,
manual administration of the monitor daemons is not necessary. ``cephadm``
will automatically add up to five monitors to the subnet, as needed, as new
hosts are added to the cluster.

By default, cephadm deploys five monitor daemons on arbitrary hosts. See
:ref:`orchestrator-cli-placement-spec` for details of specifying the
placement of daemons.

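The same defaults can also be expressed as a service specification and
applied with ``ceph orch apply -i mon.yaml``. The following is a sketch;
the ``mon`` host label is an assumption and must first be applied to hosts
with ``ceph orch host label add <host> mon``:

.. code-block:: yaml

    service_type: mon
    placement:
      # limit placement to hosts carrying the (assumed) "mon" label
      label: mon
      count: 5
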
Designating a Particular Subnet for Monitors
--------------------------------------------

To designate a particular IP subnet for use by Ceph monitor daemons, use a
command of the following form, including the subnet's address in `CIDR`_
format (e.g., ``10.1.2.0/24``):

.. prompt:: bash #

   ceph config set mon public_network *<mon-cidr-network>*

For example:

.. prompt:: bash #

   ceph config set mon public_network 10.1.2.0/24

Cephadm deploys new monitor daemons only on hosts that have IP addresses in
the designated subnet.

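You can verify the configured value at any time:

.. prompt:: bash #

   ceph config get mon public_network
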
You can also specify two public networks by using a list of networks:

.. prompt:: bash #

   ceph config set mon public_network *<mon-cidr-network1>,<mon-cidr-network2>*

For example:

.. prompt:: bash #

   ceph config set mon public_network 10.1.2.0/24,192.168.0.0/24

Deploying Monitors on a Particular Network
------------------------------------------

You can explicitly specify the IP address or CIDR network for each monitor
and control where each monitor is placed. To disable automated monitor
deployment, run this command:

.. prompt:: bash #

   ceph orch apply mon --unmanaged

To deploy each additional monitor:

.. prompt:: bash #

   ceph orch daemon add mon *<host1:ip-or-network1>*

For example, to deploy a second monitor on ``newhost1`` using an IP
address ``10.1.2.123`` and a third monitor on ``newhost2`` in
network ``10.1.2.0/24``, run the following commands:

.. prompt:: bash #

   ceph orch apply mon --unmanaged
   ceph orch daemon add mon newhost1:10.1.2.123
   ceph orch daemon add mon newhost2:10.1.2.0/24

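One way to confirm that the new monitors are running and in quorum is to
check the daemon list and the cluster status:

.. prompt:: bash #

   ceph orch ps --daemon-type mon
   ceph status
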
Now, preview the automatic placement of daemons:

.. prompt:: bash #

   ceph orch apply mon --placement="newhost1,newhost2,newhost3" --dry-run

See :ref:`orchestrator-cli-placement-spec` for details of specifying
the placement of daemons.

Finally, apply this new placement by dropping ``--dry-run``:

.. prompt:: bash #

   ceph orch apply mon --placement="newhost1,newhost2,newhost3"

Moving Monitors to a Different Network
--------------------------------------

To move monitors to a new network, deploy new monitors on the new network
and subsequently remove the monitors from the old network. Modifying and
injecting the ``monmap`` manually is not advised.

First, disable the automated placement of daemons:

.. prompt:: bash #

   ceph orch apply mon --unmanaged

To deploy each additional monitor:

.. prompt:: bash #

   ceph orch daemon add mon *<newhost1:ip-or-network1>*

For example, to deploy a second monitor on ``newhost1`` using an IP
address ``10.1.2.123`` and a third monitor on ``newhost2`` in
network ``10.1.2.0/24``, run the following commands:

.. prompt:: bash #

   ceph orch apply mon --unmanaged
   ceph orch daemon add mon newhost1:10.1.2.123
   ceph orch daemon add mon newhost2:10.1.2.0/24

Subsequently remove monitors from the old network:

.. prompt:: bash #

   ceph orch daemon rm *mon.<oldhost1>*

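For example, to remove the monitor on a hypothetical host ``oldhost1``:

.. prompt:: bash #

   ceph orch daemon rm mon.oldhost1
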
Update the ``public_network``:

.. prompt:: bash #

   ceph config set mon public_network *<mon-cidr-network>*

For example:

.. prompt:: bash #

   ceph config set mon public_network 10.1.2.0/24

Now, preview the automatic placement of daemons:

.. prompt:: bash #

   ceph orch apply mon --placement="newhost1,newhost2,newhost3" --dry-run

See :ref:`orchestrator-cli-placement-spec` for details of specifying
the placement of daemons.

Finally, apply this new placement by dropping ``--dry-run``:

.. prompt:: bash #

   ceph orch apply mon --placement="newhost1,newhost2,newhost3"

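After the moves are complete, you can confirm that the monitors advertise
addresses on the new network:

.. prompt:: bash #

   ceph mon dump
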
Setting CRUSH Locations for Monitors
------------------------------------

Cephadm supports setting CRUSH locations for mon daemons using the mon
service spec. The CRUSH locations are set by hostname. When cephadm deploys
a mon on a host that matches a hostname specified in the CRUSH locations, it
will add ``--set-crush-location <CRUSH-location>``, where the CRUSH location
is the first entry in the list of CRUSH locations for that host. If multiple
CRUSH locations are set for one host, cephadm will attempt to set the
additional locations using the ``ceph mon set_location`` command.

.. note::

   Setting the CRUSH location in the spec is the recommended way of
   replacing tiebreaker mon daemons, as they require having a location
   set when they are added.

.. note::

   Tiebreaker mon daemons are part of stretch mode clusters. For more
   information on stretch mode clusters, see :ref:`stretch_mode`.

Example syntax for setting the CRUSH locations:

.. code-block:: yaml

    service_type: mon
    service_name: mon
    placement:
      count: 5
    spec:
      crush_locations:
        host1:
          - datacenter=a
        host2:
          - datacenter=b
          - rack=2
        host3:
          - datacenter=a

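With the spec above, cephadm would deploy the mon on ``host2`` with
``--set-crush-location datacenter=b`` and then attempt to set the remaining
location with a command along these lines (shown only for illustration;
cephadm runs it for you):

.. prompt:: bash #

   ceph mon set_location host2 datacenter=b rack=2
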
.. note::

   Sometimes, based on the timing of mon daemons being admitted to the mon
   quorum, cephadm may fail to set the CRUSH location for some mon daemons
   when multiple locations are specified. In this case, the recommended
   action is to re-apply the same mon spec to retrigger the service action.

.. note::

   Mon daemons get the ``--set-crush-location`` flag only when cephadm
   actually deploys them. This means that if a spec that includes a CRUSH
   location for an already-deployed mon is applied, the flag may not be set
   until a redeploy command is issued for that mon daemon.

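For example, to issue such a redeploy for a hypothetical ``mon.host1``:

.. prompt:: bash #

   ceph orch daemon redeploy mon.host1
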
Further Reading
===============

* :ref:`rados-operations`
* :ref:`rados-troubleshooting-mon`
* :ref:`cephadm-restore-quorum`