===========
MON Service
===========

.. _deploy_additional_monitors:

Deploying additional monitors
=============================

A typical Ceph cluster has three or five monitor daemons that are spread
across different hosts. We recommend deploying five monitors if there are
five or more nodes in your cluster.
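
To see how many monitor daemons are currently running and whether they are in
quorum, commands such as the following can be used:

  .. prompt:: bash #

     ceph mon stat
     ceph orch ls mon
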
.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation

Ceph deploys monitor daemons automatically as the cluster grows, and scales
back monitor daemons automatically as the cluster shrinks. The smooth
execution of this automatic growing and shrinking depends upon proper subnet
configuration.

The cephadm bootstrap procedure assigns the first monitor daemon in the
cluster to a particular subnet. ``cephadm`` designates that subnet as the
default subnet of the cluster. New monitor daemons will be assigned by
default to that subnet unless cephadm is instructed to do otherwise.

If all of the Ceph monitor daemons in your cluster are in the same subnet,
manual administration of the Ceph monitor daemons is not necessary.
``cephadm`` will automatically add up to five monitors to the subnet, as
needed, as new hosts are added to the cluster.

By default, cephadm deploys five monitor daemons on arbitrary hosts. See
:ref:`orchestrator-cli-placement-spec` for details of specifying
the placement of daemons.
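
For example, to have cephadm maintain three monitors instead of the default
five, or to pin the monitors to specific hosts, commands of the following
form can be used; the host names here are placeholders:

  .. prompt:: bash #

     ceph orch apply mon 3
     ceph orch apply mon --placement="host1,host2,host3"
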
Designating a Particular Subnet for Monitors
--------------------------------------------

To designate a particular IP subnet for use by Ceph monitor daemons, use a
command of the following form, including the subnet's address in `CIDR`_
format (e.g., ``10.1.2.0/24``):

  .. prompt:: bash #

     ceph config set mon public_network *<mon-cidr-network>*

For example:

  .. prompt:: bash #

     ceph config set mon public_network 10.1.2.0/24

Cephadm deploys new monitor daemons only on hosts that have IP addresses in
the designated subnet.
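
The currently configured value can be confirmed with:

  .. prompt:: bash #

     ceph config get mon public_network
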
You can also specify two public networks by using a list of networks:

  .. prompt:: bash #

     ceph config set mon public_network *<mon-cidr-network1>,<mon-cidr-network2>*

For example:

  .. prompt:: bash #

     ceph config set mon public_network 10.1.2.0/24,192.168.0.0/24

Deploying Monitors on a Particular Network
------------------------------------------

You can explicitly specify the IP address or CIDR network for each monitor and
control where each monitor is placed. To disable automated monitor deployment,
run this command:

  .. prompt:: bash #

     ceph orch apply mon --unmanaged

To deploy each additional monitor:

  .. prompt:: bash #

     ceph orch daemon add mon *<host1:ip-or-network1>*

For example, to deploy a second monitor on ``newhost1`` using an IP
address ``10.1.2.123`` and a third monitor on ``newhost2`` in
network ``10.1.2.0/24``, run the following commands:

  .. prompt:: bash #

     ceph orch apply mon --unmanaged
     ceph orch daemon add mon newhost1:10.1.2.123
     ceph orch daemon add mon newhost2:10.1.2.0/24

Now, enable automatic placement of daemons:

  .. prompt:: bash #

     ceph orch apply mon --placement="newhost1,newhost2,newhost3" --dry-run

See :ref:`orchestrator-cli-placement-spec` for details of specifying
the placement of daemons.

Finally, apply this new placement by dropping ``--dry-run``:

  .. prompt:: bash #

     ceph orch apply mon --placement="newhost1,newhost2,newhost3"
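
Once the new monitors have been deployed, quorum membership can be verified
with, for example:

  .. prompt:: bash #

     ceph quorum_status
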
Moving Monitors to a Different Network
--------------------------------------

To move monitors to a new network, deploy new monitors on the new network and
subsequently remove monitors from the old network. It is not advised to
modify and inject the ``monmap`` manually.

First, disable the automated placement of daemons:

  .. prompt:: bash #

     ceph orch apply mon --unmanaged

To deploy each additional monitor:

  .. prompt:: bash #

     ceph orch daemon add mon *<newhost1:ip-or-network1>*

For example, to deploy a second monitor on ``newhost1`` using an IP
address ``10.1.2.123`` and a third monitor on ``newhost2`` in
network ``10.1.2.0/24``, run the following commands:

  .. prompt:: bash #

     ceph orch apply mon --unmanaged
     ceph orch daemon add mon newhost1:10.1.2.123
     ceph orch daemon add mon newhost2:10.1.2.0/24

Subsequently, remove monitors from the old network:

  .. prompt:: bash #

     ceph orch daemon rm *mon.<oldhost1>*

Update the ``public_network``:

  .. prompt:: bash #

     ceph config set mon public_network *<mon-cidr-network>*

For example:

  .. prompt:: bash #

     ceph config set mon public_network 10.1.2.0/24

Now, enable automatic placement of daemons:

  .. prompt:: bash #

     ceph orch apply mon --placement="newhost1,newhost2,newhost3" --dry-run

See :ref:`orchestrator-cli-placement-spec` for details of specifying
the placement of daemons.

Finally, apply this new placement by dropping ``--dry-run``:

  .. prompt:: bash #

     ceph orch apply mon --placement="newhost1,newhost2,newhost3"
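
After the monitors on the old network have been removed, the monmap can be
inspected to confirm that only addresses on the new network remain:

  .. prompt:: bash #

     ceph mon dump
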
Setting CRUSH Locations for Monitors
------------------------------------

Cephadm supports setting CRUSH locations for mon daemons
using the mon service spec. The CRUSH locations are set
by hostname. When cephadm deploys a mon on a host that matches
a hostname specified in the CRUSH locations, it will add
``--set-crush-location <CRUSH-location>``, where the CRUSH location
is the first entry in the list of CRUSH locations for that
host. If multiple CRUSH locations are set for one host, cephadm
will attempt to set the additional locations using the
``ceph mon set_location`` command.
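
For reference, the ``ceph mon set_location`` command takes the mon's name
followed by a ``bucket=location`` pair. A minimal sketch, assuming the mon
deployed on ``host2`` from the example spec below is named ``host2`` (the
usual naming for cephadm-deployed mons):

  .. prompt:: bash #

     ceph mon set_location host2 rack=2
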
.. note::

   Setting the CRUSH location in the spec is the recommended way of
   replacing tiebreaker mon daemons, as they require having a location
   set when they are added.

   .. note::

      Tiebreaker mon daemons are a part of stretch mode clusters. For more
      information on stretch mode clusters, see :ref:`stretch_mode`.

Example syntax for setting the CRUSH locations:

.. code-block:: yaml

  service_type: mon
  service_name: mon
  placement:
    count: 5
  spec:
    crush_locations:
      host1:
        - datacenter=a
      host2:
        - datacenter=b
        - rack=2
      host3:
        - datacenter=a
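
Assuming the spec above has been saved to a file such as ``mon.yaml`` (the
file name is arbitrary), it can be applied with:

  .. prompt:: bash #

     ceph orch apply -i mon.yaml
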
.. note::

   Sometimes, based on the timing of mon daemons being admitted to the mon
   quorum, cephadm may fail to set the CRUSH location for some mon daemons
   when multiple locations are specified. In this case, the recommended
   action is to re-apply the same mon spec to retrigger the service action.

.. note::

   Mon daemons will only get the ``--set-crush-location`` flag set when cephadm
   actually deploys them. This means that if an applied spec includes a CRUSH
   location for a mon that is already deployed, the flag may not be set until
   a redeploy command is issued for that mon daemon.
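
Such a redeploy can be triggered for a specific mon daemon with a command of
the following form, where ``mon.host1`` is a placeholder daemon name:

  .. prompt:: bash #

     ceph orch daemon redeploy mon.host1
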
Further Reading
===============

* :ref:`rados-operations`
* :ref:`rados-troubleshooting-mon`
* :ref:`cephadm-restore-quorum`