[[chapter_pmgcm]]
ifdef::manvolnum[]
pmgcm(1)
========
:pmg-toplevel:

NAME
----

pmgcm - Proxmox Mail Gateway Cluster Management Toolkit


SYNOPSIS
--------

include::pmgcm.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Cluster Management
==================
:pmg-toplevel:
endif::manvolnum[]

We are living in a world where email is becoming more and more
important, and failures in email systems are not acceptable. To meet these
requirements, we developed the Proxmox HA (High Availability) Cluster.

The {pmg} HA Cluster consists of a master node and several slave nodes
(minimum one slave node). Configuration is done on the master,
and data is synchronized to all cluster nodes via a VPN tunnel. This
provides the following advantages:

* centralized configuration management

* fully redundant data storage

* high availability

* high performance

We use a unique application-level clustering scheme, which provides
extremely good performance. Special considerations were taken to make
management as easy as possible. A complete cluster setup is done within
minutes, and nodes automatically reintegrate after temporary failures,
without any operator interaction.

image::images/Proxmox_HA_cluster_final_1024.png[]


Hardware Requirements
---------------------

There are no special hardware requirements, although it is highly
recommended to use fast and reliable server hardware, with redundant disks on
all cluster nodes (hardware RAID with BBU and write cache enabled).

The HA Cluster can also run in virtualized environments.


Subscriptions
-------------

Each node in a cluster has its own subscription. If you want support
for a cluster, each cluster node needs to have a valid
subscription. All nodes must have the same subscription level.


Load Balancing
--------------

It is usually advisable to distribute mail traffic among all cluster
nodes. Please note that this is not always required, because it is
also reasonable to use only one node to handle SMTP traffic. The
second node can then be used as a quarantine host that only provides the web
interface to the user quarantine.

The normal mail delivery process looks up DNS Mail Exchange (`MX`)
records to determine the destination host. An `MX` record tells the
sending system where to deliver mail for a certain domain. It is also
possible to have several `MX` records for a single domain, each of which can
have different priorities. For example, our `MX` record looks like this:

----
# dig -t mx proxmox.com

;; ANSWER SECTION:
proxmox.com.        22879   IN      MX      10 mail.proxmox.com.

;; ADDITIONAL SECTION:
mail.proxmox.com.   22879   IN      A       213.129.239.114
----

Notice that there is a single `MX` record for the domain
`proxmox.com`, pointing to `mail.proxmox.com`. The `dig` command
automatically outputs the corresponding address record, if it
exists. In our case it points to `213.129.239.114`. The priority of
our `MX` record is set to 10 (preferred default value).


Hot standby with backup `MX` records
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Many people do not want to install two redundant mail proxies. Instead,
they use the mail proxy of their ISP as a fallback. This can be done
by adding an additional `MX` record with a lower priority (higher
number). Continuing from the example above, this would look like:

----
proxmox.com.        22879   IN      MX      100 mail.provider.tld.
----
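
Taken together, the zone then publishes both records. A sending MTA
always tries the host with the lowest preference value first, and only
falls back to the higher one if that host is unreachable:

----
proxmox.com.        22879   IN      MX      10 mail.proxmox.com.
proxmox.com.        22879   IN      MX      100 mail.provider.tld.
----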

In such a setup, your provider must accept mails for your domain and
forward them to you. Please note that this is not advisable, because
spam detection needs to be done by the backup `MX` server as well, and
external servers provided by ISPs usually don't do this.

However, you will never lose mails with such a setup, because the sending Mail
Transport Agent (MTA) will simply deliver the mail to the backup
server (`mail.provider.tld`), if the primary server (`mail.proxmox.com`) is
not available.

NOTE: Any reasonable mail server retries mail delivery if the target
server is not available. {pmg} stores mail and retries delivery
for up to one week. Thus, you will not lose emails if your mail server is
down, even if you run a single server setup.


Load balancing with `MX` records
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Using your ISP's mail server is not always a good idea, because many
ISPs do not use advanced spam prevention techniques, or do not filter
spam at all. It is often better to run a second server yourself to
avoid lower spam detection rates.

It's quite simple to set up a high-performance, load-balanced
mail cluster using `MX` records. You just need to define two `MX`
records with the same priority. The rest of this section provides
a complete example.

First, you need to have at least two working {pmg} servers
(`mail1.example.com` and `mail2.example.com`), configured as a cluster (see
section xref:pmg_cluster_administration[Cluster Administration]
below), with each having its own IP address. Let us assume the
following DNS address records:

----
mail1.example.com.  22879   IN      A       1.2.3.4
mail2.example.com.  22879   IN      A       1.2.3.5
----

It is always a good idea to add reverse lookup entries (PTR
records) for those hosts, as many email systems nowadays reject mails
from hosts without valid PTR records. Then you need to define your `MX`
records:

----
example.com.        22879   IN      MX      10 mail1.example.com.
example.com.        22879   IN      MX      10 mail2.example.com.
----

This is all you need. Following this, you will receive mail on both
hosts, load-balanced using round-robin scheduling. If one host fails,
the other one is used.
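
You can check what is actually published with `dig`. Assuming the
records above are in place, the output would look something like this
(`dig -x` performs the reverse lookup for the PTR records):

----
# dig -t mx example.com +short
10 mail1.example.com.
10 mail2.example.com.

# dig -x 1.2.3.4 +short
mail1.example.com.
----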


Other ways
~~~~~~~~~~

Multiple address records
^^^^^^^^^^^^^^^^^^^^^^^^

Using several DNS `MX` records can be tedious if you have many
domains. It is also possible to use one `MX` record per domain, but
multiple address records:

----
example.com.        22879   IN      MX      10 mail.example.com.
mail.example.com.   22879   IN      A       1.2.3.4
mail.example.com.   22879   IN      A       1.2.3.5
----
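
Most DNS servers rotate the order of such address records between
queries, so this distributes load in a similar round-robin fashion. A
quick check would simply list both addresses:

----
# dig -t a mail.example.com +short
1.2.3.4
1.2.3.5
----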


Using firewall features
^^^^^^^^^^^^^^^^^^^^^^^

Many firewalls can do some kind of round-robin (RR) scheduling when
using DNAT (destination NAT). See your firewall manual for more details.
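
As an illustration only (nothing {pmg} sets up for you), a Linux-based
firewall could spread inbound SMTP connections across the two nodes
from the previous example with an nftables rule like the following
sketch; the table and chain names are assumptions and have to fit your
existing ruleset:

----
# round-robin DNAT for port 25 -- adapt to your own NAT setup
nft add table ip nat
nft add chain ip nat prerouting '{ type nat hook prerouting priority -100; }'
nft add rule ip nat prerouting tcp dport 25 \
    dnat to numgen inc mod 2 map { 0 : 1.2.3.4, 1 : 1.2.3.5 }
----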


[[pmg_cluster_administration]]
Cluster Administration
----------------------

Cluster administration can be done from the GUI or by using the command-line
utility `pmgcm`. The CLI tool is a bit more verbose, so we suggest
using it if you run into any problems.
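
For an overview of the available subcommands, use the built-in help:

----
pmgcm help
----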

NOTE: Always set up the IP configuration before adding a node to the
cluster. IP address, network mask, gateway address and hostname can't
be changed later.

Creating a Cluster
~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/pmg-gui-cluster-panel.png", big=1]

You can create a cluster from any existing {pmg} host. All data is
preserved.

* make sure you have the right IP configuration
(IP/MASK/GATEWAY/HOSTNAME), because you cannot change that later

* press the create button on the GUI, or run the cluster creation command:
+
----
pmgcm create
----

NOTE: The node where you run the cluster create command will be the
'master' node.


Show Cluster Status
~~~~~~~~~~~~~~~~~~~

The GUI shows the status of all cluster nodes. You can also view this
using the command-line tool:

----
pmgcm status
--NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
pmg5(1)                  192.168.2.127 master A           1 day 21:18   0.30    80%    41%
----


[[pmgcm_join]]
Adding Cluster Nodes
~~~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/pmg-gui-cluster-join.png", big=1]

When you add a new node to a cluster (using `join`), all data on that node is
destroyed. The whole database is initialized with the cluster data from
the master.

* make sure you have the right IP configuration

* run the cluster join command (on the new node):
+
----
pmgcm join <master_ip>
----

You need to enter the root password of the master host when asked for
a password. When joining a cluster using the GUI, you also need to
enter the 'fingerprint' of the master node. You can get this information
by pressing the `Add` button on the master node.
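
If you prefer the command line, you can read the same fingerprint
directly from the master's API certificate. This sketch assumes the
certificate is in its default location (`/etc/pmg/pmg-api.pem`):

----
# run on the master; the path is the assumed default API certificate
openssl x509 -in /etc/pmg/pmg-api.pem -noout -sha256 -fingerprint
----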

NOTE: Joining a cluster with two-factor authentication enabled for the `root`
user is not supported. Remove the second factor when joining the cluster.

CAUTION: Node initialization deletes all existing databases, stops all
services accessing the database and then restarts them. Therefore, do
not add nodes which are already active and receive mail.

Also note that joining a cluster can take several minutes, because the
new node needs to synchronize all data from the master (although this
is done in the background).

NOTE: If you join a new node, existing quarantined items from the
other nodes are not synchronized to the new node.


Deleting Nodes
~~~~~~~~~~~~~~

Please detach nodes from the cluster network before removing them
from the cluster configuration. Only then should you run the following
command on the master node:

----
pmgcm delete <cid>
----

Parameter `<cid>` is the unique cluster node ID, as listed with `pmgcm status`.
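
For example, to remove a node that `pmgcm status` lists with CID 2:

----
pmgcm delete 2
----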


Disaster Recovery
~~~~~~~~~~~~~~~~~

It is highly recommended to use redundant disks on all cluster nodes
(RAID). So in almost any circumstance, you just need to replace the
damaged hardware or disk. {pmg} uses an asynchronous
clustering algorithm, so you just need to reboot the repaired node,
and everything will work again transparently.

The following scenarios only apply when you really lose the contents
of the hard disk.


Single Node Failure
^^^^^^^^^^^^^^^^^^^

* delete failed node on master
+
----
pmgcm delete <cid>
----

* add (re-join) a new node
+
----
pmgcm join <master_ip>
----


Master Failure
^^^^^^^^^^^^^^

* force another node to be master
+
----
pmgcm promote
----

* tell other nodes that master has changed
+
----
pmgcm sync --master_ip <master_ip>
----


Total Cluster Failure
^^^^^^^^^^^^^^^^^^^^^

* restore backup (cluster and node information is not restored; you
have to recreate master and nodes)

* tell the restored node to become the master
+
----
pmgcm create
----

* install new nodes

* add those new nodes to the cluster
+
----
pmgcm join <master_ip>
----


ifdef::manvolnum[]
include::pmg-copyright.adoc[]
endif::manvolnum[]