[[chapter_pmgcm]]
ifdef::manvolnum[]
pmgcm(1)
========
:pmg-toplevel:

NAME
----

pmgcm - Proxmox Mail Gateway Cluster Management Toolkit


SYNOPSIS
--------

include::pmgcm.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Cluster Management
==================
:pmg-toplevel:
endif::manvolnum[]

We are living in a world where email is becoming more and more
important, and failures in email systems are not acceptable. To meet
these requirements, we developed the Proxmox HA (High Availability)
Cluster.

The {pmg} HA Cluster consists of a master node and several slave nodes
(minimum one slave node). Configuration is done on the master,
and data is synchronized to all cluster nodes via a VPN tunnel. This
provides the following advantages:

* centralized configuration management

* fully redundant data storage

* high availability

* high performance

We use a unique application-level clustering scheme, which provides
extremely good performance. Special considerations were taken to make
management as easy as possible. A complete cluster setup is done within
minutes, and nodes automatically reintegrate after temporary failures,
without any operator interaction.

image::images/Proxmox_HA_cluster_final_1024.png[]


Hardware Requirements
---------------------

There are no special hardware requirements, although it is highly
recommended to use fast and reliable server hardware, with redundant disks on
all cluster nodes (Hardware RAID with BBU and write cache enabled).

The HA Cluster can also run in virtualized environments.


Subscriptions
-------------

Each node in a cluster has its own subscription. If you want support
for a cluster, each cluster node needs to have a valid
subscription. All nodes must have the same subscription level.


Load Balancing
--------------

It is usually advisable to distribute mail traffic among all cluster
nodes. Please note that this is not always required, because it is
also reasonable to use only one node to handle SMTP traffic. The
second node can then be used as a quarantine host that only provides
the web interface to the user quarantine.

The normal mail delivery process looks up DNS Mail Exchange (`MX`)
records to determine the destination host. An `MX` record tells the
sending system where to deliver mail for a certain domain. It is also
possible to have several `MX` records for a single domain, each of which can
have different priorities. For example, our `MX` record looks like this:

----
# dig -t mx proxmox.com

;; ANSWER SECTION:
proxmox.com. 22879 IN MX 10 mail.proxmox.com.

;; ADDITIONAL SECTION:
mail.proxmox.com. 22879 IN A 213.129.239.114
----

Notice that there is a single `MX` record for the domain
`proxmox.com`, pointing to `mail.proxmox.com`. The `dig` command
automatically outputs the corresponding address record, if it
exists. In our case, it points to `213.129.239.114`. The priority of
our `MX` record is set to 10 (the preferred default value).


Hot standby with backup `MX` records
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Many people do not want to install two redundant mail proxies. Instead
they use the mail proxy of their ISP as a fallback. This can be done
by adding an additional `MX` record with a lower priority (higher
number). Continuing from the example above, this would look like:

----
proxmox.com. 22879 IN MX 100 mail.provider.tld.
----

In such a setup, your provider must accept mails for your domain and
forward them to you. Please note that this is not advisable, because
spam detection needs to be done by the backup `MX` server as well, and
external servers provided by ISPs usually don't do this.

However, you will never lose mails with such a setup, because the sending Mail
Transport Agent (MTA) will simply deliver the mail to the backup
server (mail.provider.tld), if the primary server (mail.proxmox.com) is
not available.

NOTE: Any reasonable mail server retries mail delivery if the target
server is not available. {pmg} stores mail and retries delivery
for up to one week. Thus, you will not lose emails if your mail server is
down, even if you run a single server setup.


Load balancing with `MX` records
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Using your ISP's mail server is not always a good idea, because many
ISPs do not use advanced spam prevention techniques, or do not filter
spam at all. It is often better to run a second server yourself to
avoid lower spam detection rates.

It’s quite simple to set up a high-performance, load-balanced
mail cluster using `MX` records. You just need to define two `MX`
records with the same priority. The rest of this section will provide
a complete example.

First, you need to have at least two working {pmg} servers
(mail1.example.com and mail2.example.com), configured as a cluster (see
section xref:pmg_cluster_administration[Cluster Administration]
below), with each having its own IP address. Let us assume the
following DNS address records:

----
mail1.example.com. 22879 IN A 1.2.3.4
mail2.example.com. 22879 IN A 1.2.3.5
----

It is always a good idea to add reverse lookup entries (PTR
records) for those hosts, as many email systems nowadays reject mails
from hosts without valid PTR records. Then you need to define your `MX`
records:

----
example.com. 22879 IN MX 10 mail1.example.com.
example.com. 22879 IN MX 10 mail2.example.com.
----

This is all you need. Following this, you will receive mail on both
hosts, load-balanced using round-robin scheduling. If one host fails,
the other one is used.
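
Once the records are published, you can verify the setup with `dig`
(using the example names and addresses from above). The `MX` query
should return both records, and the reverse lookups should point back
to the respective hostnames:

----
# dig -t mx example.com +short
# dig -x 1.2.3.4 +short
# dig -x 1.2.3.5 +short
----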


Other ways
~~~~~~~~~~

Multiple address records
^^^^^^^^^^^^^^^^^^^^^^^^

Using several DNS `MX` records can be tedious if you have many
domains. It is also possible to use one `MX` record per domain, but
multiple address records:

----
example.com. 22879 IN MX 10 mail.example.com.
mail.example.com. 22879 IN A 1.2.3.4
mail.example.com. 22879 IN A 1.2.3.5
----


Using firewall features
^^^^^^^^^^^^^^^^^^^^^^^

Many firewalls can do some kind of round-robin (RR) scheduling when
using DNAT. See your firewall manual for more details.
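
The details depend on your firewall. As a rough sketch, on a Linux
firewall using `nftables` (assuming the example addresses from above;
adapt addresses and ports to your setup), incoming SMTP connections
could be distributed round-robin like this:

----
# NAT table with a prerouting chain; DNAT port 25 alternately
# to the two cluster nodes (round-robin via numgen)
nft add table ip nat
nft add chain ip nat prerouting '{ type nat hook prerouting priority -100; }'
nft add rule ip nat prerouting tcp dport 25 \
    dnat to numgen inc mod 2 map '{ 0 : 1.2.3.4, 1 : 1.2.3.5 }'
----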


[[pmg_cluster_administration]]
Cluster Administration
----------------------

Cluster administration can be done from the GUI or by using the command-line
utility `pmgcm`. The CLI tool is a bit more verbose, so we suggest
using it if you run into any problems.
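
For a list of all available subcommands and their options, you can use
the built-in help:

----
pmgcm help
----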

NOTE: Always set up the IP configuration before adding a node to the
cluster. The IP address, network mask, gateway address and hostname can’t
be changed later.

Creating a Cluster
~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/pmg-gui-cluster-panel.png", big=1]

You can create a cluster from any existing {pmg} host. All data is
preserved.

* make sure you have the right IP configuration
  (IP/MASK/GATEWAY/HOSTNAME), because you cannot change that later

* press the create button in the GUI, or run the cluster creation command:
+
----
pmgcm create
----

NOTE: The node where you run the cluster create command will be the
'master' node.


Show Cluster Status
~~~~~~~~~~~~~~~~~~~

The GUI shows the status of all cluster nodes. You can also view this
using the command-line tool:

----
pmgcm status
--NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
pmg5(1)                  192.168.2.127 master A           1 day 21:18   0.30    80%    41%
----


[[pmgcm_join]]
Adding Cluster Nodes
~~~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/pmg-gui-cluster-join.png", big=1]

When you add a new node to a cluster (using `join`), all data on that node is
destroyed. The whole database is initialized with the cluster data from
the master.

* make sure you have the right IP configuration

* run the cluster join command (on the new node):
+
----
pmgcm join <master_ip>
----

You need to enter the root password of the master host when asked for
a password. When joining a cluster using the GUI, you also need to
enter the 'fingerprint' of the master node. You can get this information
by pressing the `Add` button on the master node.
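
When joining on the command line, you can also pass the master's
certificate fingerprint as an option, so that it is verified rather
than accepted blindly. The exact option name may depend on your {pmg}
version; check `pmgcm help join`:

----
pmgcm join <master_ip> --fingerprint <master_fingerprint>
----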

NOTE: Joining a cluster with two-factor authentication enabled for the `root`
user is not supported. Remove the second factor when joining the cluster.

CAUTION: Node initialization deletes all existing databases, stops all
services accessing the database and then restarts them. Therefore, do
not add nodes which are already active and receiving mail.

Also note that joining a cluster can take several minutes, because the
new node needs to synchronize all data from the master (although this
is done in the background).
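
You can follow the synchronization from the master node. Once the new
node is listed with state `A` (active) in the status output, the join
is complete:

----
pmgcm status
----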

NOTE: If you join a new node, existing quarantined items from the
other nodes are not synchronized to the new node.


Deleting Nodes
~~~~~~~~~~~~~~

Please detach nodes from the cluster network before removing them
from the cluster configuration. Only then should you run the following
command on the master node:

----
pmgcm delete <cid>
----

Parameter `<cid>` is the unique cluster node ID, as listed with `pmgcm status`.
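
For example, if `pmgcm status` lists the detached node with CID `2`,
you would remove it with:

----
pmgcm delete 2
----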


Disaster Recovery
~~~~~~~~~~~~~~~~~

It is highly recommended to use redundant disks on all cluster nodes
(RAID). So in almost any circumstance, you just need to replace the
damaged hardware or disk. {pmg} uses an asynchronous
clustering algorithm, so you just need to reboot the repaired node,
and everything will work again transparently.

The following scenarios only apply when you really lose the contents
of the hard disk.


Single Node Failure
^^^^^^^^^^^^^^^^^^^

* delete failed node on master
+
----
pmgcm delete <cid>
----

* add (re-join) a new node
+
----
pmgcm join <master_ip>
----


Master Failure
^^^^^^^^^^^^^^

* force another node to be master
+
----
pmgcm promote
----

* tell other nodes that master has changed
+
----
pmgcm sync --master_ip <master_ip>
----


Total Cluster Failure
^^^^^^^^^^^^^^^^^^^^^

* restore backup (cluster and node information is not restored; you
  have to recreate the master and nodes)

* tell the restored node to become master
+
----
pmgcm create
----

* install new nodes

* add those new nodes to the cluster
+
----
pmgcm join <master_ip>
----


ifdef::manvolnum[]
include::pmg-copyright.adoc[]
endif::manvolnum[]