minutes, and nodes automatically reintegrate after temporary failures
without any operator interaction.
-image::images/pmg-ha-cluster.png[]
+image::images/Proxmox_HA_cluster_final_1024.png[]
Hardware requirements
Load balancing
--------------
-You can use one of the mechanism described in chapter 9 if you want to
-distribute mail traffic among the cluster nodes. Please note that this
-is not always required, because it is also reasonable to use only one
-node to handle SMTP traffic. The second node is used as quarantine
-host (provide the web interface to user quarantine).
+It is usually advisable to distribute mail traffic among all cluster
+nodes. Please note that this is not always required; it is also
+reasonable to use only one node to handle SMTP traffic, while the
+second node serves as quarantine host and only provides the web
+interface to the user quarantine.
+The normal mail delivery process looks up DNS Mail Exchange (`MX`)
+records to determine the destination host. An `MX` record tells the
+sending system where to deliver mail for a certain domain. It is also
+possible to have several `MX` records for a single domain, each with a
+different priority. For example, our `MX` record looks like this:
+----
+# dig -t mx proxmox.com
+
+;; ANSWER SECTION:
+proxmox.com. 22879 IN MX 10 mail.proxmox.com.
+
+;; ADDITIONAL SECTION:
+mail.proxmox.com. 22879 IN A 213.129.239.114
+----
+
+Please note that there is a single `MX` record for the domain
+`proxmox.com`, pointing to `mail.proxmox.com`. The `dig` command
+automatically prints the corresponding address record if it
+exists. In our case it points to `213.129.239.114`. The priority of
+our `MX` record is set to 10 (the preferred default value).
+
+
+Hot standby with backup `MX` records
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Many people do not want to install two redundant mail proxies;
+instead, they use the mail proxy of their ISP as a fallback. This is
+simply done by adding an additional `MX` record with a lower priority
+(higher number). With the example above, this looks like:
+
+----
+proxmox.com. 22879 IN MX 100 mail.provider.tld.
+----
+
+In such a setup, your provider must accept mails for your domain and
+forward them to you. Please note that this is not advisable, because
+spam detection needs to be done by the backup `MX` server as well, and
+external servers provided by ISPs usually don't do that.
+
+However, you will never lose mails with such a setup, because the sending Mail
+Transport Agent (MTA) will simply deliver the mail to the backup
+server (mail.provider.tld) if the primary server (mail.proxmox.com) is
+not available.
+
+NOTE: Any reasonable mail server retries mail delivery if the target
+server is not available. For example, {pmg} stores mail and retries
+delivery for up to one week. So you will not lose mail if your mail
+server is down, even if you run a single server setup.
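The fallback behaviour described above can be sketched in Python. This is a simplified illustration of the sender-side logic, not {pmg} code; `try_delivery` and the `deliver` callback are hypothetical names:

```python
# Simplified sketch of how a sending MTA picks an MX host. Real MTAs
# follow RFC 5321: records with lower preference values are tried first,
# and the next record is only used when delivery fails.

def try_delivery(mx_records, deliver):
    """Try hosts in ascending priority order; return the host that accepted."""
    for priority, host in sorted(mx_records):
        if deliver(host):          # hypothetical callback: True on success
            return host
    return None                    # all hosts failed; the MTA queues and retries

# Example: the backup MX (priority 100) is only used when the primary fails.
records = [(10, "mail.proxmox.com."), (100, "mail.provider.tld.")]
primary_down = lambda host: host != "mail.proxmox.com."
print(try_delivery(records, primary_down))  # falls back to mail.provider.tld.
```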
+
+
+Load balancing with `MX` records
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Using your ISP's mail server is not always a good idea, because many
+ISPs do not use advanced spam prevention techniques, or do not filter
+spam at all. It is often better to run a second server yourself to
+avoid lower spam detection rates.
+
+Anyway, it is quite simple to set up a high-performance, load-balanced
+mail cluster using `MX` records. You just need to define two `MX` records
+with the same priority. Here is a complete example to make it clearer.
+
+First, you need to have at least two working {pmg} servers
+(`mail1.example.com` and `mail2.example.com`), configured as a cluster
+(see section xref:pmg_cluster_administration[Cluster administration]
+below), each having its own IP address. Let us assume the following
+addresses (DNS address records):
+
+----
+mail1.example.com. 22879 IN A 1.2.3.4
+mail2.example.com. 22879 IN A 1.2.3.5
+----
+
+It is always a good idea to add reverse lookup entries (PTR
+records) for those hosts. Many email systems nowadays reject mails
+from hosts without valid PTR records. Then you need to define your `MX`
+records:
+
+----
+example.com. 22879 IN MX 10 mail1.example.com.
+example.com. 22879 IN MX 10 mail2.example.com.
+----
+
+This is all you need. You will receive mails on both hosts, more or
+less load-balanced using round-robin scheduling. If one host fails,
+the other one is used.
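The selection among equal-priority records can be sketched as follows. This is a minimal illustration of what RFC 5321 suggests (randomize the order of records sharing a preference value), not the implementation of any particular MTA; `mx_order` is a hypothetical helper:

```python
import random

# Sketch of sender-side behaviour with equal-priority MX records:
# shuffle the records, then sort by preference. The stable sort keeps
# the shuffled order within each preference group, so load is spread
# across hosts that share a priority.

def mx_order(mx_records):
    """Return the delivery order for a list of (priority, host) tuples."""
    shuffled = random.sample(mx_records, len(mx_records))
    return [host for _, host in sorted(shuffled, key=lambda r: r[0])]

records = [(10, "mail1.example.com."), (10, "mail2.example.com.")]
# Over many deliveries, each host comes first about half of the time.
print({mx_order(records)[0] for _ in range(100)})
```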
+
+
+Other ways
+~~~~~~~~~~
+
+Multiple address records
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Using several DNS `MX` records is sometimes clumsy if you have many
+domains. It is also possible to use one `MX` record per domain, but
+multiple address records:
+
+----
+example.com. 22879 IN MX 10 mail.example.com.
+mail.example.com. 22879 IN A 1.2.3.4
+mail.example.com. 22879 IN A 1.2.3.5
+----
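With this setup, load balancing comes from the resolver rather than from `MX` preferences: many resolvers rotate the order of returned address records, so successive senders connect to different addresses. A minimal sketch of that rotation, using the example addresses above (`next_target` is a hypothetical helper, not a real resolver API):

```python
from itertools import cycle

# DNS round-robin in miniature: hand out the address records for
# mail.example.com in rotating order, as a rotating resolver would.

addresses = ["1.2.3.4", "1.2.3.5"]
rotation = cycle(addresses)

def next_target():
    """Return the next address in round-robin order."""
    return next(rotation)

print([next_target() for _ in range(4)])  # → ['1.2.3.4', '1.2.3.5', '1.2.3.4', '1.2.3.5']
```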
+
+
+Using firewall features
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Many firewalls can do some kind of RR-Scheduling (round-robin) when
+using DNAT. See your firewall manual for more details.
+
+
+[[pmg_cluster_administration]]
Cluster administration
----------------------
-Cluster administration is done with a single command line utility
-called `pmgcm'. So you need to login via ssh to manage the cluster
-setup.
+Cluster administration can be done via the GUI or with the command
+line utility `pmgcm`. The CLI tool is a bit more verbose, so we suggest
+using it if you run into problems.
NOTE: Always setup the IP configuration before adding a node to the
cluster. IP address, network mask, gateway address and hostname can’t
be changed later.
-
Creating a Cluster
~~~~~~~~~~~~~~~~~~
-You can create a cluster from any existing Proxmox host. All data is
+[thumbnail="pmg-gui-cluster-panel.png", big=1]
+
+You can create a cluster from any existing {pmg} host. All data is
preserved.
* make sure you have the right IP configuration
- (IP/MASK/GATEWAY/HOSTNAME), because you cannot changed that later
+ (IP/MASK/GATEWAY/HOSTNAME), because you cannot change that later
-* run the cluster creation command:
+* press the create button on the GUI, or run the cluster creation command:
+
----
pmgcm create
----
+NOTE: The node where you run the cluster create command will be the
+'master' node.
+
-List Cluster Status
+Show Cluster Status
~~~~~~~~~~~~~~~~~~~
+The GUI shows the status of all cluster nodes, and it is also possible
+to use the command line tool:
+
----
pmgcm status
--NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
----
+[[pmgcm_join]]
Adding Cluster Nodes
~~~~~~~~~~~~~~~~~~~~
-When you add a new node to a cluster (join) all data on that node is
+[thumbnail="pmg-gui-cluster-join.png", big=1]
+
+When you add a new node to a cluster (using `join`) all data on that node is
destroyed. The whole database is initialized with cluster data from
the master.
----
You need to enter the root password of the master host when asked for
-a password.
+a password. When joining a cluster using the GUI, you also need to
+enter the 'fingerprint' of the master node. You can get this
+information by pressing the `Add` button on the master node.
CAUTION: Node initialization deletes all existing databases, stops and
then restarts all services accessing the database. So do not add nodes
clustering algorithm, so you just need to reboot the repaired node,
and everything will work again transparently.
-The following scenarios only apply when you really loose the contents
+The following scenarios only apply when you really lose the contents
of the hard disk.