requirements we developed the Proxmox HA (High Availability) Cluster.
The {pmg} HA Cluster consists of a master and several slave nodes
(minimum one slave node). Configuration is done on the master. Configuration
and data are synchronized to all cluster nodes over a VPN tunnel. This
provides the following advantages:
* high performance
We use a unique application level clustering scheme, which provides
extremely good performance. Special considerations were taken to make
management as easy as possible. A complete cluster setup is done within
minutes, and nodes automatically reintegrate after temporary failures
without any operator interaction.
image::images/Proxmox_HA_cluster_final_1024.png[]
Hardware requirements
---------------------
Subscriptions
-------------
Each node in a cluster has its own subscription. If you want support
for a cluster, each cluster node needs to have a valid
subscription. All nodes must have the same subscription level.
interface to the user quarantine.
The normal mail delivery process looks up DNS Mail Exchange (`MX`)
records to determine the destination host. An `MX` record tells the
sending system where to deliver mail for a certain domain. It is also
possible to have several `MX` records for a single domain, which can have
different priorities. For example, our `MX` record looks like this:
----
proxmox.com. 22879 IN MX 10 mail.proxmox.com.
mail.proxmox.com. 22879 IN A 213.129.239.114
----
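You can retrieve these records with the `dig` utility; the command below is an example query (the TTLs and addresses actually returned will of course change over time):

----
# dig -t mx proxmox.com
----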
Notice that there is a single `MX` record for the domain
`proxmox.com`, pointing to `mail.proxmox.com`. The `dig` command
automatically puts out the corresponding address record if it
exists. In our case it points to `213.129.239.114`. The priority of
our `MX` record is set to 10, the preferred default value.
Backup `MX` records
~~~~~~~~~~~~~~~~~~~
Many people do not want to install two redundant mail proxies; instead,
they use the mail proxy of their ISP as a fallback. This is simply done
by adding an additional `MX` record with a lower priority (a higher
number). With the example above, that looks like this:
----
proxmox.com. 22879 IN MX 100 mail.provider.tld.
----
In such a setup, your provider must accept mails for your domain and
forward them to you. Please note that this is not advisable, because
spam detection needs to be done by the backup `MX` server as well, and
external servers provided by ISPs usually don't.
However, you will never lose mails with such a setup, because the sending Mail
Transport Agent (MTA) will simply deliver the mail to the backup
server (mail.provider.tld) if the primary server (mail.proxmox.com) is
not available.
NOTE: Any reasonable mail server retries mail delivery if the target
server is not available, and {pmg} stores mail and retries delivery
for up to one week. So you will not lose mails if your mail server is
down, even if you run a single server setup.
Load balancing with `MX` records
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using your ISP's mail server is not always a good idea, because many
ISPs do not use advanced spam prevention techniques, or do not filter
spam at all. It is often better to run a second server yourself to
avoid lower spam detection rates.
It’s quite simple to set up a high-performance, load-balanced mail
cluster using `MX` records. You just need to define two `MX` records
with the same priority. Here is a complete example to make it clearer.
First, you need to have at least two working {pmg} servers
(mail1.example.com and mail2.example.com) configured as a cluster (see
section xref:pmg_cluster_administration[Cluster administration]
below), each having its own IP address. Let us assume the following
DNS address records:
----
mail1.example.com. 22879 IN A 1.2.3.4
mail2.example.com. 22879 IN A 1.2.3.5
----
It is always a good idea to add reverse lookup entries (PTR
records) for those hosts. Many email systems nowadays reject mails
from hosts without valid PTR records. Then you need to define your `MX`
records:
----
example.com. 22879 IN MX 10 mail1.example.com.
example.com. 22879 IN MX 10 mail2.example.com.
----
This is all you need. You will receive mails on both hosts, load-balanced
using round-robin scheduling. If one host fails, the other one is used.
Other ways
~~~~~~~~~~
Multiple address records
^^^^^^^^^^^^^^^^^^^^^^^^
Using several DNS `MX` records is sometimes tedious if you have many
domains. It is also possible to use one `MX` record per domain, but
multiple address records:
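For example, reusing the addresses from above (hypothetical zone data), a single `MX` record can point to one host name that resolves to both cluster nodes:

----
example.com.      22879 IN MX 10 mail.example.com.
mail.example.com. 22879 IN A  1.2.3.4
mail.example.com. 22879 IN A  1.2.3.5
----

A sending MTA then picks one of the returned addresses, so the load is spread over both nodes without having to touch the `MX` record of every hosted domain.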
Cluster administration
----------------------
Cluster administration can be done in the GUI or by using the command
line utility `pmgcm`. The CLI tool is a bit more verbose, so we suggest
using it if you run into any problems.
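For example, once a cluster has been set up, `pmgcm status` gives an overview of the cluster nodes and is a good first check when something seems wrong:

----
# pmgcm status
----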
NOTE: Always set up the IP configuration before adding a node to the
cluster. The IP address, network mask, gateway address and hostname can’t
Creating a Cluster
~~~~~~~~~~~~~~~~~~
[thumbnail="pmg-gui-cluster-panel.png", big=1]
You can create a cluster from any existing {pmg} host. All data is
preserved.
* make sure you have the right IP configuration
----
pmgcm create
----
[[pmgcm_join]]
Adding Cluster Nodes
~~~~~~~~~~~~~~~~~~~~
[thumbnail="pmg-gui-cluster-join.png", big=1]
When you add a new node to a cluster (using `join`), all data on that node is
destroyed. The whole database is initialized with the cluster data from
the master.
* make sure you have the right IP configuration
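The join itself is done with the `pmgcm join` command, passing the IP address of the master node (shown here as a placeholder):

----
# pmgcm join <master-ip-address>
----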
You need to enter the root password of the master host when asked for
a password. When joining a cluster using the GUI, you also need to
enter the 'fingerprint' of the master node. You can get that information
by pressing the `Add` button on the master node.
CAUTION: Node initialization deletes all existing databases, stops and
then restarts all services accessing the database. So do not add nodes
clustering algorithm, so you just need to reboot the repaired node,
and everything will work again transparently.
The following scenarios only apply when you really lose the contents
of the hard disk.