+[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.
+[[pvecm_create_cluster]]
Create the Cluster
------------------
hp1# pvecm status
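
A minimal sketch of creating a cluster and then checking its state could look
like this, with `mycluster` standing in as a placeholder for the unique name
you choose for the cluster:

----
hp1# pvecm create mycluster
hp1# pvecm status
----
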
+Multiple Clusters In Same Network
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+It is possible to create multiple clusters in the same physical or logical
+network. Each cluster must have a unique name, which is used to generate the
+cluster's multicast group address. As long as no duplicate cluster names are
+configured in one network segment, the different clusters won't interfere with
+each other.
+
+If multiple clusters operate in a single network, it may be beneficial to set up
+an IGMP querier and enable IGMP Snooping in said network. This may reduce the
+load of the network significantly because multicast packets are only delivered
+to endpoints of the respective member nodes.
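+
+On a plain Linux bridge (assumed here to be `vmbr0`), snooping and a querier
+can, for example, be toggled through sysfs; on a physical switch the
+corresponding options live in the switch configuration:
+
+----
+echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
+echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
+----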
+
+
+[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------
4 1 hp4
----
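
To add a node, you log in to the node that should join and point `pvecm add`
at an existing cluster member. A rough sketch, with `hp1` as an existing
member and `hp4` as the joining node:

----
hp4# pvecm add hp1
hp4# pvecm status
----
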
+[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.
-If you have setup a additional NIC with a static address on 10.10.10.1/25
+If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:
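
A sketch of such a command, with `mycluster` as a placeholder cluster name and
the option names following the parameters mentioned above:

----
pvecm create mycluster --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check whether corosync came up properly on that network, you can then run:
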
----
systemctl status corosync
----
+Afterwards, proceed as described in the section on how to
+<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.
+
[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you use hostnames, ensure that they are resolvable from all nodes.
In my example I want to switch my cluster communication to the 10.10.10.1/25
-network. So I replace all 'ring0_addr' respectively. I also set the bindetaddr
+network. So I replace each 'ring0_addr' with its new address. I also set the bindnetaddr
in the totem section of the config to an address of the new network. It can be
any address from the subnet configured on the new network interface.
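
Trimmed down to the relevant parts, the result in `/etc/pve/corosync.conf`
could then look roughly like this (node name, ID and addresses are examples):

----
nodelist {
  node {
    name: hp1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }
}

totem {
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
}
----

Restart corosync on this node first and check that it comes up on the new
network.
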
If corosync runs correctly again, restart it on all other nodes as well.
They will then join the cluster membership one by one on the new network.
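
On each node, restarting and checking corosync comes down to something like:

----
systemctl restart corosync
systemctl status corosync
----
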
+[[pvecm_rrp]]
Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure, you should implement countermeasures.
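
One such countermeasure, sketched here with example addresses, is to give
corosync a second, independent ring: each node gets an additional
'ring1_addr', and the totem section gets a second interface plus
'rrp_mode: passive':

----
totem {
  rrp_mode: passive
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}
----
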
Corosync Configuration
----------------------
-The `/ect/pve/corosync.conf` file plays a central role in {pve} cluster. It
+The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
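----
man corosync.conf
----
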
It is always a good idea to use an uninterruptible power supply (``UPS'', also
called ``battery backup'') to avoid a complete cluster outage after a power
failure, especially if you want HA.
-On node startup, service `pve-manager` is started and waits for
+On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
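
The `onboot` flag itself is a per-guest option; for a VM it could, for
instance, be set like this (VMID 100 is just an example):

----
qm set 100 --onboot 1
----
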
Migration Type
~~~~~~~~~~~~~~
-The migration type defines if the migration data should be sent over a
+The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
-virtual guest gets also transfered unencrypted, which can lead to
+virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).
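
Cluster-wide, the migration type (and optionally a dedicated migration
network) can be set in `/etc/pve/datacenter.cfg`; a sketch, where the
10.10.10.0/25 migration network is just an example:

----
migration: secure,network=10.10.10.0/25
----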