machines. By using corosync, these files are replicated in real time
on all cluster nodes. The file system stores all data inside a
persistent database on disk; nonetheless, a copy of the data resides
in RAM, which provides a maximum storage size of 30MB - more than
enough for thousands of VMs.
Proxmox VE is the only virtualization platform using this unique
cluster file system.

Proxmox VE uses a bridged networking model. All VMs can share one
bridge as if virtual network cables from each guest were all plugged
into the same switch. For connecting VMs to the outside world, bridges
are attached to physical network cards and assigned a TCP/IP
configuration.
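As a sketch, such a setup in `/etc/network/interfaces` might look like
the following (the interface name `eno1` and the addresses are
placeholders; substitute the values of your actual network):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

Here the physical card `eno1` carries no address of its own; the host's
TCP/IP configuration lives on the bridge `vmbr0` instead.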
For further flexibility, VLANs (IEEE 802.1q) and network
bonding/aggregation are possible.
core infrastructure independent from a single vendor.
Your benefits with {pve}
-----------------------
* Open source software
:pve-toplevel:
endif::wiki[]
Network configuration can be done either via the GUI, or by manually
editing the file `/etc/network/interfaces`, which contains the
whole network configuration. The `interfaces(5)` manual page contains the
complete format description. All {pve} tools try hard to preserve direct
user modifications, but using the GUI is still preferable, because it
protects you from errors.
Once the network is configured, you can use the traditional Debian
`ifup` and `ifdown` commands to bring interfaces up and down.
NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when you
reboot the node.

Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Depending on your current network organization and your resources you can
choose either a bridged, routed, or masquerading networking setup.
{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your Guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the Guest network card
directly connected to a new switch on your LAN, the {pve} host playing the role
of the switch.
{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In that case the only way to get outgoing network accesses for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.
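A masquerading setup along those lines could be sketched in
`/etc/network/interfaces` as follows (the public address on `eno1` and
the private guest subnet `10.10.10.0/24` are examples only):

```
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
```

The `post-up` hooks enable IP forwarding and rewrite the guests' private
source addresses to the host's public one as traffic leaves `eno1`.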
For further flexibility, you can configure VLANs (IEEE 802.1q) and
network bonding, also known as "link aggregation".
[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All VMs can share a single bridge, or you can create multiple bridges to
separate network domains. Each host can have up to 4094 bridges.
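For instance, an additional bridge with no physical ports attached forms
an isolated network segment that only guests on this host can reach (a
minimal sketch; the name `vmbr1` is an example):

```
auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```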
The installation program creates a single bridge named `vmbr0`, which
traffic.
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using
the corresponding bonding mode (802.3ad). Otherwise you should generally use the
active-backup mode. +
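A bond in 802.3ad mode used as a bridge port might be sketched like this
(interface names, hash policy, and addresses are assumptions for
illustration; your switch must be configured for LACP on these ports):

```
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```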
// http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html
If you intend to run your cluster network on the bonding interfaces, then you
have to use active-passive mode on the bonding interfaces; other modes are
unsupported.
{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:
* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.
* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.
* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.
* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and can not be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.
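The VLAN awareness mode can be sketched with a bridge configuration
like the following (a minimal example; `eno1` and the VLAN ID range are
placeholders):

```
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

With such a bridge, the VLAN tag is then simply set per virtual network
card, for example in the VM's network device settings.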
hardware, but even then, many modern systems can support this.
Please refer to your hardware vendor to check if they support this feature
under Linux for your specific setup.
Configuration
Mediated Devices (vGPU, GVT-g)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Mediated devices are another method to reuse features and performance from
physical hardware for virtualized hardware. These are found most commonly in
virtualized GPU setups, such as Intel's GVT-g and NVIDIA's vGPUs used in their
GRID technology.
Host Configuration
^^^^^^^^^^^^^^^^^^
In general your card's driver must support that feature, otherwise it will
not work. So please refer to your vendor for compatible drivers and how to
configure them.
Intel's drivers for GVT-g are integrated in the Kernel and should work
with 5th, 6th and 7th generation Intel Core Processors, as well as E3 v4, E3
v5 and E3 v6 Xeon Processors.
To enable it for Intel Graphics, you have to make sure to load the module
'kvmgt' (for example via `/etc/modules`) and to enable it on the Kernel