:pve-toplevel:
endif::wiki[]
Network configuration can be done either via the GUI, or by manually
editing the file `/etc/network/interfaces`, which contains the
whole network configuration. The `interfaces(5)` manual page contains the
complete format description. All {pve} tools try hard to preserve direct
user modifications, but using the GUI is still preferable, because it
protects you from errors.
Once the network is configured, you can use the traditional Debian tools `ifup`
and `ifdown` to bring interfaces up and down.

NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.
Naming Conventions
~~~~~~~~~~~~~~~~~~
We currently use the following naming conventions for device names:
* Ethernet devices: en*, systemd network interface names. This naming scheme is
used for new {pve} installations since version 5.0.

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...) This naming
scheme is used for {pve} hosts which were installed before the 5.0
release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For more information see https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[Predictable Network Interface Names].
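As a rough illustration of the scheme, here is a small Python sketch that recognizes the common PCI-based `en*` names. The regex is a simplification covering only onboard `eno<index>` and `enp<bus>s<slot>[f<function>]` forms, not the full naming specification:

```python
import re

# Simplified pattern for systemd "en*" Ethernet names: "eno<index>"
# for onboard devices, "enp<bus>s<slot>[f<function>]" for PCI devices.
# Other schemes (USB, MAC-based names, ...) are deliberately omitted.
PCI_NAME = re.compile(
    r"^en(?:o(?P<index>\d+)"
    r"|p(?P<bus>\d+)s(?P<slot>\d+)(?:f(?P<fn>\d+))?)$"
)

def parse(name):
    """Return the name's components, or None for non-matching names."""
    m = PCI_NAME.match(name)
    return m.groupdict() if m else None

print(parse("eno1"))       # onboard device, index 1
print(parse("enp3s0f1"))   # PCI bus 3, slot 0, function 1
print(parse("eth0"))       # legacy name, not matched by this pattern
```
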
Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.

{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.

{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.

{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.
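Port forwarding is typically done with an `iptables` DNAT rule on the host. As a minimal sketch (the port numbers and the guest address `10.10.10.23` are made up for illustration, and the rule must be run as root):

```shell
# Hypothetical example: forward TCP port 2222 on the host's public
# interface to SSH (port 22) on a guest with a private address.
iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 \
        -j DNAT --to-destination 10.10.10.23:22
```
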

For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
Routed Configuration
~~~~~~~~~~~~~~~~~~~~
Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.
You can avoid the problem by 'routing' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/29`). We recommend the following setup for such
situations:
----
auto eno1
iface eno1 inet static
        address  198.51.100.5
        netmask  255.255.255.0
        gateway  198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address  203.0.113.17
        netmask  255.255.255.248
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----

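To make the address arithmetic explicit, here is a short Python sketch (using only the standard `ipaddress` module) of how a `203.0.113.16/29` block splits into a bridge address and guest addresses. The assignment of the first usable address to `vmbr0` mirrors the example above; the guest assignment is an assumption for illustration:

```python
import ipaddress

# The additional IP block from the routed example above.
subnet = ipaddress.ip_network("203.0.113.16/29")

hosts = list(subnet.hosts())   # usable addresses: .17 through .22
bridge_ip = hosts[0]           # 203.0.113.17, assigned to vmbr0
guest_ips = hosts[1:]          # five addresses left for the guests

print(subnet.netmask)          # 255.255.255.248, as in the config
print(bridge_ip, len(guest_ips))
```
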
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Masquerading allows guests having only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

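The rewriting can be pictured with a toy Python sketch. This illustrates the idea only, not how `iptables` connection tracking is actually implemented; the addresses and ports are made up for illustration:

```python
# Toy model of masquerading: outgoing packets get their private source
# address replaced by the host address, and a translation table maps
# replies back to the original private sender.

HOST_IP = "198.51.100.5"   # hypothetical public address of the host

nat_table = {}             # host_port -> (guest_ip, guest_port)
next_port = 50000          # arbitrary starting port for rewritten flows

def masquerade(src_ip, src_port):
    """Rewrite an outgoing packet's source to the host address."""
    global next_port
    host_port = next_port
    next_port += 1
    nat_table[host_port] = (src_ip, src_port)
    return HOST_IP, host_port

def unmasquerade(dst_port):
    """Route a reply back to the original private sender."""
    return nat_table[dst_port]

src = masquerade("10.10.10.23", 40001)
print(src)                   # the packet now appears to come from the host
print(unmasquerade(src[1]))  # the reply is mapped back to the guest
```
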
----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address  198.51.100.5
        netmask  255.255.255.0
        gateway  198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address  10.10.10.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
network-peers use different MAC addresses for their network packet
traffic.

If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode. +
// http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html
If you intend to run your cluster network on the bonding interfaces, then you
have to use active-passive mode on the bonding interfaces; other modes are
unsupported.

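The `bond_xmit_hash_policy layer2+3` setting used in the examples below decides which slave interface carries each outgoing packet. A toy Python sketch of the idea (heavily simplified from the Linux bonding driver's documented formula; field widths and mixing are abbreviated):

```python
# Toy "layer2+3" transmit hash: mix MAC and IP header fields, then
# take the result modulo the number of slaves. A given flow always
# hashes to the same slave, so its packets are never reordered.
def select_slave(src_mac, dst_mac, src_ip, dst_ip, n_slaves):
    """Pick an outgoing slave index from packet header fields."""
    h = src_mac ^ dst_mac              # layer 2 contribution
    h ^= src_ip ^ dst_ip               # layer 3 contribution
    h ^= h >> 16                       # fold the hash down
    h ^= h >> 8
    return h % n_slaves

# Two flows between the same MACs but different IP pairs can be
# balanced onto different slaves of a 2-port bond.
flow_a = select_slave(0x0A, 0x0B, 0xC0A80102, 0xC0A80103, 2)
flow_b = select_slave(0x0A, 0x0B, 0xC0A80102, 0xC0A80104, 2)
print(flow_a, flow_b)
```
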
The following bond configuration can be used as distributed/shared
storage network. The benefit would be that you get more speed and the
cluster network is also fault-tolerant.

----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet static
      slaves eno1 eno2
      address 192.168.1.2
      netmask 255.255.255.0
      bond_miimon 100
      bond_mode 802.3ad
      bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
address 10.10.10.2
netmask 255.255.255.0
        gateway  10.10.10.1
bridge_ports eno1
bridge_stp off
bridge_fd 0
----

[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
      slaves eno1 eno2
      bond_miimon 100
      bond_mode 802.3ad
      bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
address 10.10.10.2
netmask 255.255.255.0
        gateway  10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the other ones.

Each VLAN network is identified by a number often called 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.
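The tag travels inside each Ethernet frame. A rough Python sketch of the 802.1Q tag: the TPID value `0x8100` and the 12-bit VLAN ID in the Tag Control Information (TCI) field come from the standard, while the helper names are made up for illustration:

```python
import struct

# An 802.1Q tag is 4 bytes inserted after the source MAC: a 16-bit
# TPID (0x8100) followed by a 16-bit TCI whose lower 12 bits carry
# the VLAN ID and whose upper 3 bits carry the priority.
def make_tci(vlan_id, priority=0):
    """Build the 16-bit Tag Control Information field."""
    assert 0 < vlan_id < 4095          # IDs 0 and 4095 are reserved
    return (priority << 13) | vlan_id

def vlan_id_from_tci(tci):
    """Recover the VLAN ID (lower 12 bits) from a TCI field."""
    return tci & 0x0FFF

tag = struct.pack("!HH", 0x8100, make_tci(5))   # tag for VLAN 5
tpid, tci = struct.unpack("!HH", tag)
print(hex(tpid), vlan_id_from_tci(tci))          # 0x8100 5
```
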


VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:

* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with an associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces `eno1.5` and `vmbr0v5`, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and cannot be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.


VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, Bond, Bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.

For example, in a default configuration where you want to place
the host management address on a separate VLAN:

.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1
        bridge_ports eno1.5
        bridge_stp off
        bridge_fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----

.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes
----

The next example is the same setup but a bond is used to
make this network fail-safe.

.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
      slaves eno1 eno2
      bond_miimon 100
      bond_mode 802.3ad
      bond_xmit_hash_policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1
        bridge_ports bond0.5
        bridge_stp off
        bridge_fd 0

auto vmbr0
iface vmbr0 inet manual
bridge_ports bond0
bridge_stp off
bridge_fd 0
----