:pve-toplevel:
endif::wiki[]
Network configuration can be done either via the GUI, or by manually
editing the file `/etc/network/interfaces`, which contains the
whole network configuration. The `interfaces(5)` manual page contains the
complete format description. All {pve} tools try hard to keep direct
user modifications, but using the GUI is still preferable, because it
protects you from errors.
Once the network is configured, you can use the traditional Debian tools `ifup`
and `ifdown` to bring interfaces up and down.
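
For example, assuming an interface named `eno1` (adapt the name to your setup,
and note that taking down the interface you are connected through will cut the
connection):

----
# take eno1 down, then bring it back up with the settings from
# /etc/network/interfaces
ifdown eno1
ifup eno1
----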

Apply Network Changes
~~~~~~~~~~~~~~~~~~~~~

{pve} does not write changes directly to `/etc/network/interfaces`. Instead, we
write into a temporary file called `/etc/network/interfaces.new`. This way you
can make many related changes at once and verify that they are correct before
applying them, as a wrong network configuration may render a node
inaccessible.
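
For example, to review or discard pending changes on the command line (a small
sketch; `interfaces.new` is a plain text file, so standard tools work):

----
# show the difference between the live and the pending configuration
diff -u /etc/network/interfaces /etc/network/interfaces.new

# discard all pending changes
rm /etc/network/interfaces.new
----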

Reboot Node to apply
^^^^^^^^^^^^^^^^^^^^

With the `ifupdown` network managing package installed by default, you need to
reboot to commit any pending network changes. Most of the time, the basic {pve}
network setup is stable and does not change often, so rebooting should not be
required frequently.

Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

With the optional `ifupdown2` network managing package, you can also reload the
network configuration live, without requiring a reboot.

NOTE: 'ifupdown2' cannot understand 'OpenVSwitch' syntax, so reloading is *not*
possible if OVS interfaces are configured.

Since {pve} 6.1 you can apply pending network changes over the web-interface,
using the 'Apply Configuration' button in the 'Network' panel of a node.

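On the command line, the reload can also be triggered with the `ifreload`
command shipped by 'ifupdown2':

 ifreload -a
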
To install 'ifupdown2', first make sure you have the latest {pve} updates
installed.

WARNING: installing 'ifupdown2' will remove 'ifupdown', but as the removal
scripts of 'ifupdown' before version '0.8.35+pve1' have an issue where the
network is fully stopped on removal footnote:[Introduced with Debian Buster:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=945877] you *must* ensure
that you have an up-to-date 'ifupdown' package version.

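A quick way to check the currently installed 'ifupdown' version (a sketch using
standard Debian tooling):

 dpkg-query -W -f '${Version}\n' ifupdown
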
For the installation itself you can then simply do:

 apt install ifupdown2

With that you're all set. You can also switch back to the 'ifupdown' variant at
any time, if you run into issues.
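
Switching back works the same way, as installing 'ifupdown' again replaces the
conflicting package:

 apt install ifupdown
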
Naming Conventions
~~~~~~~~~~~~~~~~~~
Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Depending on your current network organization and your resources you can
choose either a bridged, routed, or masquerading networking setup.
{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing the
role of the switch.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In that case the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.
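
As a sketch, incoming access could be forwarded with an iptables DNAT rule, for
example as additional `post-up`/`post-down` hooks on the bridge in
`/etc/network/interfaces`; the interface name, ports, and guest address below
are assumptions for illustration:

----
# forward TCP port 2222 on the host's public interface eno1 to SSH
# (port 22) on the guest at 10.10.10.10
post-up   iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
----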
For further flexibility, you can configure
[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.
The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
address 192.168.10.2
netmask 255.255.255.0
gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----
Virtual machines behave as if they were directly connected to the
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.
TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs.
You can avoid the problem by ``routing'' all traffic via a single
iface vmbr0 inet static
address 203.0.113.17
netmask 255.255.255.248
        bridge-ports none
        bridge-stp off
        bridge-fd 0
----
iface vmbr0 inet static
address 10.10.10.1
netmask 255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
NOTE: In some masquerade setups with firewall enabled, conntrack zones might be
needed for outgoing connections. Otherwise the firewall could block outgoing
connections, since they will prefer the `POSTROUTING` of the VM bridge (and not
`MASQUERADE`).

Adding these lines to `/etc/network/interfaces` can fix this problem:

----
post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
----

For more information about this, refer to the following links:

https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]

https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]

https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]

Linux Bond
~~~~~~~~~~
traffic.
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using
the corresponding bonding mode (802.3ad). Otherwise you should generally use the
active-backup mode. +
// http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html
If you intend to run your cluster network on the bonding interfaces, then you
iface eno2 inet manual
iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
address 192.168.1.2
netmask 255.255.255.0
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
address 10.10.10.2
netmask 255.255.255.0
gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
----
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
address 10.10.10.2
netmask 255.255.255.0
gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----
{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:
* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.
* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.
* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.
* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and can not be influenced from the
outside. The benefit is that you can use more than one VLAN on a
For example, in a default configuration where you want to place
the host management address on a separate VLAN.
.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
address 10.10.10.2
netmask 255.255.255.0
gateway 10.10.10.1
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
----
The next example is the same setup but a bond is used to
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
iface bond0.5 inet manual
address 10.10.10.2
netmask 255.255.255.0
gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----