:pve-toplevel:
endif::wiki[]
Network configuration can be done either via the GUI, or by manually
editing the file `/etc/network/interfaces`, which contains the
whole network configuration. The `interfaces(5)` manual page contains the
complete format description. All {pve} tools try hard to keep direct
user modifications, but using the GUI is still preferable, because it
protects you from errors.
Once the network is configured, you can use the traditional Debian `ifup` and
`ifdown` commands to bring interfaces up and down.
Apply Network Changes
~~~~~~~~~~~~~~~~~~~~~

{pve} does not write changes directly to `/etc/network/interfaces`. Instead, we
write into a temporary file called `/etc/network/interfaces.new`. This way you
can make many related changes at once, and verify that your changes are correct
before applying them, as a wrong network configuration may render a node
inaccessible.
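For example, you can review what would change before committing it. The
following sketch simulates the two files with temporary copies, since
`/etc/network/interfaces.new` only exists on a node with pending changes:

```shell
# On a real node with pending changes, you would simply run:
#   diff -u /etc/network/interfaces /etc/network/interfaces.new
# Here we simulate both files so the commands are self-contained.
cur=$(mktemp)
new=$(mktemp)
printf 'iface vmbr0 inet static\n\taddress 192.168.10.2/24\n' > "$cur"
printf 'iface vmbr0 inet static\n\taddress 192.168.10.3/24\n' > "$new"

# diff exits non-zero when the files differ, so guard it with || true
pending=$(diff -u "$cur" "$new" || true)
echo "$pending"
```

If the diff looks wrong, you can still discard `/etc/network/interfaces.new`
instead of applying it.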
+
Reboot Node to apply
^^^^^^^^^^^^^^^^^^^^

With the default installed `ifupdown` network management package, you need to
reboot to commit any pending network changes. Most of the time the basic {pve}
network setup is stable and does not change often, so rebooting should not be
required often.
+
Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

With the optional `ifupdown2` network management package, you can also reload
the network configuration live, without requiring a reboot.

Since {pve} 6.1 you can apply pending network changes over the web interface,
using the 'Apply Configuration' button in the 'Network' panel of a node.
+
To install 'ifupdown2', first ensure that you have the latest {pve} updates
installed.
+
WARNING: Installing 'ifupdown2' will remove 'ifupdown'. As the removal scripts
of 'ifupdown' before version '0.8.35+pve1' have an issue where the network is
fully stopped on removal footnote:[Introduced with Debian Buster:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=945877], you *must* ensure
that you have an up-to-date 'ifupdown' package version.
+
For the installation itself, you can then simply do:

 apt install ifupdown2

With that, you're all set. You can also switch back to the 'ifupdown' variant
at any time, if you run into issues.
Naming Conventions
~~~~~~~~~~~~~~~~~~
Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.
{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.
{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In that case the only way to get outgoing network accesses for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.
For further flexibility, you can configure VLANs (IEEE 802.1q) and network
bonding, also known as ``link aggregation''. That way it is possible to build
complex and flexible virtual networks.
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.
The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding configuration in
`/etc/network/interfaces` might look like this:
----
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----
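Note that the examples use CIDR notation (`address 192.168.10.2/24`) rather
than a separate `netmask` line; both forms are accepted. If you need to
translate an old-style netmask, `python3` (present on every {pve} node) can do
it with its standard `ipaddress` module; the addresses below are just examples:

```shell
# /24 is shorthand for the old "netmask 255.255.255.0" form
python3 -c 'import ipaddress; print(ipaddress.ip_network("192.168.10.0/24").netmask)'
# prints 255.255.255.0

# the reverse direction also works: netmask -> prefix length
python3 -c 'import ipaddress; print(ipaddress.ip_network("192.168.10.0/255.255.255.0").prefixlen)'
# prints 24
```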
Virtual machines behave as if they were directly connected to the physical
network.

Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security reasons,
they disable networking as soon as they detect multiple MAC addresses on a
single interface.
TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs.
You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.
[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/28`). We recommend the following setup for such
situations:
----
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
----

Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masquerading allows guests having only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

----
auto eno1
#real IP address
iface eno1 inet static
        address 198.51.100.5/24
        gateway 198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
NOTE: In some masquerade setups with the firewall enabled, conntrack zones
might be needed for outgoing connections. Otherwise the firewall could block
outgoing connections, since they will prefer the `POSTROUTING` of the VM bridge
(and not `MASQUERADE`).

Adding these lines to `/etc/network/interfaces` can fix this problem:

----
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
----

For more information about this, refer to the following links:

https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]

https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]

https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]

Linux Bond
~~~~~~~~~~
Bonding (also called NIC teaming or Link Aggregation) is a technique for
binding multiple NICs to a single network device. It is possible to achieve
different goals, like making the network fault-tolerant, increasing the
performance, or both together.
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode. +
// http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html
If you intend to run your cluster network on the bonding interfaces, then you
have to use active-passive mode on the bonding interfaces; other modes are
unsupported.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2/24
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
----
[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----
VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the other ones.

Each VLAN network is identified by a number often called 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.
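The limit of 4096 networks comes from the 12-bit VLAN ID field in the 802.1Q
header; IDs 0 and 4095 are reserved by the standard, which leaves tags 1-4094
usable. A quick arithmetic check:

```shell
# The 802.1Q VLAN ID is a 12-bit field
echo $(( 1 << 12 ))        # prints 4096
# IDs 0 and 4095 are reserved, leaving 1..4094 as usable tags
echo $(( (1 << 12) - 2 ))  # prints 4094
```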
VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^
{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:
* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.
* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces, eno1.5 and vmbr0v5, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and can not be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.
VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, Bond, Bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.
For example, consider a default configuration where you want to place
the host management address on a separate VLAN.
.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
----

The next example is the same setup but a bond is used to
make this network fail-safe.
.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual
auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

----

Disabling IPv6 on the Node
~~~~~~~~~~~~~~~~~~~~~~~~~~

{pve} works correctly in all environments, irrespective of whether IPv6 is
deployed or not. We recommend leaving all settings at the provided defaults.

Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate `sysctl.conf(5)` snippet file and setting the proper
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:

----
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
----

This method is preferred to disabling the loading of the IPv6 module on the
https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel commandline].

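If you want to double-check the snippet before rolling it out, you can stage it
in a temporary file first. This is only a sketch: the `mv` and `sysctl --system`
steps are shown as comments because they require root on a real node:

```shell
# Stage the snippet in a temporary file and sanity-check its contents
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF

# Every line should be a "key = value" pair that sysctl can parse
grep -Ec '^[a-z0-9._]+ = [01]$' "$tmp"   # prints 2

# On a real node, as root, you would then activate it without rebooting:
#   mv "$tmp" /etc/sysctl.d/disable-ipv6.conf
#   sysctl --system
```

Using `sysctl --system` re-reads all snippet directories, so the settings take
effect immediately and persist across reboots.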
////
TODO: explain IPv6 support?
TODO: explain OVS