With the optional `ifupdown2` network management package, you can also reload
the network configuration live, without requiring a reboot.
-NOTE: 'ifupdown2' cannot understand 'OpenVSwitch' syntax, so reloading is *not*
-possible if OVS interfaces are configured.
-
Since {pve} 6.1 you can apply pending network changes over the web-interface,
using the 'Apply Configuration' button in the 'Network' panel of a node.
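The same reload can also be triggered from the command line; a minimal sketch,
assuming the `ifupdown2` package is installed (it provides the `ifreload`
command):

----
# Apply all pending changes from /etc/network/interfaces live
ifreload -a
----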
auto vmbr0
iface vmbr0 inet static
- address 192.168.10.2
- netmask 255.255.255.0
+ address 192.168.10.2/24
gateway 192.168.10.1
- bridge_ports eno1
- bridge_stp off
- bridge_fd 0
+ bridge-ports eno1
+ bridge-stp off
+ bridge-fd 0
----
Virtual machines behave as if they were directly connected to the
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.
-TIP: Some providers allows you to register additional MACs on their
-management interface. This avoids the problem, but is clumsy to
+TIP: Some providers allow you to register additional MACs through their
+management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs.
You can avoid the problem by ``routing'' all traffic via a single
[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
-(`203.0.113.16/29`). We recommend the following setup for such
+(`203.0.113.16/28`). We recommend the following setup for such
situations:
----
auto lo
iface lo inet loopback
-auto eno1
-iface eno1 inet static
- address 198.51.100.5
- netmask 255.255.255.0
+auto eno0
+iface eno0 inet static
+ address 198.51.100.5/29
gateway 198.51.100.1
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
- post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp
+ post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp
auto vmbr0
iface vmbr0 inet static
- address 203.0.113.17
- netmask 255.255.255.248
- bridge_ports none
- bridge_stp off
- bridge_fd 0
+ address 203.0.113.17/28
+ bridge-ports none
+ bridge-stp off
+ bridge-fd 0
----
auto eno1
#real IP address
iface eno1 inet static
- address 198.51.100.5
- netmask 255.255.255.0
+ address 198.51.100.5/24
gateway 198.51.100.1
auto vmbr0
#private sub network
iface vmbr0 inet static
- address 10.10.10.1
- netmask 255.255.255.0
- bridge_ports none
- bridge_stp off
- bridge_fd 0
+ address 10.10.10.1/24
+ bridge-ports none
+ bridge-stp off
+ bridge-fd 0
- post-up echo 1 > /proc/sys/net/ipv4/ip_forward
+ post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
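Once this configuration is up, the masquerading rule can be verified; a quick
sanity check with standard iptables tooling:

----
# List the active POSTROUTING NAT rules; the MASQUERADE rule
# for 10.10.10.0/24 should appear in the output
iptables -t nat -S POSTROUTING
----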
+NOTE: In some masquerade setups with firewall enabled, conntrack zones might be
+needed for outgoing connections. Otherwise the firewall could block outgoing
+connections since they will prefer the `POSTROUTING` of the VM bridge (and not
+`MASQUERADE`).
+
+Adding these lines to `/etc/network/interfaces` can fix this problem:
+
+----
+post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
+post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
+----
+
+For more information about this, refer to the following links:
+
+https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]
+
+https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]
+
+https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation using TRACE in the raw table]
+
+
Linux Bond
~~~~~~~~~~
iface eno2 inet manual
+iface eno3 inet manual
+
auto bond0
iface bond0 inet static
- slaves eno1 eno2
- address 192.168.1.2
- netmask 255.255.255.0
- bond_miimon 100
- bond_mode 802.3ad
- bond_xmit_hash_policy layer2+3
+ bond-slaves eno1 eno2
+ address 192.168.1.2/24
+ bond-miimon 100
+ bond-mode 802.3ad
+ bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
- address 10.10.10.2
- netmask 255.255.255.0
+ address 10.10.10.2/24
gateway 10.10.10.1
- bridge_ports eno1
- bridge_stp off
- bridge_fd 0
+ bridge-ports eno3
+ bridge-stp off
+ bridge-fd 0
----
auto bond0
iface bond0 inet manual
- slaves eno1 eno2
- bond_miimon 100
- bond_mode 802.3ad
- bond_xmit_hash_policy layer2+3
+ bond-slaves eno1 eno2
+ bond-miimon 100
+ bond-mode 802.3ad
+ bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
- address 10.10.10.2
- netmask 255.255.255.0
+ address 10.10.10.2/24
gateway 10.10.10.1
- bridge_ports bond0
- bridge_stp off
- bridge_fd 0
+ bridge-ports bond0
+ bridge-stp off
+ bridge-fd 0
----
auto vmbr0v5
iface vmbr0v5 inet static
- address 10.10.10.2
- netmask 255.255.255.0
+ address 10.10.10.2/24
gateway 10.10.10.1
- bridge_ports eno1.5
- bridge_stp off
- bridge_fd 0
+ bridge-ports eno1.5
+ bridge-stp off
+ bridge-fd 0
auto vmbr0
iface vmbr0 inet manual
- bridge_ports eno1
- bridge_stp off
- bridge_fd 0
+ bridge-ports eno1
+ bridge-stp off
+ bridge-fd 0
----
auto vmbr0.5
iface vmbr0.5 inet static
- address 10.10.10.2
- netmask 255.255.255.0
+ address 10.10.10.2/24
gateway 10.10.10.1
auto vmbr0
iface vmbr0 inet manual
- bridge_ports eno1
- bridge_stp off
- bridge_fd 0
- bridge_vlan_aware yes
+ bridge-ports eno1
+ bridge-stp off
+ bridge-fd 0
+ bridge-vlan-aware yes
+ bridge-vids 2-4094
----
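On a VLAN-aware bridge, the VLAN table can be inspected at runtime; a quick
check using the iproute2 `bridge` utility:

----
# Show which VLAN IDs are configured on the ports of vmbr0
bridge vlan show dev vmbr0
----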
The next example is the same setup but a bond is used to
auto bond0
iface bond0 inet manual
- slaves eno1 eno2
- bond_miimon 100
- bond_mode 802.3ad
- bond_xmit_hash_policy layer2+3
+ bond-slaves eno1 eno2
+ bond-miimon 100
+ bond-mode 802.3ad
+ bond-xmit-hash-policy layer2+3
iface bond0.5 inet manual
auto vmbr0v5
iface vmbr0v5 inet static
- address 10.10.10.2
- netmask 255.255.255.0
+ address 10.10.10.2/24
gateway 10.10.10.1
- bridge_ports bond0.5
- bridge_stp off
- bridge_fd 0
+ bridge-ports bond0.5
+ bridge-stp off
+ bridge-fd 0
auto vmbr0
iface vmbr0 inet manual
- bridge_ports bond0
- bridge_stp off
- bridge_fd 0
+ bridge-ports bond0
+ bridge-stp off
+ bridge-fd 0
----
+Disabling IPv6 on the Node
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+{pve} works correctly in all environments, irrespective of whether IPv6 is
+deployed. We recommend leaving all settings at the provided defaults.
+
+Should you still need to disable support for IPv6 on your node, do so by
+creating an appropriate `sysctl.conf(5)` snippet file and setting the proper
+https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
+for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:
+
+----
+net.ipv6.conf.all.disable_ipv6 = 1
+net.ipv6.conf.default.disable_ipv6 = 1
+----
+
+This method is preferred to disabling the loading of the IPv6 module on the
+https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel commandline].
+
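+The snippet is applied automatically on the next boot; a sketch for activating
+it immediately, using standard `sysctl(8)` usage:
+
+----
+# Load all sysctl snippet files, including the new one
+sysctl --system
+# Verify the setting: this should now print 1
+cat /proc/sys/net/ipv6/conf/all/disable_ipv6
+----
+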
////
TODO: explain IPv6 support?
TODO: explain OVS