[[sysadmin_network_configuration]]

{pve} uses the Linux network stack. This provides a lot of flexibility in
how to set up the network on the {pve} nodes. The configuration can be done
either via the GUI, or by manually editing the file `/etc/network/interfaces`,
which contains the whole network configuration. The `interfaces(5)` manual
page contains the complete format description. All {pve} tools try hard to
keep direct user modifications intact, but using the GUI is still preferable,
because it protects you from errors.

A 'vmbr' interface is needed to connect guests to the underlying physical
network. It is a Linux bridge, which can be thought of as a virtual switch
to which the guests and physical interfaces are connected. This section
provides some examples of how the network can be set up to accommodate
different use cases, like redundancy with a xref:sysadmin_network_bond['bond'],
xref:sysadmin_network_vlan['VLANs'], or
xref:sysadmin_network_routed['routed'] and
xref:sysadmin_network_masquerading['NAT'] setups.

The xref:chapter_pvesdn[Software Defined Network] is an option for more complex
virtual networks in {pve} clusters.

WARNING: It's discouraged to use the traditional Debian tools `ifup` and
`ifdown` if unsure, as they have some pitfalls, like interrupting all guest
traffic on `ifdown vmbrX` but not reconnecting those guests again when doing
`ifup` on the same bridge later.

Apply Network Changes
~~~~~~~~~~~~~~~~~~~~~

{pve} does not write changes directly to `/etc/network/interfaces`. Instead, it
writes them into a temporary file called `/etc/network/interfaces.new`. This
way you can do many related changes at once. It also allows you to ensure your
changes are correct before applying them, as a wrong network configuration may
render a node inaccessible.
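
Before applying, the pending changes can be reviewed from the shell. A minimal
sketch, using `/tmp` copies so it is safe to run anywhere; on a real node you
would diff `/etc/network/interfaces` against `/etc/network/interfaces.new`
(the latter only exists while changes are pending):

```shell
# Simulate a staged change with /tmp demo files (hypothetical paths);
# on a real node: diff -u /etc/network/interfaces /etc/network/interfaces.new
cur=/tmp/demo-interfaces
staged=/tmp/demo-interfaces.new
printf 'auto lo\niface lo inet loopback\n' > "$cur"
printf 'auto lo\niface lo inet loopback\nauto vmbr1\niface vmbr1 inet manual\n' > "$staged"
# diff exits non-zero when the files differ, so mask that for scripting
diff -u "$cur" "$staged" || true
```
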

Live-Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

With the recommended 'ifupdown2' package (default for new installations since
{pve} 7.0), it is possible to apply network configuration changes without a
reboot. If you change the network configuration via the GUI, you can click the
'Apply Configuration' button. This will move changes from the staging
`interfaces.new` file to `/etc/network/interfaces` and apply them live.

If you made manual changes directly to the `/etc/network/interfaces` file, you
can apply them by running `ifreload -a`.

NOTE: If you installed {pve} on top of Debian, or upgraded to {pve} 7.0 from an
older {pve} installation, make sure 'ifupdown2' is installed: `apt install
ifupdown2`

Reboot Node to Apply
^^^^^^^^^^^^^^^^^^^^

Another way to apply a new network configuration is to reboot the node.
In that case, the systemd service `pvenetcommit` will activate the staging
`interfaces.new` file before the `networking` service applies that
configuration.

Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: `en*`, systemd network interface names. This naming scheme is
  used for new {pve} installations since version 5.0.

* Ethernet devices: `eth[N]`, where 0 ≤ N (`eth0`, `eth1`, ...). This naming
  scheme is used for {pve} hosts which were installed before the 5.0
  release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: `vmbr[N]`, where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: `bond[N]`, where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
  separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

[[systemd_network_interface_names]]
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd defines a versioned naming scheme for network device names. The
scheme uses the two-character prefix `en` for Ethernet network devices. The
next characters depend on the device driver, device location and other
attributes. Some possible patterns are:

* `o<index>[n<phys_port_name>|d<dev_port>]` — devices on board

* `s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` — devices by hotplug id

* `[P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` —
  devices by bus id

* `x<MAC>` — devices by MAC address

Some examples for the most common patterns are:

* `eno1` — is the first on-board NIC

* `enp3s0f1` — is function 1 of the NIC on PCI bus 3, slot 0

For a full list of possible device name patterns, see the
https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
systemd.net-naming-scheme(7) manpage].
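
As a rough illustration of how such a name encodes the device location, the
fields of a PCI-style name like `enp3s0f1` can be pulled apart with plain shell
parameter expansion. This is a simplified sketch only; it ignores the optional
`P<domain>` prefix and `n`/`d` suffixes described above:

```shell
# Decode a PCI-style interface name (simplified; real names may carry
# more fields, see systemd.net-naming-scheme(7)).
name=enp3s0f1
rest=${name#en}                   # strip the 'en' Ethernet prefix -> p3s0f1
bus=${rest#p};   bus=${bus%%s*}   # PCI bus      -> 3
slot=${rest#*s}; slot=${slot%%f*} # PCI slot     -> 0
fn=${rest##*f}                    # PCI function -> 1
echo "bus=$bus slot=$slot function=$fn"   # -> bus=3 slot=0 function=1
```
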

A new version of systemd may define a new version of the network device naming
scheme, which it then uses by default. Consequently, updating to a newer
systemd version, for example during a major {pve} upgrade, can change the names
of network devices and require adjusting the network configuration. To avoid
name changes due to a new version of the naming scheme, you can manually pin a
particular naming scheme version (see
xref:network_pin_naming_scheme_version[below]).

However, even with a pinned naming scheme version, network device names can
still change due to kernel or driver updates. In order to avoid name changes
for a particular network device altogether, you can manually override its name
using a link file (see xref:network_override_device_names[below]).

For more information on network interface names, see
https://systemd.io/PREDICTABLE_INTERFACE_NAMES/[Predictable Network Interface
Names].

[[network_pin_naming_scheme_version]]
Pinning a specific naming scheme version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can pin a specific version of the naming scheme for network devices by
adding the `net.naming-scheme=<version>` parameter to the
xref:sysboot_edit_kernel_cmdline[kernel command line]. For a list of naming
scheme versions, see the
https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
systemd.net-naming-scheme(7) manpage].

For example, to pin the version `v252`, which is the latest naming scheme
version for a fresh {pve} 8.0 installation, add the following kernel
command-line parameter:

----
net.naming-scheme=v252
----

See also xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel
command line. You need to reboot for the changes to take effect.
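
On systems booting via GRUB, this typically means appending the parameter in
`/etc/default/grub` and regenerating the boot configuration. The following is a
hedged sketch operating on a `/tmp` copy; the real file is `/etc/default/grub`,
and systems using other bootloaders (e.g. `systemd-boot`) keep the command line
elsewhere:

```shell
# Demo on a /tmp copy; edit /etc/default/grub on a real GRUB-booted node.
cfg=/tmp/demo-default-grub
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$cfg"
# append the pin to the existing default command line
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"$/GRUB_CMDLINE_LINUX_DEFAULT="\1 net.naming-scheme=v252"/' "$cfg"
grep GRUB_CMDLINE_LINUX_DEFAULT "$cfg"
# on a real node: update-grub (or proxmox-boot-tool refresh), then reboot
```
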

[[network_override_device_names]]
Overriding network device names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can manually assign a name to a particular network device using a custom
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link
file]. This overrides the name that would be assigned according to the latest
network device naming scheme. This way, you can avoid naming changes due to
kernel updates, driver updates or newer versions of the naming scheme.

Custom link files should be placed in `/etc/systemd/network/` and named
`<n>-<id>.link`, where `n` is a priority smaller than `99` and `id` is some
identifier. A link file has two sections: `[Match]` determines which interfaces
the file will apply to; `[Link]` determines how these interfaces should be
configured, including their naming.

To assign a name to a particular network device, you need a way to uniquely and
permanently identify that device in the `[Match]` section. One possibility is
to match the device's MAC address using the `MACAddress` option, as it is
unlikely to change. Then, you can assign a name using the `Name` option in the
`[Link]` section.

For example, to assign the name `enwan0` to the device with MAC address
`aa:bb:cc:dd:ee:ff`, create a file `/etc/systemd/network/10-enwan0.link` with
the following contents:

----
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=enwan0
----

Do not forget to adjust `/etc/network/interfaces` to use the new name.
You need to reboot the node for the change to take effect.
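
Such a link file can also be staged from the shell, for example with a heredoc.
The sketch below writes to a `/tmp` demo directory; on a real node the target
is `/etc/systemd/network/`, and the MAC address shown is a placeholder for your
NIC's actual address:

```shell
# Demo directory; use /etc/systemd/network/ on a real node.
dir=/tmp/demo-systemd-network
mkdir -p "$dir"
cat > "$dir/10-enwan0.link" <<'EOF'
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=enwan0
EOF
# afterwards: update /etc/network/interfaces to use 'enwan0' and reboot
```
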

NOTE: It is recommended to assign a name starting with `en` or `eth` so that
{pve} recognizes the interface as a physical network device which can then be
configured via the GUI. Also, you should ensure that the name will not clash
with other interface names in the future. One possibility is to assign a name
that does not match any name pattern that systemd uses for network interfaces
(xref:systemd_network_interface_names[see above]), such as `enwan0` in the
example above.

For more information on link files, see the
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link(5)
manpage].

Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.

{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.

{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.

{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case, the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.
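
As an illustration of such a port forward, the lines below stage
`post-up`/`post-down` hooks that DNAT public TCP port 2222 to SSH on a guest at
`10.10.10.10`. All interface names, addresses and ports are example values; the
lines are written to a `/tmp` demo file here, but on a real node they belong
under the `vmbr0` stanza in `/etc/network/interfaces`:

```shell
# Example values only: eno1 = uplink NIC, 10.10.10.10 = guest IP.
# Written to /tmp for demonstration; merge into /etc/network/interfaces.
cat > /tmp/demo-vmbr0-portforward <<'EOF'
        post-up   iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
        post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
EOF
cat /tmp/demo-vmbr0-portforward
```
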

For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.

[[sysadmin_network_routed]]
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/28`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
----

[[sysadmin_network_masquerading]]
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masquerading allows guests having only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 198.51.100.5/24
        gateway 198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----

NOTE: In some masquerade setups with firewall enabled, conntrack zones might be
needed for outgoing connections. Otherwise, the firewall could block outgoing
connections, since they will prefer the `POSTROUTING` of the VM bridge (and not
`MASQUERADE`).

Adding these lines in `/etc/network/interfaces` can fix this problem:

----
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
----

For more information about this, refer to the following links:

https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]

https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]

https://web.archive.org/web/20220610151210/https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]

[[sysadmin_network_bond]]
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It is possible
to achieve different goals, like making the network fault-tolerant,
increasing the performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will failover to one
cable or the other in case of network trouble.

Aggregated links can improve live-migration delays and improve the
speed of replication of data between {pve} Cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
  order from the first available network interface (NIC) slave through
  the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
  active. A different slave becomes active if, and only if, the active
  slave fails. The single logical bonded interface's MAC address is
  externally visible on only one NIC (port) to avoid distortion in the
  network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
  address XOR'd with destination MAC address) modulo NIC slave
  count]. This selects the same NIC slave for each destination MAC
  address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
  network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
  aggregation groups that share the same speed and duplex
  settings. Utilizes all slave network interfaces in the active
  aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
  driver mode that does not require any special network-switch
  support. The outgoing network packet traffic is distributed according
  to the current load (computed relative to the speed) on each network
  interface slave. Incoming traffic is received by one currently
  designated slave network interface. If this receiving slave fails,
  another slave takes over the MAC address of the failed receiving
  slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
  load balancing (rlb) for IPV4 traffic, and does not require any
  special network switch support. The receive load balancing is achieved
  by ARP negotiation. The bonding driver intercepts the ARP Replies sent
  by the local system on their way out and overwrites the source
  hardware address with the unique hardware address of one of the NIC
  slaves in the single logical bonded interface, such that different
  network-peers use different MAC addresses for their network packet
  traffic.

If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode.
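
The balance-xor selection rule described above can be sketched numerically.
This simplified example hashes only the last byte of each MAC address; the
actual slave choice depends on the configured `bond-xmit-hash-policy`:

```shell
# Simplified balance-xor slave choice for a 2-NIC bond: XOR the last
# MAC bytes and take the result modulo the slave count.
src=0x56   # last byte of the source MAC, e.g. ...:56 (example value)
dst=0x9f   # last byte of the destination MAC, e.g. ...:9f (example value)
nics=2
echo "slave index: $(( (src ^ dst) % nics ))"   # -> slave index: 1
```

Because the hash is deterministic, all traffic to a given destination MAC keeps
using the same slave, which is why this mode balances load per-peer rather than
per-packet.
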

For the cluster network (Corosync) we recommend configuring it with multiple
networks. Corosync does not need a bond for network redundancy as it can switch
between networks by itself, if one becomes unusable.

The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
      bond-slaves eno1 eno2
      address 192.168.1.2/24
      bond-miimon 100
      bond-mode 802.3ad
      bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
----

[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
      bond-slaves eno1 eno2
      bond-miimon 100
      bond-mode 802.3ad
      bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----

[[sysadmin_network_vlan]]
VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the other ones.

Each VLAN network is identified by a number often called 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.

VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:

* *VLAN awareness on the Linux bridge:*
  In this case, each guest's virtual network card is assigned to a VLAN tag,
  which is transparently supported by the Linux bridge.
  Trunk mode is also possible, but that makes configuration
  in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
  In contrast to the VLAN awareness method, this method is not transparent
  and creates a VLAN device with associated bridge for each VLAN.
  That is, creating a guest on VLAN 5, for example, would create two
  interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
  This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
  VLANs are assigned inside the guest. In this case, the setup is
  completely done inside the guest and can not be influenced from the
  outside. The benefit is that you can use more than one VLAN on a
  single virtual NIC.

VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, Bond, Bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.

For example, in a default configuration, you may want to place
the host management address on a separate VLAN.

.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
----

The next example is the same setup, but a bond is used to
make this network fail-safe.

.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
      bond-slaves eno1 eno2
      bond-miimon 100
      bond-mode 802.3ad
      bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----

Disabling IPv6 on the Node
~~~~~~~~~~~~~~~~~~~~~~~~~~

{pve} works correctly in all environments, irrespective of whether IPv6 is
deployed or not. We recommend leaving all settings at the provided defaults.

Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate `sysctl.conf(5)` snippet file and setting the proper
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:

----
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
----
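
The snippet can be staged and then loaded without a reboot via `sysctl -p`. A
sketch using a `/tmp` path so it does not change the running system; the real
target is `/etc/sysctl.d/disable-ipv6.conf`:

```shell
# Stage the snippet in /tmp for demonstration; real path: /etc/sysctl.d/.
f=/tmp/demo-disable-ipv6.conf
printf '%s\n' \
    'net.ipv6.conf.all.disable_ipv6 = 1' \
    'net.ipv6.conf.default.disable_ipv6 = 1' > "$f"
cat "$f"
# on a real node: sysctl -p /etc/sysctl.d/disable-ipv6.conf
```
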

This method is preferred to disabling the loading of the IPv6 module on the
https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel command line].

Disabling MAC Learning on a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, MAC learning is enabled on a bridge to ensure a smooth experience
with virtual guests and their networks.

But in some environments this can be undesired. Since {pve} 7.3 you can disable
MAC learning on the bridge by setting the `bridge-disable-mac-learning 1`
configuration on a bridge in `/etc/network/interfaces`, for example:

----
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports ens18
        bridge-stp off
        bridge-fd 0
        bridge-disable-mac-learning 1
----

Once enabled, {pve} will manually add the configured MAC address from VMs and
Containers to the bridge's forwarding database, to ensure that guests can still
use the network - but only when they are using their actual MAC address.

// TODO: explain IPv6 support?