[[sysadmin_network_configuration]]
Network Configuration
---------------------
{pve} uses the Linux network stack. This provides a lot of flexibility in
how to set up the network on the {pve} nodes. The configuration can be done
either via the GUI, or by manually editing the file `/etc/network/interfaces`,
which contains the whole network configuration. The `interfaces(5)` manual
page contains the complete format description. All {pve} tools try hard to
preserve direct user modifications, but using the GUI is still preferable,
because it protects you from errors.
A 'vmbr' interface is needed to connect guests to the underlying physical
network. It is a Linux bridge, which can be thought of as a virtual switch
to which the guests and physical interfaces are connected. This section
provides some examples on how the network can be set up to accommodate different
use cases, like redundancy with a xref:sysadmin_network_bond['bond'],
xref:sysadmin_network_vlan['vlans'] or
xref:sysadmin_network_routed['routed'] and
xref:sysadmin_network_masquerading['NAT'] setups.
The xref:chapter_pvesdn[Software Defined Network] is an option for more complex
virtual networks in {pve} clusters.
WARNING: It's discouraged to use the traditional Debian tools `ifup` and `ifdown`
if unsure, as they have some pitfalls, like interrupting all guest traffic on
`ifdown vmbrX`, but not reconnecting those guests again when doing `ifup` on the
same bridge later.
Apply Network Changes
~~~~~~~~~~~~~~~~~~~~~

{pve} does not write changes directly to `/etc/network/interfaces`. Instead, it
writes into a temporary file called `/etc/network/interfaces.new`; this way you
can do many related changes at once. This also allows you to ensure your changes
are correct before applying them, as a wrong network configuration may render a
node inaccessible.
Live-Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
With the recommended 'ifupdown2' package (default for new installations since
{pve} 7.0), it is possible to apply network configuration changes without a
reboot. If you change the network configuration via the GUI, you can click the
'Apply Configuration' button. This will move changes from the staging
`interfaces.new` file to `/etc/network/interfaces` and apply them live.
If you made manual changes directly to the `/etc/network/interfaces` file, you
can apply them by running `ifreload -a`.
NOTE: If you installed {pve} on top of Debian, or upgraded to {pve} 7.0 from an
older {pve} installation, make sure 'ifupdown2' is installed: `apt install
ifupdown2`
Reboot Node to Apply
^^^^^^^^^^^^^^^^^^^^

Another way to apply a new network configuration is to reboot the node.
In that case, the systemd service `pvenetcommit` will activate the staging
`interfaces.new` file before the `networking` service applies that
configuration.
Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:
* Ethernet devices: en*, systemd network interface names. This naming scheme is
used for new {pve} installations since version 5.0.

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...). This naming
scheme is used for {pve} hosts which were installed before the 5.0
release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)
This makes it easier to debug network problems, because the device
name implies the device type.
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd uses the two-character prefix 'en' for Ethernet network
devices. The next characters depend on the device driver and on
which schema matches first.
* o<index>[n<phys_port_name>|d<dev_port>] — devices on board

* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id

* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id

* x<MAC> — device by MAC address
The most common patterns are:

* eno1 — the first onboard NIC

* enp3s0f1 — the NIC on PCI bus 3, slot 0, using NIC function 1.
For more information see https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[Predictable Network Interface Names].
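As a sketch of how the bus-id schema composes, the following snippet (a
hypothetical helper, not part of {pve} or systemd) splits a name such as
`enp3s0f1` into its components using plain POSIX parameter expansion:

----
# Split a predictable name of the form enp<bus>s<slot>f<function>
# into its parts. Purely illustrative; real udev naming covers more cases.
name=enp3s0f1
rest=${name#enp}     # strip the 'enp' prefix -> "3s0f1"
bus=${rest%%s*}      # text before the 's'    -> "3"
rest=${rest#*s}      # text after the 's'     -> "0f1"
slot=${rest%%f*}     # text before the 'f'    -> "0"
func=${rest#*f}      # text after the 'f'     -> "1"
echo "bus=$bus slot=$slot function=$func"
# prints: bus=3 slot=0 function=1
----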
Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.
{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.
{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.
{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case, the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.
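As a sketch of such port forwarding, the following `post-up`/`post-down` lines
could be added to the masquerading bridge stanza in `/etc/network/interfaces`.
The guest address `10.10.10.10`, the external port `2222`, and the public
interface name `eno1` are hypothetical placeholders, assuming a NAT setup like
the masquerading example later in this section:

----
# Forward TCP port 2222 on the host's public interface to SSH (port 22)
# on a guest behind the NAT bridge. Addresses/ports are placeholders.
post-up   iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
----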
For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.
The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----
Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
[[sysadmin_network_routed]]
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.
TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs.
You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.
[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/28`). We recommend the following setup for such
situations:
----
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
----
[[sysadmin_network_masquerading]]
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Masquerading allows guests that have only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.
----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 198.51.100.5/24
        gateway 198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
NOTE: In some masquerade setups with the firewall enabled, conntrack zones might
be needed for outgoing connections. Otherwise the firewall could block outgoing
connections, since they will prefer the `POSTROUTING` of the VM bridge (and not
`MASQUERADE`).

Adding these lines to `/etc/network/interfaces` can fix this problem:
----
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
----
For more information about this, refer to the following links:

https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]

https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]

https://web.archive.org/web/20220610151210/https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]
[[sysadmin_network_bond]]
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It makes it
possible to achieve different goals, like making the network
fault-tolerant, increasing performance, or both together.
High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will failover to one
cable or the other in case of network trouble.

Aggregated links can improve live-migration delays and improve the
speed of replication of data between Proxmox VE Cluster nodes.
There are 7 modes for bonding:
* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPv4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface, such that different
network peers use different MAC addresses for their network packet
traffic.
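To make the XOR mode's slave selection concrete, here is a toy calculation.
It is illustrative only: the octet values are made up, and the real layer2
hash involves the full MAC addresses, but the final modulo step is the same:

----
# Simplified illustration of balance-xor slave selection:
# XOR (made-up) last octets of source and destination MAC addresses,
# then take the result modulo the number of bond slaves.
src=0x3A      # last octet of source MAC (hypothetical)
dst=0x5F      # last octet of destination MAC (hypothetical)
slaves=2      # number of NICs in the bond
echo "slave index: $(( (src ^ dst) % slaves ))"
# prints: slave index: 1
----

Because the hash depends only on the MAC addresses, all traffic between one
pair of hosts always uses the same slave NIC.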
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using
the corresponding bonding mode (802.3ad). Otherwise you should generally use the
active-backup mode.
For the cluster network (Corosync) we recommend configuring it with multiple
networks. Corosync does not need a bond for network redundancy, as it can switch
between networks by itself, if one becomes unusable.
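As a sketch of such a multi-network setup (node name and addresses are
placeholders), a node entry in `/etc/pve/corosync.conf` with two links might
look like:

----
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.0.10
    ring1_addr: 10.10.20.10
  }
----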
The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.
.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2/24
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
----
[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.
.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----
[[sysadmin_network_vlan]]
Linux VLAN (802.1q)
~~~~~~~~~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the other ones.

Each VLAN network is identified by a number often called 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.
VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:
* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with an associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and can not be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.
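For the VLAN-aware bridge case, for example, the tag is simply part of the
guest's virtual NIC definition. A sketch of the relevant line in a VM
configuration file (the VM ID and MAC address are placeholders):

----
# /etc/pve/qemu-server/100.conf (excerpt)
net0: virtio=BC:24:11:AA:BB:CC,bridge=vmbr0,tag=5
----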
VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, Bond, Bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.
For example, in a default configuration, you may want to place
the host management address on a separate VLAN.
.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----
.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
----
The next example is the same setup, but a bond is used to
make this network fail-safe.
.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----
Disabling IPv6 on the Node
~~~~~~~~~~~~~~~~~~~~~~~~~~

{pve} works correctly in all environments, irrespective of whether IPv6 is
deployed or not. We recommend leaving all settings at the provided defaults.
Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate `sysctl.conf(5)` snippet file and setting the proper
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:
----
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
----
This method is preferred to disabling the loading of the IPv6 module on the
https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel commandline].
Disabling MAC Learning on a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, MAC learning is enabled on a bridge to ensure a smooth experience
with virtual guests and their networks.
But in some environments this can be undesired. Since {pve} 7.3 you can disable
MAC learning on the bridge by setting the `bridge-disable-mac-learning 1`
configuration on a bridge in `/etc/network/interfaces`, for example:
----
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports ens18
        bridge-stp off
        bridge-fd 0
        bridge-disable-mac-learning 1
----
Once enabled, {pve} will manually add the configured MAC address from VMs and
Containers to the bridge's forwarding database, to ensure that guests can still
use the network - but only when they are using their actual MAC address.