[[sysadmin_network_configuration]]
Network configuration can be done either via the GUI, or by manually
editing the file `/etc/network/interfaces`, which contains the
whole network configuration. The `interfaces(5)` manual page contains the
complete format description. All {pve} tools try hard to preserve direct
user modifications, but using the GUI is still preferable, because it
protects you from errors.

WARNING: It is discouraged to use the traditional Debian tools `ifup` and
`ifdown` if you are unsure, as they have some pitfalls, like interrupting all
guest traffic on `ifdown vmbrX`, but not reconnecting those guests again when
doing `ifup` on the bridge later.

{pve} does not write changes directly to `/etc/network/interfaces`. Instead, we
write into a temporary file called `/etc/network/interfaces.new`. This way you
can make many related changes at once. It also allows you to ensure that your
changes are correct before applying them, as a wrong network configuration may
render a node inaccessible.

One way to apply a new network configuration is to reboot the node.


Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

With the 'ifupdown2' package (default since {pve} 7), it is possible to apply
network configuration changes without a reboot. If you change the network
configuration via the GUI, you can click the 'Apply Configuration' button. Run
the following command if you make changes directly to the
`/etc/network/interfaces` file:

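
With 'ifupdown2', that command is `ifreload` (run as root):

```shell
# Apply all pending changes from /etc/network/interfaces without a reboot.
ifreload -a
```
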
NOTE: If you installed {pve} on top of Debian, make sure 'ifupdown2' is
installed: 'apt install ifupdown2'

We currently use the following naming conventions for device names:

* Ethernet devices: en*, systemd network interface names. This naming scheme is
used for new {pve} installations since version 5.0.

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...). This naming
scheme is used for {pve} hosts which were installed before the 5.0
release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

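
To see which names are in use on a given host, you can simply list the kernel's
view of the interfaces (a quick check, not {pve}-specific):

```shell
# Each entry is one network interface; 'lo' is always present, physical
# NICs appear as eno1/enp3s0f0/..., bridges as vmbrN, bonds as bondN.
ls /sys/class/net
```
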

Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd uses the two-character prefix 'en' for Ethernet network
devices. The next characters depend on the device driver and on
which schema matches first.

* o<index>[n<phys_port_name>|d<dev_port>] — devices on board

* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id

* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id

* x<MAC> — device by MAC address

The most common patterns are:

* eno1 — the first on-board NIC

* enp3s0f1 — the NIC on PCI bus 3, slot 0, using NIC function 1

For more information, see https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[Predictable Network Interface Names].

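
As an illustration of the bus-id schema, a name like `enp3s0f1` can be decoded
with a small shell regex (a hypothetical helper, not part of {pve} or systemd):

```shell
# Decode a predictable name of the form enp<bus>s<slot>f<function>.
name=enp3s0f1
if [[ $name =~ ^enp([0-9]+)s([0-9]+)f([0-9]+)$ ]]; then
    echo "bus=${BASH_REMATCH[1]} slot=${BASH_REMATCH[2]} function=${BASH_REMATCH[3]}"
fi
# prints: bus=3 slot=0 function=1
```
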

Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.


{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.


{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.


{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case, the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.

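
A port forward is typically a DNAT rule on the host. For example, to reach SSH
on a guest with the private address 10.10.10.10 through host port 2222 (the
interface name and addresses here are illustrative assumptions, adjust them to
your setup):

```shell
# Forward TCP port 2222 on the host's public interface (assumed: eno1)
# to port 22 of the guest at 10.10.10.10. Requires root.
iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 \
    -j DNAT --to-destination 10.10.10.10:22
```
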
For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.


Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        bridge-ports eno1
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure, because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.


[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/28`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
----

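
You can check whether forwarding is in effect by reading the sysctl back (the
value is 0 or 1):

```shell
# 1 means the host forwards IPv4 packets between interfaces.
cat /proc/sys/net/ipv4/ip_forward
```
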

Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masquerading allows guests that have only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

----
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 198.51.100.5/24

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----

NOTE: In some masquerade setups with firewall enabled, conntrack zones might be
needed for outgoing connections. Otherwise the firewall could block outgoing
connections, since they will prefer the `POSTROUTING` of the VM bridge (and not
`MASQUERADE`).

Adding these lines to `/etc/network/interfaces` can fix this problem:

----
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
----

For more information about this, refer to the following links:

https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]

https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]

https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It can be used to
achieve different goals, such as making the network fault-tolerant,
increasing performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between Proxmox VE Cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group, according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPv4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface, such that different
network peers use different MAC addresses for their network packet
traffic.

If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using
the corresponding bonding mode (802.3ad). Otherwise you should generally use the
active-backup mode. +
// http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html
If you intend to run your cluster network on the bonding interfaces, then you
have to use active-passive mode on the bonding interfaces; other modes are
unsupported.

The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2/24
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        bridge-ports eno3
----

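
Once a bond is up, its state can be inspected through procfs (the interface
name `bond0` matches the example above; adjust it for your setup):

```shell
# Shows the bonding mode, slave interfaces, and per-slave link status.
cat /proc/net/bonding/bond0
```
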

[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        bridge-ports bond0
----

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the other ones.

Each VLAN network is identified by a number often called 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.


VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:

* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with an associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces, eno1.5 and vmbr0v5, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and can not be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, bond, bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.

For example, consider a default configuration where you want to place
the host management address on a separate VLAN.


.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        bridge-ports eno1.5

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
----

.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-vlan-aware yes
----

The next example is the same setup, but a bond is used to
make this network fail-safe.

.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        bridge-ports bond0.5

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
----


Disabling IPv6 on the Node
~~~~~~~~~~~~~~~~~~~~~~~~~~

{pve} works correctly in all environments, irrespective of whether IPv6 is
deployed or not. We recommend leaving all settings at the provided defaults.

Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate `sysctl.conf (5)` snippet file and setting the proper
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:

----
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
----

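
To apply the snippet without a reboot, the standard `sysctl` tool can be used
(run as root; `--system` loads all configured snippet files):

```shell
# Re-read /etc/sysctl.d/*.conf and friends, applying the new settings.
sysctl --system
```
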
This method is preferred to disabling the loading of the IPv6 module on the
https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel commandline].

// TODO: explain IPv6 support?