1 [[sysadmin_network_configuration]]
8 Network configuration can be done either via the GUI, or by manually
9 editing the file `/etc/network/interfaces`, which contains the
10 whole network configuration. The `interfaces(5)` manual page contains the
complete format description. All {pve} tools try hard to preserve direct
user modifications, but using the GUI is still preferable, because it
protects you from errors.
Once the network is configured, you can use the traditional Debian tools `ifup`
and `ifdown` to bring interfaces up and down.
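For example, after changing the configuration of the default bridge `vmbr0`,
you could cycle just that interface (note that doing this over SSH via the
management interface will interrupt your connection):

----
# ifdown vmbr0
# ifup vmbr0
----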
NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.
Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:
28 * Ethernet devices: en*, systemd network interface names. This naming scheme is
29 used for new {pve} installations since version 5.0.
31 * Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...) This naming
32 scheme is used for {pve} hosts which were installed before the 5.0
33 release. When upgrading to 5.0, the names are kept as-is.
35 * Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)
37 * Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)
39 * VLANs: Simply add the VLAN number to the device name,
40 separated by a period (`eno1.50`, `bond1.30`)
This makes it easier to debug network problems, because the device
name implies the device type.
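To see which device names are in use on a particular host, you can simply list
the interfaces, for example with `ip` from the iproute2 package (the devices
and MAC addresses below are only illustrative):

----
# ip -br link show
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
eno1             UP             aa:bb:cc:dd:ee:ff <BROADCAST,MULTICAST,UP,LOWER_UP>
vmbr0            UP             aa:bb:cc:dd:ee:ff <BROADCAST,MULTICAST,UP,LOWER_UP>
----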
45 Systemd Network Interface Names
46 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Systemd uses the two-character prefix 'en' for Ethernet network
devices. The following characters depend on the device driver and on
which schema matches first.
52 * o<index>[n<phys_port_name>|d<dev_port>] — devices on board
54 * s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id
56 * [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id
58 * x<MAC> — device by MAC address
60 The most common patterns are:
* eno1 — is the first on-board NIC

* enp3s0f1 — is the NIC on PCI bus 3, slot 0, and uses NIC function 1.
66 For more information see https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[Predictable Network Interface Names].
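To check which of these schemes can apply to a specific device, you can ask the
systemd naming helper directly; the device path below is just an example, and
the reported `ID_NET_NAME_*` properties will differ per machine:

----
# udevadm test-builtin net_id /sys/class/net/eno1 2>/dev/null
ID_NET_NAME_ONBOARD=eno1
ID_NET_NAME_PATH=enp0s25
----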
68 Choosing a network configuration
69 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
71 Depending on your current network organization and your resources you can
72 choose either a bridged, routed, or masquerading networking setup.
74 {pve} server in a private LAN, using an external gateway to reach the internet
75 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
77 The *Bridged* model makes the most sense in this case, and this is also
78 the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing the role
of the switch.
84 {pve} server at hosting provider, with public IP ranges for Guests
85 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
87 For this setup, you can use either a *Bridged* or *Routed* model, depending on
88 what your provider allows.
90 {pve} server at hosting provider, with a single public IP address
91 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In that case, the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.
For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as ``link
aggregation''. That way it is possible to build complex and flexible
virtual networks.
102 Default Configuration using a Bridge
103 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
105 Bridges are like physical network switches implemented in software.
106 All VMs can share a single bridge, or you can create multiple bridges to
107 separate network domains. Each host can have up to 4094 bridges.
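For instance, a bridge without any physical port attached gives you an
isolated network that only the guests on this host can reach. A minimal
sketch, assuming the name `vmbr1` is still free, could look like this:

----
auto vmbr1
iface vmbr1 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----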
109 The installation program creates a single bridge named `vmbr0`, which
110 is connected to the first Ethernet card. The corresponding
111 configuration in `/etc/network/interfaces` might look like this:
115 iface lo inet loopback
117 iface eno1 inet manual
120 iface vmbr0 inet static
122 netmask 255.255.255.0
129 Virtual machines behave as if they were directly connected to the
130 physical network. The network, in turn, sees each virtual machine as
131 having its own MAC, even though there is only one network cable
132 connecting all of these VMs to the network.
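You can inspect the bridge ports and the MAC addresses the bridge has learned
with the standard `bridge` tool from iproute2 (using the default bridge name
`vmbr0`; the output is host-specific):

----
# bridge link show
# bridge fdb show br vmbr0
----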
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.
TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.
You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.
149 A common scenario is that you have a public IP (assume `198.51.100.5`
150 for this example), and an additional IP block for your VMs
151 (`203.0.113.16/29`). We recommend the following setup for such
156 iface lo inet loopback
159 iface eno1 inet static
161 netmask 255.255.255.0
163 post-up echo 1 > /proc/sys/net/ipv4/ip_forward
164 post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp
168 iface vmbr0 inet static
170 netmask 255.255.255.248
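The two `post-up` lines enable IP forwarding and proxy ARP on `eno1`. You can
verify at runtime that both are active, for example with `sysctl`:

----
# sysctl net.ipv4.ip_forward net.ipv4.conf.eno1.proxy_arp
net.ipv4.ip_forward = 1
net.ipv4.conf.eno1.proxy_arp = 1
----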
177 Masquerading (NAT) with `iptables`
178 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
180 Masquerading allows guests having only a private IP address to access the
181 network by using the host IP address for outgoing traffic. Each outgoing
182 packet is rewritten by `iptables` to appear as originating from the host,
183 and responses are rewritten accordingly to be routed to the original sender.
187 iface lo inet loopback
191 iface eno1 inet static
193 netmask 255.255.255.0
198 iface vmbr0 inet static
200 netmask 255.255.255.0
205 post-up echo 1 > /proc/sys/net/ipv4/ip_forward
206 post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
207 post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
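Incoming connections additionally need *Port Forwarding* rules, which are not
part of the example above. As a rough sketch, assuming a guest with the
address `10.10.10.10` should receive SSH connections via host port 2222, you
could add:

----
        post-up   iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
        post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
----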
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It can be used to
achieve different goals, such as making the network fault-tolerant,
increasing performance, or both.
219 High-speed hardware like Fibre Channel and the associated switching
220 hardware can be quite expensive. By doing link aggregation, two NICs
221 can appear as one logical interface, resulting in double speed. This
222 is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.
Aggregated links can reduce live-migration delays and improve the
speed of data replication between {pve} cluster nodes.
231 There are 7 modes for bonding:
233 * *Round-robin (balance-rr):* Transmit network packets in sequential
234 order from the first available network interface (NIC) slave through
235 the last. This mode provides load balancing and fault tolerance.
237 * *Active-backup (active-backup):* Only one NIC slave in the bond is
238 active. A different slave becomes active if, and only if, the active
239 slave fails. The single logical bonded interface's MAC address is
240 externally visible on only one NIC (port) to avoid distortion in the
241 network switch. This mode provides fault tolerance.
243 * *XOR (balance-xor):* Transmit network packets based on [(source MAC
244 address XOR'd with destination MAC address) modulo NIC slave
245 count]. This selects the same NIC slave for each destination MAC
246 address. This mode provides load balancing and fault tolerance.
248 * *Broadcast (broadcast):* Transmit network packets on all slave
249 network interfaces. This mode provides fault tolerance.
251 * *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
252 aggregation groups that share the same speed and duplex
253 settings. Utilizes all slave network interfaces in the active
254 aggregator group according to the 802.3ad specification.
256 * *Adaptive transmit load balancing (balance-tlb):* Linux bonding
257 driver mode that does not require any special network-switch
258 support. The outgoing network packet traffic is distributed according
259 to the current load (computed relative to the speed) on each network
260 interface slave. Incoming traffic is received by one currently
261 designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.
265 * *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
266 load balancing (rlb) for IPV4 traffic, and does not require any
267 special network switch support. The receive load balancing is achieved
268 by ARP negotiation. The bonding driver intercepts the ARP Replies sent
269 by the local system on their way out and overwrites the source
270 hardware address with the unique hardware address of one of the NIC
271 slaves in the single logical bonded interface such that different
network peers use different MAC addresses for their network packet
traffic.
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using
the corresponding bonding mode (802.3ad). Otherwise you should generally use the
active-backup mode. +
278 // http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html
If you intend to run your cluster network on the bonding interfaces, then you
have to use active-backup mode on the bonding interfaces; other modes are
unsupported.
The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.
287 .Example: Use bond with fixed IP address
290 iface lo inet loopback
292 iface eno1 inet manual
294 iface eno2 inet manual
297 iface bond0 inet static
300 netmask 255.255.255.0
303 bond_xmit_hash_policy layer2+3
306 iface vmbr0 inet static
308 netmask 255.255.255.0
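Once the interfaces are up, the kernel bonding driver reports the mode and
slave status under `/proc`, which is a convenient way to verify the bond
(output abbreviated and host-specific):

----
# cat /proc/net/bonding/bond0
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
...
----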
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.
320 .Example: Use a bond as bridge port
323 iface lo inet loopback
325 iface eno1 inet manual
327 iface eno2 inet manual
330 iface bond0 inet manual
334 bond_xmit_hash_policy layer2+3
337 iface vmbr0 inet static
339 netmask 255.255.255.0
VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the others.

Each VLAN network is identified by a number often called 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.
361 VLAN for Guest Networks
362 ^^^^^^^^^^^^^^^^^^^^^^^
{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:
369 * *VLAN awareness on the Linux bridge:*
370 In this case, each guest's virtual network card is assigned to a VLAN tag,
371 which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that requires configuration
inside the guest; a sketch of a VLAN-aware bridge follows after this list.
375 * *"traditional" VLAN on the Linux bridge:*
376 In contrast to the VLAN awareness method, this method is not transparent
377 and creates a VLAN device with associated bridge for each VLAN.
For example, if a guest in our default network uses VLAN 5, the devices
eno1.5 and vmbr0v5 are created, and they remain until the host reboots.
381 * *Open vSwitch VLAN:*
382 This mode uses the OVS VLAN feature.
384 * *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and cannot be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.
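For the first mode (VLAN awareness on the Linux bridge), the bridge itself is
marked as VLAN aware and the guest's VLAN tag is simply set on its virtual
NIC. A minimal sketch, assuming the default bridge `vmbr0` on `eno1` and the
`bridge_vlan_aware` option understood by {pve}, could look like this:

----
auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes
----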
VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, bond, bridge). In
general, you should configure the VLAN on the interface with the fewest
abstraction layers between itself and the physical NIC.
For example, in a default configuration, you may want to place
the host management address on a separate VLAN.
NOTE: In these examples we apply the VLAN at the bridge level, to ensure the
correct function of VLAN 5 in the guest network; in combination with a
VLAN-aware bridge, this would not work for guest network VLAN 5.
The downside of this setup is more CPU usage.
407 .Example: Use VLAN 5 for the {pve} management IP
410 iface lo inet loopback
412 iface eno1 inet manual
414 iface eno1.5 inet manual
417 iface vmbr0v5 inet static
419 netmask 255.255.255.0
426 iface vmbr0 inet manual
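After applying the configuration, you can verify that the VLAN device carries
the expected tag (device name as in the example above, output host-specific):

----
# ip -d link show eno1.5
----

The output should contain a line with `vlan protocol 802.1Q id 5`.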
The next example is the same setup, but it uses a bond to
make this network fail-safe.
436 .Example: Use VLAN 5 with bond0 for the {pve} management IP
439 iface lo inet loopback
441 iface eno1 inet manual
443 iface eno2 inet manual
446 iface bond0 inet manual
450 bond_xmit_hash_policy layer2+3
452 iface bond0.5 inet manual
455 iface vmbr0v5 inet static
457 netmask 255.255.255.0
464 iface vmbr0 inet manual
// TODO: explain IPv6 support?