[[sysadmin_network_configuration]]
Network Configuration
---------------------

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches, implemented in
software. All VMs can share a single bridge, as if virtual network
cables from each guest were all plugged into the same switch. But you
can also create multiple bridges to separate network domains.

For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as ``link
aggregation''. That way it is possible to build complex and flexible
virtual networks.
Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man
interfaces`) for a complete format description.
NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.
It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to keep such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.
Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:
* New Ethernet devices: en*, systemd network interface names.

* Legacy Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)
These names are used when Proxmox VE was upgraded from an earlier
version.

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.
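Because VLAN names follow the fixed `<device>.<number>` pattern above, a
script can split them with plain shell parameter expansion. A small
illustrative sketch (the name `eno1.50` is just an example):

```shell
# Split a VLAN interface name into its parent device and VLAN tag.
dev="eno1.50"
parent="${dev%.*}"   # everything before the last period -> parent device
vlan="${dev##*.}"    # everything after the last period  -> VLAN number
echo "parent=$parent vlan=$vlan"
```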
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd uses the two-character prefix 'en' for Ethernet network
devices. The next characters depend on the device driver and on which
schema matches first.
* o<index>[n<phys_port_name>|d<dev_port>] — devices on board

* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id

* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id

* x<MAC> — device by MAC address
The most common patterns are:

* eno1 — the first on-board NIC

* enp3s0f1 — the NIC on PCI bus 3, slot 0, using NIC function 1.

For more information see https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[Predictable Network Interface Names].
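The bus-id schema above can be decoded mechanically. The sketch below
picks apart an `enp3s0f1`-style name with plain string handling; it is
purely illustrative and does not query the system:

```shell
# Decode a predictable name of the form enp<bus>s<slot>f<function>.
name="enp3s0f1"
rest="${name#enp}"   # strip the "enp" prefix        -> "3s0f1"
bus="${rest%%s*}"    # text before the "s" separator -> PCI bus "3"
rest="${rest#*s}"    # drop through the "s"          -> "0f1"
slot="${rest%%f*}"   # text before the "f" separator -> slot "0"
func="${rest#*f}"    # text after the "f" separator  -> function "1"
echo "bus=$bus slot=$slot function=$func"
```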
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card `eno1`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
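As mentioned above, you can also create additional bridges to separate
network domains. A minimal sketch of a second, guest-only bridge with
no physical uplink (the name `vmbr1` is illustrative; guests attached
to it can only reach each other):

----
auto vmbr1
iface vmbr1 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----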
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure, because you need to register a MAC for each of your VMs.
You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----
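The `255.255.255.0` netmask used for the VM block corresponds to a /24
prefix. As an illustrative aside, the conversion is just counting the
set bits per octet:

```shell
# Convert a dotted-quad netmask to its CIDR prefix length.
mask="255.255.255.0"
prefix=0
IFS=.
for octet in $mask; do
  n=$octet
  while [ "$n" -gt 0 ]; do          # count the set bits of this octet
    prefix=$((prefix + (n & 1)))
    n=$((n >> 1))
  done
done
unset IFS
echo "/$prefix"
```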
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
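The `10.10.10.0/24` source network matched by the masquerade rules is
simply the bridge address ANDed with its netmask. A quick illustrative
check of that arithmetic:

```shell
# Derive the network address from the bridge IP and its netmask.
ip="10.10.10.1"
mask="255.255.255.0"
IFS=. read -r i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
# Bitwise AND of each octet pair gives the network address.
net="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
echo "$net"
```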
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It is possible
to achieve different goals, like making the network fault-tolerant,
increasing the performance, or both together.
High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.
Aggregated links can improve live-migration delays and improve the
speed of replication of data between Proxmox VE Cluster nodes.

There are 7 modes for bonding:
* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPv4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface, such that different
network-peers use different MAC addresses for their network packet
traffic.
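The balance-xor slave selection described above can be sketched
numerically. The example below reduces the hash to the last octet of
each MAC address for simplicity (the octet values and slave count are
illustrative):

```shell
# balance-xor: slave index = (src MAC XOR dst MAC) modulo slave count.
src=0x52        # last octet of an example source MAC
dst=0x57        # last octet of an example destination MAC
slaves=2        # number of NIC slaves in the bond
index=$(( (src ^ dst) % slaves ))
echo "traffic to this destination uses slave $index"
```

Because the hash depends only on the MAC pair, all traffic to a given
destination consistently uses the same slave, as the mode description
states.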
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using
the corresponding bonding mode (802.3ad). Otherwise you should generally use the
active-backup mode. +
// http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html
If you intend to run your cluster network on the bonding interfaces, then you
have to use active-passive mode on the bonding interfaces; other modes are
unsupported.
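For such a cluster network, an active-backup bond could look like the
following sketch (the interface names and the address are illustrative
and must match your environment):

----
auto bond0
iface bond0 inet static
        slaves eno1 eno2
        address 10.10.10.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode active-backup
----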
The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet static
        slaves eno1 eno2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----
// TODO: explain IPv6 support?