Network Configuration
=====================
include::attributes.txt[]

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if virtual network
cables from each guest were all plugged into the same switch. But you
can also create multiple bridges to separate network domains.

For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.

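For example, to re-apply the configuration of a single interface after
editing the file (assuming a bridge named `vmbr0`; be careful when you
are connected over the very interface you take down):

----
ifdown vmbr0
ifup vmbr0
----
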
NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.

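To review the pending changes before rebooting, you can simply compare
the two files:

----
# show what will be committed on the next reboot
diff -u /etc/network/interfaces /etc/network/interfaces.new
----
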
It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to preserve such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.

Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
  separated by a period (`eth0.50`, `bond1.30`; see the sketch below)

This makes it easier to debug network problems, because the device
name implies the device type.

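As a minimal sketch of the VLAN naming scheme in
`/etc/network/interfaces` (the VLAN tag and the address are only
examples), a VLAN interface on `eth0` could be configured like this:

----
# hypothetical example: VLAN 50 on the first ethernet card
auto eth0.50
iface eth0.50 inet static
        address 192.168.50.2
        netmask 255.255.255.0
----
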
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card `eth0`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.

Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----

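A guest would then use an address from the `10.10.10.0/24` block, with
the bridge address as gateway. As a hypothetical guest-side sketch:

----
# inside the VM (example values)
auto eth0
iface eth0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
----
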
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eth0
#real IP address
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
----

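Once `vmbr0` is up, one quick way to verify that the masquerading rule
is active is to list the `POSTROUTING` chain:

----
# prints the rules in the same syntax used above
iptables -t nat -S POSTROUTING
----
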
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It can be used
to achieve different goals, like making the network fault-tolerant,
increasing the performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between {pve} Cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus
receive load balancing (rlb) for IPv4 traffic, and does not require
any special network switch support. The receive load balancing is
achieved by ARP negotiation. The bonding driver intercepts the ARP
Replies sent by the local system on their way out and overwrites the
source hardware address with the unique hardware address of one of
the NIC slaves in the single logical bonded interface, such that
different network peers use different MAC addresses for their network
packet traffic.

For most setups, active-backup is the best choice; if your switch
supports LACP ("IEEE 802.3ad"), that mode should be preferred.

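As a minimal sketch of such an active-backup bond (interface names as
in the examples below; no special switch support required):

----
auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode active-backup
----
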
The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----

Another possibility is to use the bond directly as a bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

////
TODO: explain IPv6 support?
////