[[sysadmin_network_configuration]]
Network Configuration
---------------------
{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches, implemented in
software. All VMs can share a single bridge, as if
virtual network cables from each guest were all plugged into the same
switch. But you can also create multiple bridges to separate network
domains.
For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.
Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.
NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.
It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to keep such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.
Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:
* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
  separated by a period (`eth0.50`, `bond1.30`)
This makes it easier to debug network problems, because the device
name implies the device type.
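As an illustration of these conventions (a simplified sketch, not a
{pve} utility; real systems may also carry other kernel device names),
the device type can be guessed from the name alone:

```shell
#!/bin/sh
# Classify a network device by its name, following the conventions
# above. Illustrative only; the patterns are not exhaustive.
classify() {
    case "$1" in
        *.*)        echo "vlan" ;;      # eth0.50, bond1.30
        vmbr[0-9]*) echo "bridge" ;;    # vmbr0 - vmbr4094
        bond[0-9]*) echo "bond" ;;      # bond0, bond1, ...
        eth[0-9]*)  echo "ethernet" ;;  # eth0, eth1, ...
        *)          echo "unknown" ;;
    esac
}

classify eth0      # -> ethernet
classify vmbr0     # -> bridge
classify bond1.30  # -> vlan
```

Note that the VLAN pattern must be tested first, since a VLAN device
name such as `eth0.50` would otherwise also match the Ethernet pattern.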
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The installation program creates a single bridge named `vmbr0`, which
is connected to the first ethernet card `eth0`. The corresponding
configuration in `/etc/network/interfaces` looks like this:
----
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----
Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.
TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.
You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.
A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:
----
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address  192.168.10.2
        netmask  255.255.255.0
        gateway  192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address  10.10.10.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----
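The examples in this document write netmasks in dotted-quad form
(`255.255.255.0`), which is equivalent to the CIDR prefix length `/24`.
As a small illustration (not a {pve} tool), the prefix length can be
computed by counting the set bits of a valid, contiguous mask:

```shell
#!/bin/sh
# Convert a dotted-quad netmask to a CIDR prefix length.
# Assumes a valid contiguous mask (e.g. 255.255.255.0, not 255.0.255.0).
mask2cidr() {
    bits=0
    old_ifs=$IFS
    IFS=.
    for octet in $1; do
        case $octet in
            255) bits=$((bits + 8)) ;;
            254) bits=$((bits + 7)) ;;
            252) bits=$((bits + 6)) ;;
            248) bits=$((bits + 5)) ;;
            240) bits=$((bits + 4)) ;;
            224) bits=$((bits + 3)) ;;
            192) bits=$((bits + 2)) ;;
            128) bits=$((bits + 1)) ;;
            0)   ;;
        esac
    done
    IFS=$old_ifs
    echo "$bits"
}

mask2cidr 255.255.255.0   # -> 24
```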
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:
----
auto lo
iface lo inet loopback

auto eth0
#real IP address
iface eth0 inet static
        address  192.168.10.2
        netmask  255.255.255.0
        gateway  192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address  10.10.10.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
----
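The `post-up` line enables IP forwarding only when the interface comes
up. If you prefer to enable forwarding permanently, independently of
the interface lifecycle, you can instead set it in `/etc/sysctl.conf`
(a common alternative, not something {pve} configures for you):

```
net.ipv4.ip_forward = 1
```

Apply the setting without a reboot by running `sysctl -p`.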
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It is possible
to achieve different goals, like making the network fault-tolerant,
increasing performance, or both together.
High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.
Aggregated links can reduce live-migration delays and improve the
speed of data replication between Proxmox VE Cluster nodes.
There are 7 modes for bonding:
* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.
* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid confusing the
network switch. This mode provides fault tolerance.
* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.
* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.
* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.
* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.
* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus
receive load balancing (rlb) for IPV4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface such that different
network-peers use different MAC addresses for their network packet
traffic.
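The slave-selection rule of the balance-xor mode above can be sketched
numerically. The example below is an illustration only: it simplifies
the hash to the last octet of each MAC address, while the kernel's real
layer2 hash also mixes in the packet type, and other
`bond_xmit_hash_policy` settings hash on IP and port information:

```shell
#!/bin/sh
# Sketch of balance-xor slave selection:
#   slave = (source MAC XOR destination MAC) modulo slave-count
# Simplified here to the last octet of each MAC address.
xor_slave() {
    src_last=$1   # last octet of the source MAC, e.g. 0x1a
    dst_last=$2   # last octet of the destination MAC
    nslaves=$3    # number of slaves in the bond
    echo $(( (src_last ^ dst_last) % nslaves ))
}

# Every packet to a given peer hashes to the same slave:
xor_slave 0x1a 0x3c 2   # -> 0
xor_slave 0x1a 0x3d 2   # -> 1
```

This is why balance-xor keeps per-peer packet ordering intact: traffic
to one destination MAC always leaves through the same slave, while
different destinations can be spread across slaves.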
For most setups, active-backup is the best choice. If your switch
supports LACP ("IEEE 802.3ad"), that mode should be preferred.
The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network is fault-tolerant.
.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet static
      slaves eth1 eth2
      address  192.168.1.2
      netmask  255.255.255.0
      bond_miimon 100
      bond_mode 802.3ad
      bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.
.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet manual
      slaves eth1 eth2
      bond_miimon 100
      bond_mode 802.3ad
      bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----
// TODO: explain IPv6 support?