[pve-docs.git] / pve-network.adoc
Network Configuration
---------------------
include::attributes.txt[]

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if
virtual network cables from each guest were all plugged into the same
switch. But you can also create multiple bridges to separate network
domains.

For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.
Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.

NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.

It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to preserve such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.

Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
  separated by a period (`eth0.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
names imply the device type.
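
As an illustration of the VLAN naming scheme, a tagged subinterface is
configured in `/etc/network/interfaces` like any other device. This is
only a sketch; the VLAN tag `50` and the addresses are placeholders:

----
auto eth0.50
iface eth0.50 inet static
        address 192.168.50.2
        netmask 255.255.255.0
----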

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which
is connected to the first ethernet card `eth0`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.


Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----
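
With this routed setup, guests attached to `vmbr0` use addresses from
the VM block and the host's bridge address as their gateway. Inside a
Debian-based guest, the configuration might look like the following
sketch (the guest address `10.10.10.10` is a placeholder):

----
auto eth0
iface eth0 inet static
        address 10.10.10.10
        netmask 255.255.255.0
        gateway 10.10.10.1
----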


Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eth0
#real IP address
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
----


Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It makes it
possible to achieve different goals, like making the network
fault-tolerant, increasing performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will failover to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and increase the
speed of data replication between Proxmox VE Cluster nodes.
There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus
receive load balancing (rlb) for IPV4 traffic, and does not require
any special network switch support. The receive load balancing is
achieved by ARP negotiation. The bonding driver intercepts the ARP
Replies sent by the local system on their way out and overwrites the
source hardware address with the unique hardware address of one of
the NIC slaves in the single logical bonded interface, such that
different network-peers use different MAC addresses for their network
packet traffic.
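
The balance-xor selection rule above can be sketched in a few lines of
Python. This is a simplified illustration of the default layer2 hash
policy (using only the last octet of each MAC address); the kernel also
offers `layer2+3` and `layer3+4` variants via `bond_xmit_hash_policy`:

```python
def xor_hash_slave(src_mac: str, dst_mac: str, slave_count: int) -> int:
    """Pick the transmitting slave: (src MAC XOR dst MAC) modulo slave count.

    Simplified sketch of the layer2 policy, hashing only the last octet
    of each MAC address.
    """
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % slave_count

# The same source/destination pair always maps to the same slave,
# which is why balance-xor keeps each conversation on one NIC.
slave = xor_hash_slave("52:54:00:00:00:01", "52:54:00:00:00:02", 2)
```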

For most setups, active-backup is the best choice; if your switch
supports LACP ("IEEE 802.3ad"), that mode should be preferred.

The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network is fault-tolerant.
.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

----


Another possibility is to use the bond directly as a bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

----
////
TODO: explain IPv6 support?
TODO: explain OVS
////