Network Configuration
---------------------
include::attributes.txt[]

ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if
virtual network cables from each guest were all plugged into the same
switch. But you can also create multiple bridges to separate network
domains.

For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.

NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.

It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to preserve such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.

Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
  separated by a period (`eth0.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

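As a sketch of the VLAN convention, a VLAN interface such as `eth0.50`
could be configured in `/etc/network/interfaces` like this. The address
is just an example, and this assumes the Debian `vlan` package is
installed:

----
auto eth0.50
iface eth0.50 inet static
        address 192.168.50.2
        netmask 255.255.255.0
----
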
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card `eth0`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.

Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----

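Inside a guest attached to `vmbr0` in this routed setup, you would
then use an address from the VM block, with the bridge address as
gateway. A minimal sketch for a Debian-style guest, where the guest
address `10.10.10.10` is just an example:

----
auto eth0
iface eth0 inet static
        address 10.10.10.10
        netmask 255.255.255.0
        gateway 10.10.10.1
----
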
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eth0
#real IP address
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
----

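Note that the `post-up` hook above only enables IP forwarding when
`vmbr0` is brought up. If you prefer to enable forwarding regardless
of ifupdown, a common alternative is to set it in `/etc/sysctl.conf`
instead:

----
net.ipv4.ip_forward=1
----

After editing the file, running `sysctl -p` applies the setting
without a reboot.
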

Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs into a single network device. It can be used
to achieve different goals, such as making the network fault-tolerant,
increasing performance, or both.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between Proxmox VE Cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid confusing the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus
receive load balancing (rlb) for IPv4 traffic, and does not require
any special network switch support. The receive load balancing is
achieved by ARP negotiation. The bonding driver intercepts the ARP
Replies sent by the local system on their way out and overwrites the
source hardware address with the unique hardware address of one of
the NIC slaves in the single logical bonded interface, such that
different network peers use different MAC addresses for their network
packet traffic.

For most setups, active-backup is the best choice. If your switch
supports LACP ("IEEE 802.3ad"), that mode should be preferred.

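The balance-xor selection rule above can be illustrated with a small
arithmetic sketch. The MAC bytes below are made up for illustration,
and the real kernel hash uses the full addresses (and, with a
`layer2+3` transmit hash policy, the IP addresses as well):

```shell
# Simplified balance-xor slave selection:
# index = (src MAC byte XOR dst MAC byte) modulo number of slaves
src=0x0a     # hypothetical last byte of the source MAC
dst=0x0f     # hypothetical last byte of the destination MAC
slaves=2     # e.g. eth1 and eth2 in a bond

# 0x0a XOR 0x0f = 0x05, and 5 modulo 2 = 1, so slave index 1 is chosen
echo $(( (src ^ dst) % slaves ))
```

Because the hash is deterministic, all traffic to a given destination
MAC always leaves through the same slave.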
The following bond configuration can be used for a distributed/shared
storage network. The benefit is that you get more speed and the
network is fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----


Another possibility is to use the bond directly as bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

////
TODO: explain IPv6 support?
TODO: explain OVS
////