[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if
virtual network cables from each guest were all plugged into the same
switch. But you can also create multiple bridges to separate network
domains.

For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.

NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.

It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to keep such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.


Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
  separated by a period (`eth0.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

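As an illustration of the VLAN naming scheme, a tagged interface can
be defined directly in `/etc/network/interfaces`. The VLAN tag `50`
and the addresses below are made up for this example:

----
auto eth0.50
iface eth0.50 inet static
        address 10.0.50.2
        netmask 255.255.255.0
----
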
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which
is connected to the first ethernet card `eth0`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.


Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----


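In this routed setup, a guest attached to `vmbr0` uses the bridge
address as its gateway. A guest's own network configuration could then
look like this (the guest address `10.10.10.2` is illustrative,
following the address block above):

----
auto eth0
iface eth0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
----
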
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eth0
#real IP address
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
----


Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It can be used
to achieve different goals, like making the network fault-tolerant,
increasing performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches and the bonded connection will failover to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between Proxmox VE Cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid confusing the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus
receive load balancing (rlb) for IPV4 traffic, and does not require
any special network switch support. The receive load balancing is
achieved by ARP negotiation. The bonding driver intercepts the ARP
Replies sent by the local system on their way out and overwrites the
source hardware address with the unique hardware address of one of
the NIC slaves in the single logical bonded interface such that
different network-peers use different MAC addresses for their network
packet traffic.

For most setups, active-backup is the best choice. If your switch
supports LACP ("IEEE 802.3ad"), that mode should be preferred.

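An active-backup bond needs no special switch configuration at all. A
minimal sketch of such a bond with a fixed IP address could look like
this (interface names and the address are illustrative):

----
auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode active-backup
----
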
The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

----


Another possibility is to use the bond directly as a bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

----

////
TODO: explain IPv6 support?
TODO: explain OVS
////