[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if
virtual network cables from each guest were all plugged into the same
switch. But you can also create multiple bridges to separate network
domains.

For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.

NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.
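
For example, to review what will be applied on the next reboot, you
can simply inspect the staged file (a minimal check, assuming pending
changes exist):

----
# show network changes staged by {pve}; they are committed on reboot
cat /etc/network/interfaces.new
----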

It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to preserve such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.


Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
  separated by a period (`eth0.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.
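
As an illustration of the VLAN convention, a minimal
`/etc/network/interfaces` stanza for VLAN 50 on `eth0` might look like
this (a sketch; the address is a placeholder):

----
# VLAN 50 tagged on top of eth0, placeholder address
auto eth0.50
iface eth0.50 inet static
        address 192.168.50.2
        netmask 255.255.255.0
----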

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card `eth0`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.


Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----
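
In this setup, guests attached to `vmbr0` use the bridge address as
their gateway. A minimal, hypothetical configuration inside such a
guest could look like this (the guest address `10.10.10.10` is an
assumption for illustration):

----
# inside the guest: routed via the host's bridge address
auto eth0
iface eth0 inet static
        address 10.10.10.10
        netmask 255.255.255.0
        gateway 10.10.10.1
----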


Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eth0
# real IP address
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
# private subnetwork
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
----
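
To verify that the masquerading rule is active after `vmbr0` comes up,
you can list the NAT table (plain `iptables` usage, nothing {pve}
specific):

----
# the MASQUERADE rule for 10.10.10.0/24 should show up here
iptables -t nat -L POSTROUTING -n -v
----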


Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It makes it
possible to achieve different goals, like making the network
fault-tolerant, increasing the performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between {pve} cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad) (LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus
receive load balancing (rlb) for IPv4 traffic, and does not require
any special network switch support. The receive load balancing is
achieved by ARP negotiation. The bonding driver intercepts the ARP
Replies sent by the local system on their way out and overwrites the
source hardware address with the unique hardware address of one of
the NIC slaves in the single logical bonded interface, such that
different network peers use different MAC addresses for their network
packet traffic.

For most setups, active-backup is the best choice. If your switch
supports LACP ("IEEE 802.3ad"), that mode should be preferred.
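
If your switch does not offer LACP, a minimal sketch of the storage
bond below with `bond_mode active-backup` instead could look like this
(same addresses and slaves as in the 802.3ad example that follows):

----
# active-backup needs no switch support; only one slave is active
auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode active-backup
----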

The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network is fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

----


Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

----

////
TODO: explain IPv6 support?
TODO: explain OVS
////