[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if
virtual network cables from each guest were all plugged into the same
switch. But you can also create multiple bridges to separate network
domains.

For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.

NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.

It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to preserve such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.


Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* New Ethernet devices: en*, systemd network interface names.

* Legacy Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...).
These names are still used when Proxmox VE has been upgraded from an
earlier version.

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
names imply the device type.


Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd uses the two-character prefix 'en' for Ethernet network
devices. The next characters depend on the device driver and on
which schema matches first:

* o<index>[n<phys_port_name>|d<dev_port>] — devices on board

* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id

* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id

* x<MAC> — device by MAC address

The most common patterns are:

* eno1 — first on-board NIC

* enp3s0f1 — NIC on PCI bus 3, slot 0, using NIC function 1

For more information see https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[Predictable Network Interface Names].
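
As a concrete illustration of the bus-id schema, the following Python sketch decodes such names with a regular expression. This covers only the `[P<domain>]p<bus>s<slot>[f<function>]` part shown above; the real udev logic handles more suffixes and additional schemas.

```python
import re

# Sketch of the "devices by bus id" schema described above:
# en[P<domain>]p<bus>s<slot>[f<function>]
# (phys_port_name/dev_port suffixes and other schemas are omitted)
BUS_ID = re.compile(
    r"^en(?:P(?P<domain>\d+))?"
    r"p(?P<bus>\d+)s(?P<slot>\d+)"
    r"(?:f(?P<function>\d+))?$"
)

def decode(name):
    """Return the PCI location encoded in a predictable interface name."""
    m = BUS_ID.match(name)
    if not m:
        return None
    return {k: int(v) for k, v in m.groupdict().items() if v is not None}

print(decode("enp3s0f1"))   # {'bus': 3, 'slot': 0, 'function': 1}
print(decode("enp0s31f6"))  # {'bus': 0, 'slot': 31, 'function': 6}
```

An on-board name like `eno1` does not match this particular schema, so `decode` returns `None` for it.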


Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card `eno1`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
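
The `address`/`netmask` pair in the example above is equivalent to the CIDR notation `192.168.10.2/24`. If you are unsure about such conversions, Python's standard `ipaddress` module can check them (a quick illustration only, not part of {pve}):

```python
import ipaddress

# The address/netmask pair from the example above, written both ways.
iface = ipaddress.ip_interface("192.168.10.2/255.255.255.0")

print(iface.with_prefixlen)             # 192.168.10.2/24
print(iface.network)                    # 192.168.10.0/24
print(iface.network.broadcast_address)  # 192.168.10.255
```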


Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----
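
With this configuration the host answers ARP requests for the additional block on `eno1` (that is what enabling `proxy_arp` does) and routes the traffic to the guests on `vmbr0`. As a quick sanity check of the address plan, Python's standard `ipaddress` module can enumerate the addresses left for VMs (example values from above):

```python
import ipaddress

# Example values from the routed setup above.
guests = ipaddress.ip_network("10.10.10.0/24")   # additional IP block on vmbr0
gateway = ipaddress.ip_address("10.10.10.1")     # lives on the bridge itself

# Everything in the block except the gateway is available for VMs.
vm_pool = [ip for ip in guests.hosts() if ip != gateway]

print(gateway in guests)        # True
print(len(vm_pool))             # 253 addresses left for guests
print(vm_pool[0], vm_pool[-1])  # 10.10.10.2 10.10.10.254
```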


Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----


Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It can be used
to achieve different goals, like making the network fault-tolerant,
increasing performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between Proxmox VE Cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPV4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface such that different
network-peers use different MAC addresses for their network packet
traffic.

For most setups, active-backup is the best choice. If your switch
supports LACP ("IEEE 802.3ad"), that mode should be preferred.
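
The balance-xor selection rule above can be sketched in a few lines of Python. Note that this hashes the full MAC addresses, as the description states; the kernel's `layer2` transmit hash policy actually uses only the last octet of each MAC, so treat this as an illustration of the idea rather than the exact kernel algorithm.

```python
def xor_slave(src_mac: str, dst_mac: str, n_slaves: int) -> int:
    """Pick a slave index as described above:
    (source MAC XOR destination MAC) modulo slave count."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % n_slaves

# The same source/destination pair always maps to the same slave,
# while different destinations may use different slaves.
print(xor_slave("52:54:00:12:34:56", "52:54:00:ab:cd:ee", 2))  # -> 0
print(xor_slave("52:54:00:12:34:56", "52:54:00:ab:cd:ef", 2))  # -> 1
```

This also shows why balance-xor keeps packet ordering per destination: the mapping is deterministic, so one flow never spreads across slaves.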

The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        slaves eno1 eno2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eno3
        bridge_stp off
        bridge_fd 0
----


Another possibility is to use the bond directly as a bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

////
TODO: explain IPv6 support?
TODO: explain OVS
////