[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if
virtual network cables from each guest were all plugged into the same
switch. But you can also create multiple bridges to separate network
domains.

For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.

NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.
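
For example, you can review what will be applied at the next reboot by
comparing the staged file with the live configuration (the staged file
only exists while changes are pending):

----
# show the staged changes that will be committed at the next reboot
diff /etc/network/interfaces /etc/network/interfaces.new
----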

It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to keep such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.


Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* New Ethernet devices: en*, systemd network interface names.

* Legacy Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)
They are available when Proxmox VE has been upgraded from an earlier version.

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.
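
For example, the brief output of `ip link` lists every device together
with its state; names like `eno1`, `vmbr0` or `bond0` immediately
reveal the device type:

----
# compact one-line-per-device overview of all network interfaces
ip -br link show
----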

Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Two character prefixes based on the type of interface:

* en — Ethernet

* sl — serial line IP (slip)

* wl — wlan

* ww — wwan

The next characters depend on the device driver and on which schema
matches first.

* o<index>[n<phys_port_name>|d<dev_port>] — devices on board

* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id

* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id

* x<MAC> — device by MAC address

The most common patterns are

* eno1 — is the first on-board NIC

* enp3s0f1 — is the NIC on PCI bus 3, slot 0, using NIC function 1.

For more information see link:https://github.com/systemd/systemd/blob/master/src/udev/udev-builtin-net_id.c#L20[Systemd Network Interface Names]
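
Which of these schemes udev applied to a given NIC can be checked via
its device properties; a quick sketch, using `eno1` as an assumed
device name:

----
# print the ID_NET_NAME_* properties udev derived for this NIC
udevadm info -p /sys/class/net/eno1 | grep ID_NET_NAME
----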

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card `eno1`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
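
A guest's virtual NIC is attached to a bridge by name; for example, to
connect VM 100 (an assumed VM ID) to `vmbr0`:

----
# add a virtio NIC on vmbr0 to the guest with VMID 100
qm set 100 --net0 virtio,bridge=vmbr0
----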


Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----
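
Guests then use addresses from the routed block, with the bridge
address as their gateway. A minimal sketch of the matching guest
configuration, assuming the first guest gets `10.10.10.10` and a
classic `eth0` interface name:

----
# inside the guest (interface name depends on the guest OS)
auto eth0
iface eth0 inet static
        address 10.10.10.10
        netmask 255.255.255.0
        gateway 10.10.10.1
----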


Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
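
Whether forwarding and the masquerading rule are active can be
verified at runtime:

----
# 1 means IPv4 forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward
# list the NAT rules added by the post-up hooks
iptables -t nat -S POSTROUTING
----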


Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. This makes it
possible to achieve different goals, like making the network
fault-tolerant, increasing performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches and the bonded connection will failover to one
cable or the other in case of network trouble.

Aggregated links can improve live-migration delays and improve the
speed of replication of data between Proxmox VE Cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address (see the worked example after this list). This mode provides
load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPV4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface such that different
network-peers use different MAC addresses for their network packet
traffic.
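
Since the balance-xor slave choice depends only on the two MAC
addresses, the mapping can be reproduced by hand. A small illustration
with made-up values (the default layer2 policy XORs the last octet of
each MAC):

----
# (source MAC XOR destination MAC) modulo slave count
src=0x1e; dst=0x2f; slaves=2
echo $(( (src ^ dst) % slaves ))   # index of the NIC slave used
----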

For most setups, active-backup is the best choice; or, if your switch
supports LACP (IEEE 802.3ad), that mode should be preferred.

The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network is fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet static
        slaves eno1 eno2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eno3
        bridge_stp off
        bridge_fd 0
----


Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----
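
Once the bond is up, the bonding driver reports its mode, the state of
each slave and the negotiated 802.3ad parameters through procfs:

----
# show bond mode, slave status and LACP info
cat /proc/net/bonding/bond0
----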

////
TODO: explain IPv6 support?
TODO: explain OVS
////