[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

Network configuration can be done either via the GUI, or by manually
editing the file `/etc/network/interfaces`, which contains the
whole network configuration. The `interfaces(5)` manual page contains the
complete format description. All {pve} tools try hard to preserve direct
user modifications, but using the GUI is still preferable, because it
protects you from errors.

Once the network is configured, you can use the traditional Debian tools
`ifup` and `ifdown` to bring interfaces up and down.
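
For example, to manually cycle the default bridge (named `vmbr0` on a standard
installation), you could run:

----
ifdown vmbr0
ifup vmbr0
----

NOTE: Taking down the interface that carries your management IP will interrupt
any SSH or GUI session running over it.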

Apply Network Changes
~~~~~~~~~~~~~~~~~~~~~

{pve} does not write changes directly to `/etc/network/interfaces`. Instead, we
write them into a temporary file called `/etc/network/interfaces.new`. This way
you can do many related changes at once. It also allows you to ensure that your
changes are correct before applying them, as a wrong network configuration may
render a node inaccessible.
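
For example, you can compare the pending file against the active configuration,
or simply delete it to discard all pending changes (both commands assume the
default file locations mentioned above):

----
diff -u /etc/network/interfaces /etc/network/interfaces.new
rm /etc/network/interfaces.new
----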

Reboot Node to apply
^^^^^^^^^^^^^^^^^^^^

With the default-installed `ifupdown` network management package, you need to
reboot to commit any pending network changes. Most of the time, the basic {pve}
network setup is stable and does not change often, so rebooting should not be
required often.

Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

With the optional `ifupdown2` network management package you can also reload
the network configuration live, without requiring a reboot.

Since {pve} 6.1 you can apply pending network changes over the web-interface,
using the 'Apply Configuration' button in the 'Network' panel of a node.
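
On the command line, the live reload that 'ifupdown2' provides can typically be
triggered for all interfaces with its `ifreload` command:

----
ifreload -a
----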

To install 'ifupdown2', ensure you have the latest {pve} updates installed, then

WARNING: installing 'ifupdown2' will remove 'ifupdown', but as the removal
scripts of 'ifupdown' before version '0.8.35+pve1' have an issue where the
network is fully stopped on removal footnote:[Introduced with Debian Buster:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=945877] you *must* ensure
that you have an up-to-date 'ifupdown' package version.

For the installation itself, you can then simply do:

 apt install ifupdown2

With that you're all set. You can also switch back to the 'ifupdown' variant at
any time, if you run into issues.

Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: en*, systemd network interface names. This naming scheme is
  used for new {pve} installations since version 5.0.

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...). This naming
  scheme is used for {pve} hosts which were installed before the 5.0
  release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
  separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd uses the two-character prefix 'en' for Ethernet network
devices. The next characters depend on the device driver and on which
schema matches first:

* o<index>[n<phys_port_name>|d<dev_port>] — devices on board

* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id

* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id

* x<MAC> — device by MAC address

The most common patterns are:

* eno1 — the first on-board NIC

* enp3s0f1 — the NIC on PCI bus 3, slot 0, using NIC function 1.

For more information see https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[Predictable Network Interface Names].
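
To see which names were assigned on a given host, you can list the interfaces
with the standard `iproute2` tooling, for example:

----
ip -br link show
----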

Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.

{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.

{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.

{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.

For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
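
To verify which ports are currently attached to a bridge such as `vmbr0`, the
`bridge` utility from `iproute2` can be used, for example:

----
bridge link show
----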

Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/29`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 198.51.100.5
        netmask 255.255.255.0
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17
        netmask 255.255.255.248
        bridge-ports none
        bridge-stp off
        bridge-fd 0
----
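
As an alternative (or in addition) to the `post-up` lines above, the forwarding
and proxy-ARP settings can also be made persistent through a `sysctl.d(5)`
snippet; the file name below is only an example:

----
# /etc/sysctl.d/pve-routed.conf (example name)
net.ipv4.ip_forward = 1
net.ipv4.conf.eno1.proxy_arp = 1
----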


Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masquerading allows guests having only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 198.51.100.5
        netmask 255.255.255.0
        gateway 198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
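
For incoming connections to a guest (*Port Forwarding*, as mentioned above), a
matching `DNAT` rule pair can be added in the same way. The port numbers and
guest IP below are purely illustrative:

----
post-up   iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.100:22
post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.100:22
----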

NOTE: In some masquerade setups with the firewall enabled, conntrack zones might
be needed for outgoing connections. Otherwise the firewall could block outgoing
connections, since they will prefer the `POSTROUTING` of the VM bridge (and not
`MASQUERADE`).

Adding these lines to `/etc/network/interfaces` can fix this problem:

----
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
----

For more information about this, refer to the following links:

https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]

https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]

https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]


Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It is possible
to achieve different goals, such as making the network fault-tolerant,
increasing performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between Proxmox VE Cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPv4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface, such that different
network peers use different MAC addresses for their network packet
traffic.

If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode. +
// http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html
If you intend to run your cluster network on the bonding interfaces, then you
have to use active-backup mode on the bonding interfaces; other modes are
unsupported.
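
Once a bond is up, the negotiated mode and the state of its slaves can be
checked through the kernel's bonding status file, for example:

----
cat /proc/net/bonding/bond0
----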

The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2
        netmask 255.255.255.0
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
----


[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----


VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple (up to 4096) networks in a physical network, each independent
of the others.

Each VLAN network is identified by a number, often called the 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.


VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM (a CLI example follows the list below). The VLAN
tag is part of the guest network configuration. The networking layer
supports different modes to implement VLANs, depending on the bridge
configuration:

* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with an associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and cannot be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.
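
For example, with a VLAN-aware bridge, assigning VLAN tag 5 to the first
network device of a VM could look like this on the command line (VM ID 100 is
just a placeholder):

----
qm set 100 --net0 virtio,bridge=vmbr0,tag=5
----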


VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, Bond, Bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.

For example, take a default configuration where you want to place
the host management address on a separate VLAN.

.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
----

The next example is the same setup, but a bond is used to
make this network fail-safe.

.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----

Disabling IPv6 on the Node
~~~~~~~~~~~~~~~~~~~~~~~~~~

{pve} works correctly in all environments, irrespective of whether IPv6 is
deployed or not. We recommend leaving all settings at the provided defaults.

Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate `sysctl.conf(5)` snippet file and setting the proper
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:

----
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
----

This method is preferred to disabling the loading of the IPv6 module on the
https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel command line].
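
The new values can then be applied without a reboot, for example by reloading
all `sysctl.d(5)` snippets:

----
sysctl --system
----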

////
TODO: explain IPv6 support?
TODO: explain OVS
////