1 [[sysadmin_network_configuration]]
2 Network Configuration
3 ---------------------
4 ifdef::wiki[]
5 :pve-toplevel:
6 endif::wiki[]
7
8 {pve} is using the Linux network stack. This provides a lot of flexibility on
9 how to set up the network on the {pve} nodes. The configuration can be done
10 either via the GUI, or by manually editing the file `/etc/network/interfaces`,
11 which contains the whole network configuration. The `interfaces(5)` manual
page contains the complete format description. All {pve} tools try hard to
preserve direct user modifications, but using the GUI is still preferable,
because it protects you from errors.
15
16 A Linux bridge interface (commonly called 'vmbrX') is needed to connect guests
17 to the underlying physical network. It can be thought of as a virtual switch
18 which the guests and physical interfaces are connected to. This section provides
some examples of how the network can be set up to accommodate different use cases
20 like redundancy with a xref:sysadmin_network_bond['bond'],
21 xref:sysadmin_network_vlan['vlans'] or
22 xref:sysadmin_network_routed['routed'] and
23 xref:sysadmin_network_masquerading['NAT'] setups.
24
25 The xref:chapter_pvesdn[Software Defined Network] is an option for more complex
26 virtual networks in {pve} clusters.
27
28 WARNING: It's discouraged to use the traditional Debian tools `ifup` and `ifdown`
if unsure, as they have some pitfalls, like interrupting all guest traffic on
`ifdown vmbrX`, but not reconnecting those guests again when doing `ifup` on the
31 same bridge later.
32
33 Apply Network Changes
34 ~~~~~~~~~~~~~~~~~~~~~
35
36 {pve} does not write changes directly to `/etc/network/interfaces`. Instead, we
write into a temporary file called `/etc/network/interfaces.new`. This way, you
can do many related changes at once. It also allows you to ensure your changes
39 are correct before applying, as a wrong network configuration may render a node
40 inaccessible.
41
42 Live-Reload Network with ifupdown2
43 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
44
45 With the recommended 'ifupdown2' package (default for new installations since
46 {pve} 7.0), it is possible to apply network configuration changes without a
47 reboot. If you change the network configuration via the GUI, you can click the
48 'Apply Configuration' button. This will move changes from the staging
49 `interfaces.new` file to `/etc/network/interfaces` and apply them live.
50
51 If you made manual changes directly to the `/etc/network/interfaces` file, you
can apply them by running `ifreload -a`.
53
54 NOTE: If you installed {pve} on top of Debian, or upgraded to {pve} 7.0 from an
55 older {pve} installation, make sure 'ifupdown2' is installed: `apt install
56 ifupdown2`
57
58 Reboot Node to Apply
59 ^^^^^^^^^^^^^^^^^^^^
60
61 Another way to apply a new network configuration is to reboot the node.
62 In that case the systemd service `pvenetcommit` will activate the staging
`interfaces.new` file before the `networking` service applies that
configuration.
65
66 Naming Conventions
67 ~~~~~~~~~~~~~~~~~~
68
69 We currently use the following naming conventions for device names:
70
71 * Ethernet devices: `en*`, systemd network interface names. This naming scheme is
72 used for new {pve} installations since version 5.0.
73
* Ethernet devices: `eth[N]`, where 0 ≤ N (`eth0`, `eth1`, ...). This naming
75 scheme is used for {pve} hosts which were installed before the 5.0
76 release. When upgrading to 5.0, the names are kept as-is.
77
78 * Bridge names: Commonly `vmbr[N]`, where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`),
but you can use any alphanumeric string that starts with a letter and is at
80 most 10 characters long.
81
* Bonds: `bond[N]`, where 0 ≤ N (`bond0`, `bond1`, ...).
83
84 * VLANs: Simply add the VLAN number to the device name,
85 separated by a period (`eno1.50`, `bond1.30`)
86
This makes it easier to debug network problems, because the device
88 name implies the device type.
89
90 [[systemd_network_interface_names]]
91 Systemd Network Interface Names
92 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
93
94 Systemd defines a versioned naming scheme for network device names. The
95 scheme uses the two-character prefix `en` for Ethernet network devices. The
next characters depend on the device driver, device location, and other
97 attributes. Some possible patterns are:
98
99 * `o<index>[n<phys_port_name>|d<dev_port>]` — devices on board
100
101 * `s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` — devices by hotplug id
102
103 * `[P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` —
104 devices by bus id
105
106 * `x<MAC>` — devices by MAC address
107
108 Some examples for the most common patterns are:
109
110 * `eno1` — is the first on-board NIC
111
112 * `enp3s0f1` — is function 1 of the NIC on PCI bus 3, slot 0
113
114 For a full list of possible device name patterns, see the
115 https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
116 systemd.net-naming-scheme(7) manpage].
117
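To see which device names are currently in use on a node, you can list the
entries in sysfs. This is a minimal sketch; `ip -br link` shows similar
information:

----
# List all network devices known to the kernel
for dev in /sys/class/net/*; do
    basename "$dev"
done
----
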
118 A new version of systemd may define a new version of the network device naming
119 scheme, which it then uses by default. Consequently, updating to a newer
120 systemd version, for example during a major {pve} upgrade, can change the names
121 of network devices and require adjusting the network configuration. To avoid
122 name changes due to a new version of the naming scheme, you can manually pin a
123 particular naming scheme version (see
124 xref:network_pin_naming_scheme_version[below]).
125
126 However, even with a pinned naming scheme version, network device names can
127 still change due to kernel or driver updates. In order to avoid name changes
128 for a particular network device altogether, you can manually override its name
129 using a link file (see xref:network_override_device_names[below]).
130
131 For more information on network interface names, see
132 https://systemd.io/PREDICTABLE_INTERFACE_NAMES/[Predictable Network Interface
133 Names].
134
135 [[network_pin_naming_scheme_version]]
136 Pinning a specific naming scheme version
137 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
138
139 You can pin a specific version of the naming scheme for network devices by
140 adding the `net.naming-scheme=<version>` parameter to the
141 xref:sysboot_edit_kernel_cmdline[kernel command line]. For a list of naming
142 scheme versions, see the
143 https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
144 systemd.net-naming-scheme(7) manpage].
145
146 For example, to pin the version `v252`, which is the latest naming scheme
147 version for a fresh {pve} 8.0 installation, add the following kernel
148 command-line parameter:
149
150 ----
151 net.naming-scheme=v252
152 ----
153
154 See also xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel
155 command line. You need to reboot for the changes to take effect.
156
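As a sketch: on a node that boots via GRUB, this means extending
`GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub` (nodes booting via
`systemd-boot` edit `/etc/kernel/cmdline` instead):

----
# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="quiet net.naming-scheme=v252"
----

Afterwards, run `update-grub` and reboot for the change to take effect.
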
157 [[network_override_device_names]]
158 Overriding network device names
159 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
160
161 You can manually assign a name to a particular network device using a custom
162 https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link
163 file]. This overrides the name that would be assigned according to the latest
164 network device naming scheme. This way, you can avoid naming changes due to
165 kernel updates, driver updates or newer versions of the naming scheme.
166
167 Custom link files should be placed in `/etc/systemd/network/` and named
168 `<n>-<id>.link`, where `n` is a priority smaller than `99` and `id` is some
169 identifier. A link file has two sections: `[Match]` determines which interfaces
170 the file will apply to; `[Link]` determines how these interfaces should be
171 configured, including their naming.
172
173 To assign a name to a particular network device, you need a way to uniquely and
174 permanently identify that device in the `[Match]` section. One possibility is
175 to match the device's MAC address using the `MACAddress` option, as it is
176 unlikely to change.
177
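You can read the current MAC address of a device from sysfs, for example
(`eno1` is just an example name here, substitute your device):

----
# Print the MAC address of the device named eno1
cat /sys/class/net/eno1/address
----
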
178 The `[Match]` section should also contain a `Type` option to make sure it only
179 matches the expected physical interface, and not bridge/bond/VLAN interfaces
180 with the same MAC address. In most setups, `Type` should be set to `ether` to
181 match only Ethernet devices, but some setups may require other choices. See the
182 https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link(5)
183 manpage] for more details.
184
185 Then, you can assign a name using the `Name` option in the `[Link]` section.
186
187 Link files are copied to the `initramfs`, so it is recommended to refresh the
188 `initramfs` after adding, modifying, or removing a link file:
189
190 ----
191 # update-initramfs -u -k all
192 ----
193
194 For example, to assign the name `enwan0` to the Ethernet device with MAC
195 address `aa:bb:cc:dd:ee:ff`, create a file
196 `/etc/systemd/network/10-enwan0.link` with the following contents:
197
198 ----
199 [Match]
200 MACAddress=aa:bb:cc:dd:ee:ff
201 Type=ether
202
203 [Link]
204 Name=enwan0
205 ----
206
207 Do not forget to adjust `/etc/network/interfaces` to use the new name, and
208 refresh your `initramfs` as described above. You need to reboot the node for
209 the change to take effect.
210
211 NOTE: It is recommended to assign a name starting with `en` or `eth` so that
212 {pve} recognizes the interface as a physical network device which can then be
213 configured via the GUI. Also, you should ensure that the name will not clash
214 with other interface names in the future. One possibility is to assign a name
215 that does not match any name pattern that systemd uses for network interfaces
216 (xref:systemd_network_interface_names[see above]), such as `enwan0` in the
217 example above.
218
219 For more information on link files, see the
220 https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link(5)
221 manpage].
222
223 Choosing a network configuration
224 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
225
Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.
228
229 {pve} server in a private LAN, using an external gateway to reach the internet
230 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
231
232 The *Bridged* model makes the most sense in this case, and this is also
233 the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing the
role of the switch.
238
239 {pve} server at hosting provider, with public IP ranges for Guests
240 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
241
242 For this setup, you can use either a *Bridged* or *Routed* model, depending on
243 what your provider allows.
244
245 {pve} server at hosting provider, with a single public IP address
246 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
247
In that case, the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
250 you will need to configure *Port Forwarding*.
251
252 For further flexibility, you can configure
253 VLANs (IEEE 802.1q) and network bonding, also known as "link
254 aggregation". That way it is possible to build complex and flexible
255 virtual networks.
256
257 Default Configuration using a Bridge
258 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
259
260 [thumbnail="default-network-setup-bridge.svg"]
261 Bridges are like physical network switches implemented in software.
262 All virtual guests can share a single bridge, or you can create multiple
263 bridges to separate network domains. Each host can have up to 4094 bridges.
264
265 The installation program creates a single bridge named `vmbr0`, which
266 is connected to the first Ethernet card. The corresponding
267 configuration in `/etc/network/interfaces` might look like this:
268
269 ----
270 auto lo
271 iface lo inet loopback
272
273 iface eno1 inet manual
274
275 auto vmbr0
276 iface vmbr0 inet static
277 address 192.168.10.2/24
278 gateway 192.168.10.1
279 bridge-ports eno1
280 bridge-stp off
281 bridge-fd 0
282 ----
283
284 Virtual machines behave as if they were directly connected to the
285 physical network. The network, in turn, sees each virtual machine as
286 having its own MAC, even though there is only one network cable
287 connecting all of these VMs to the network.
288
289 [[sysadmin_network_routed]]
290 Routed Configuration
291 ~~~~~~~~~~~~~~~~~~~~
292
293 Most hosting providers do not support the above setup. For security
294 reasons, they disable networking as soon as they detect multiple MAC
295 addresses on a single interface.
296
297 TIP: Some providers allow you to register additional MACs through their
298 management interface. This avoids the problem, but can be clumsy to
299 configure because you need to register a MAC for each of your VMs.
300
301 You can avoid the problem by ``routing'' all traffic via a single
302 interface. This makes sure that all network packets use the same MAC
303 address.
304
305 [thumbnail="default-network-setup-routed.svg"]
306 A common scenario is that you have a public IP (assume `198.51.100.5`
307 for this example), and an additional IP block for your VMs
308 (`203.0.113.16/28`). We recommend the following setup for such
309 situations:
310
311 ----
312 auto lo
313 iface lo inet loopback
314
315 auto eno0
316 iface eno0 inet static
317 address 198.51.100.5/29
318 gateway 198.51.100.1
319 post-up echo 1 > /proc/sys/net/ipv4/ip_forward
320 post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp
321
322
323 auto vmbr0
324 iface vmbr0 inet static
325 address 203.0.113.17/28
326 bridge-ports none
327 bridge-stp off
328 bridge-fd 0
329 ----
330
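Alternatively to the `post-up echo` line for `ip_forward`, the setting can be
persisted in a `sysctl.conf(5)` snippet (the filename here is an arbitrary
choice). The per-interface `proxy_arp` setting is best kept as a `post-up`
command, since it requires the interface to already exist when it is applied:

----
# /etc/sysctl.d/99-ip-forward.conf
net.ipv4.ip_forward = 1
----
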
331
332 [[sysadmin_network_masquerading]]
333 Masquerading (NAT) with `iptables`
334 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
335
Masquerading allows guests that only have a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
338 packet is rewritten by `iptables` to appear as originating from the host,
339 and responses are rewritten accordingly to be routed to the original sender.
340
341 ----
342 auto lo
343 iface lo inet loopback
344
345 auto eno1
346 #real IP address
347 iface eno1 inet static
348 address 198.51.100.5/24
349 gateway 198.51.100.1
350
351 auto vmbr0
352 #private sub network
353 iface vmbr0 inet static
354 address 10.10.10.1/24
355 bridge-ports none
356 bridge-stp off
357 bridge-fd 0
358
359 post-up echo 1 > /proc/sys/net/ipv4/ip_forward
360 post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
361 post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
362 ----
363
NOTE: In some masquerade setups with the firewall enabled, conntrack zones might
be needed for outgoing connections. Otherwise, the firewall could block outgoing
connections, since they will prefer the `POSTROUTING` of the VM bridge (and not
`MASQUERADE`).
368
Adding these lines to `/etc/network/interfaces` can fix this problem:
370
371 ----
372 post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
373 post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
374 ----
375
376 For more information about this, refer to the following links:
377
378 https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]
379
380 https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]
381
382 https://web.archive.org/web/20220610151210/https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]
383
384
385 [[sysadmin_network_bond]]
386 Linux Bond
387 ~~~~~~~~~~
388
Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It can be used to
achieve different goals, like making the network fault-tolerant,
increasing performance, or both together.
393
394 High-speed hardware like Fibre Channel and the associated switching
395 hardware can be quite expensive. By doing link aggregation, two NICs
396 can appear as one logical interface, resulting in double speed. This
397 is a native Linux kernel feature that is supported by most
398 switches. If your nodes have multiple Ethernet ports, you can
399 distribute your points of failure by running network cables to
400 different switches and the bonded connection will failover to one
401 cable or the other in case of network trouble.
402
Aggregated links can reduce live-migration delays and improve the
speed of data replication between {pve} cluster nodes.
405
406 There are 7 modes for bonding:
407
408 * *Round-robin (balance-rr):* Transmit network packets in sequential
409 order from the first available network interface (NIC) slave through
410 the last. This mode provides load balancing and fault tolerance.
411
412 * *Active-backup (active-backup):* Only one NIC slave in the bond is
413 active. A different slave becomes active if, and only if, the active
414 slave fails. The single logical bonded interface's MAC address is
415 externally visible on only one NIC (port) to avoid distortion in the
416 network switch. This mode provides fault tolerance.
417
418 * *XOR (balance-xor):* Transmit network packets based on [(source MAC
419 address XOR'd with destination MAC address) modulo NIC slave
420 count]. This selects the same NIC slave for each destination MAC
421 address. This mode provides load balancing and fault tolerance.
422
423 * *Broadcast (broadcast):* Transmit network packets on all slave
424 network interfaces. This mode provides fault tolerance.
425
426 * *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
427 aggregation groups that share the same speed and duplex
428 settings. Utilizes all slave network interfaces in the active
429 aggregator group according to the 802.3ad specification.
430
431 * *Adaptive transmit load balancing (balance-tlb):* Linux bonding
432 driver mode that does not require any special network-switch
433 support. The outgoing network packet traffic is distributed according
434 to the current load (computed relative to the speed) on each network
435 interface slave. Incoming traffic is received by one currently
436 designated slave network interface. If this receiving slave fails,
437 another slave takes over the MAC address of the failed receiving
438 slave.
439
440 * *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
441 load balancing (rlb) for IPV4 traffic, and does not require any
442 special network switch support. The receive load balancing is achieved
443 by ARP negotiation. The bonding driver intercepts the ARP Replies sent
444 by the local system on their way out and overwrites the source
445 hardware address with the unique hardware address of one of the NIC
446 slaves in the single logical bonded interface such that different
447 network-peers use different MAC addresses for their network packet
448 traffic.
449
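As a simplified illustration of the balance-xor selection, consider only the
last octet of the source and destination MAC addresses (the kernel's layer2
hash uses the full addresses) and a bond with two slaves:

----
# (src XOR dst) modulo slave count picks the transmitting slave
src=0x5e; dst=0x10; slaves=2
echo $(( (src ^ dst) % slaves ))   # prints 0: this MAC pair always uses slave 0
----
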
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode.
453
454 For the cluster network (Corosync) we recommend configuring it with multiple
networks. Corosync does not need a bond for network redundancy as it can switch
456 between networks by itself, if one becomes unusable.
457
The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network will be fault-tolerant.
461
462 .Example: Use bond with fixed IP address
463 ----
464 auto lo
465 iface lo inet loopback
466
467 iface eno1 inet manual
468
469 iface eno2 inet manual
470
471 iface eno3 inet manual
472
473 auto bond0
474 iface bond0 inet static
475 bond-slaves eno1 eno2
476 address 192.168.1.2/24
477 bond-miimon 100
478 bond-mode 802.3ad
479 bond-xmit-hash-policy layer2+3
480
481 auto vmbr0
482 iface vmbr0 inet static
483 address 10.10.10.2/24
484 gateway 10.10.10.1
485 bridge-ports eno3
486 bridge-stp off
487 bridge-fd 0
488
489 ----
490
491
492 [thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.
495
496 .Example: Use a bond as bridge port
497 ----
498 auto lo
499 iface lo inet loopback
500
501 iface eno1 inet manual
502
503 iface eno2 inet manual
504
505 auto bond0
506 iface bond0 inet manual
507 bond-slaves eno1 eno2
508 bond-miimon 100
509 bond-mode 802.3ad
510 bond-xmit-hash-policy layer2+3
511
512 auto vmbr0
513 iface vmbr0 inet static
514 address 10.10.10.2/24
515 gateway 10.10.10.1
516 bridge-ports bond0
517 bridge-stp off
518 bridge-fd 0
519
520 ----
521
522
523 [[sysadmin_network_vlan]]
524 VLAN 802.1Q
525 ~~~~~~~~~~~
526
A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated at layer two of the network. This makes it possible to have
multiple networks (up to 4096) in a physical network, each independent
of the others.
531
Each VLAN network is identified by a number, often called a 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.
535
536
537 VLAN for Guest Networks
538 ^^^^^^^^^^^^^^^^^^^^^^^
539
540 {pve} supports this setup out of the box. You can specify the VLAN tag
541 when you create a VM. The VLAN tag is part of the guest network
542 configuration. The networking layer supports different modes to
543 implement VLANs, depending on the bridge configuration:
544
545 * *VLAN awareness on the Linux bridge:*
546 In this case, each guest's virtual network card is assigned to a VLAN tag,
547 which is transparently supported by the Linux bridge.
548 Trunk mode is also possible, but that makes configuration
549 in the guest necessary.
550
551 * *"traditional" VLAN on the Linux bridge:*
552 In contrast to the VLAN awareness method, this method is not transparent
553 and creates a VLAN device with associated bridge for each VLAN.
554 That is, creating a guest on VLAN 5 for example, would create two
555 interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.
556
557 * *Open vSwitch VLAN:*
558 This mode uses the OVS VLAN feature.
559
560 * *Guest configured VLAN:*
561 VLANs are assigned inside the guest. In this case, the setup is
562 completely done inside the guest and can not be influenced from the
563 outside. The benefit is that you can use more than one VLAN on a
564 single virtual NIC.
565
566
567 VLAN on the Host
568 ^^^^^^^^^^^^^^^^
569
It is possible to apply VLAN tags to any network device (NIC, bond,
bridge) to allow host communication with an otherwise isolated network.
In general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.
574
For example, consider a default configuration where you want to place
the host management address on a separate VLAN.
577
578
579 .Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
580 ----
581 auto lo
582 iface lo inet loopback
583
584 iface eno1 inet manual
585
586 iface eno1.5 inet manual
587
588 auto vmbr0v5
589 iface vmbr0v5 inet static
590 address 10.10.10.2/24
591 gateway 10.10.10.1
592 bridge-ports eno1.5
593 bridge-stp off
594 bridge-fd 0
595
596 auto vmbr0
597 iface vmbr0 inet manual
598 bridge-ports eno1
599 bridge-stp off
600 bridge-fd 0
601
602 ----
603
604 .Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
605 ----
606 auto lo
607 iface lo inet loopback
608
609 iface eno1 inet manual
610
611
612 auto vmbr0.5
613 iface vmbr0.5 inet static
614 address 10.10.10.2/24
615 gateway 10.10.10.1
616
617 auto vmbr0
618 iface vmbr0 inet manual
619 bridge-ports eno1
620 bridge-stp off
621 bridge-fd 0
622 bridge-vlan-aware yes
623 bridge-vids 2-4094
624 ----
625
The next example shows the same setup, but a bond is used to
make the network fail-safe.
628
629 .Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
630 ----
631 auto lo
632 iface lo inet loopback
633
634 iface eno1 inet manual
635
636 iface eno2 inet manual
637
638 auto bond0
639 iface bond0 inet manual
640 bond-slaves eno1 eno2
641 bond-miimon 100
642 bond-mode 802.3ad
643 bond-xmit-hash-policy layer2+3
644
645 iface bond0.5 inet manual
646
647 auto vmbr0v5
648 iface vmbr0v5 inet static
649 address 10.10.10.2/24
650 gateway 10.10.10.1
651 bridge-ports bond0.5
652 bridge-stp off
653 bridge-fd 0
654
655 auto vmbr0
656 iface vmbr0 inet manual
657 bridge-ports bond0
658 bridge-stp off
659 bridge-fd 0
660
661 ----
662
663 Disabling IPv6 on the Node
664 ~~~~~~~~~~~~~~~~~~~~~~~~~~
665
666 {pve} works correctly in all environments, irrespective of whether IPv6 is
667 deployed or not. We recommend leaving all settings at the provided defaults.
668
669 Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate `sysctl.conf(5)` snippet file and setting the proper
671 https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
672 for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:
673
674 ----
675 net.ipv6.conf.all.disable_ipv6 = 1
676 net.ipv6.conf.default.disable_ipv6 = 1
677 ----
678
This method is preferred over disabling the loading of the IPv6 module on the
https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel command line].
681
682
683 Disabling MAC Learning on a Bridge
684 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
685
686 By default, MAC learning is enabled on a bridge to ensure a smooth experience
687 with virtual guests and their networks.
688
However, in some environments this can be undesirable. Since {pve} 7.3 you can
disable MAC learning on a bridge by setting `bridge-disable-mac-learning 1`
in its configuration in `/etc/network/interfaces`, for example:
692
693 ----
694 # ...
695
696 auto vmbr0
697 iface vmbr0 inet static
698 address 10.10.10.2/24
699 gateway 10.10.10.1
700 bridge-ports ens18
701 bridge-stp off
702 bridge-fd 0
703 bridge-disable-mac-learning 1
704 ----
705
Once enabled, {pve} will manually add the configured MAC addresses of VMs and
containers to the bridge's forwarding database, to ensure that guests can still
use the network - but only when they are using their actual MAC address.
709
710 ////
711 TODO: explain IPv6 support?
712 TODO: explain OVS
713 ////