[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses the Linux network stack. This provides a lot of flexibility in how
to set up the network on the {pve} nodes. The configuration can be done
either via the GUI, or by manually editing the file `/etc/network/interfaces`,
which contains the whole network configuration. The `interfaces(5)` manual
page contains the complete format description. All {pve} tools try hard to
preserve direct user modifications, but using the GUI is still preferable,
because it protects you from errors.

A 'vmbr' interface is needed to connect guests to the underlying physical
network. It is a Linux bridge, which can be thought of as a virtual switch
to which the guests and physical interfaces are connected. This section
provides some examples on how the network can be set up to accommodate
different use cases like redundancy with a xref:sysadmin_network_bond['bond'],
xref:sysadmin_network_vlan['VLANs'] or
xref:sysadmin_network_routed['routed'] and
xref:sysadmin_network_masquerading['NAT'] setups.

The xref:chapter_pvesdn[Software Defined Network] is an option for more complex
virtual networks in {pve} clusters.

WARNING: It's discouraged to use the traditional Debian tools `ifup` and
`ifdown` if unsure, as they have some pitfalls like interrupting all guest
traffic on `ifdown vmbrX`, but not reconnecting those guests again when doing
`ifup` on the same bridge later.

Apply Network Changes
~~~~~~~~~~~~~~~~~~~~~

{pve} does not write changes directly to `/etc/network/interfaces`. Instead,
we write into a temporary file called `/etc/network/interfaces.new`. This way,
you can make many related changes at once. It also allows you to ensure your
changes are correct before applying them, as a wrong network configuration may
render a node inaccessible.

Live-Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

With the recommended 'ifupdown2' package (default for new installations since
{pve} 7.0), it is possible to apply network configuration changes without a
reboot. If you change the network configuration via the GUI, you can click the
'Apply Configuration' button. This will move changes from the staging
`interfaces.new` file to `/etc/network/interfaces` and apply them live.

If you made manual changes directly to the `/etc/network/interfaces` file, you
can apply them by running `ifreload -a`.
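
To verify that the running state actually matches
`/etc/network/interfaces` after a reload, you can use the `ifquery` tool that
ships with 'ifupdown2'. A minimal sketch (see `ifquery(8)` for details):

----
# compare the running state of all interfaces against the configuration
ifquery --check -a
----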

NOTE: If you installed {pve} on top of Debian, or upgraded to {pve} 7.0 from an
older {pve} installation, make sure 'ifupdown2' is installed: `apt install
ifupdown2`

Reboot Node to Apply
^^^^^^^^^^^^^^^^^^^^

Another way to apply a new network configuration is to reboot the node.
In that case the systemd service `pvenetcommit` will activate the staging
`interfaces.new` file before the `networking` service applies that
configuration.

Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: `en*`, systemd network interface names. This naming scheme
is used for new {pve} installations since version 5.0.

* Ethernet devices: `eth[N]`, where 0 ≤ N (`eth0`, `eth1`, ...). This naming
scheme is used for {pve} hosts which were installed before the 5.0
release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: `vmbr[N]`, where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: `bond[N]`, where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

[[systemd_network_interface_names]]
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd defines a versioned naming scheme for network device names. The
scheme uses the two-character prefix `en` for Ethernet network devices. The
next characters depend on the device driver, device location and other
attributes. Some possible patterns are:

* `o<index>[n<phys_port_name>|d<dev_port>]` — devices on board

* `s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` — devices by hotplug id

* `[P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` —
devices by bus id

* `x<MAC>` — devices by MAC address

Some examples for the most common patterns are:

* `eno1` — is the first on-board NIC

* `enp3s0f1` — is function 1 of the NIC on PCI bus 3, slot 0

For a full list of possible device name patterns, see the
https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
systemd.net-naming-scheme(7) manpage].
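
To inspect the names that the naming scheme derives for a specific device, you
can query the udev `net_id` builtin. A small sketch, assuming a device that is
currently named `eno1`:

----
udevadm test-builtin net_id /sys/class/net/eno1
----

The output lists properties such as `ID_NET_NAME_ONBOARD` and
`ID_NET_NAME_PATH`, from which systemd picks the final interface name.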

A new version of systemd may define a new version of the network device naming
scheme, which it then uses by default. Consequently, updating to a newer
systemd version, for example during a major {pve} upgrade, can change the names
of network devices and require adjusting the network configuration. To avoid
name changes due to a new version of the naming scheme, you can manually pin a
particular naming scheme version (see
xref:network_pin_naming_scheme_version[below]).

However, even with a pinned naming scheme version, network device names can
still change due to kernel or driver updates. In order to avoid name changes
for a particular network device altogether, you can manually override its name
using a link file (see xref:network_override_device_names[below]).

For more information on network interface names, see
https://systemd.io/PREDICTABLE_INTERFACE_NAMES/[Predictable Network Interface
Names].

[[network_pin_naming_scheme_version]]
Pinning a specific naming scheme version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can pin a specific version of the naming scheme for network devices by
adding the `net.naming-scheme=<version>` parameter to the
xref:sysboot_edit_kernel_cmdline[kernel command line]. For a list of naming
scheme versions, see the
https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
systemd.net-naming-scheme(7) manpage].

For example, to pin the version `v252`, which is the latest naming scheme
version for a fresh {pve} 8.0 installation, add the following kernel
command-line parameter:

----
net.naming-scheme=v252
----

See also xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel
command line. You need to reboot for the changes to take effect.
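
As a sketch, on a host booting via GRUB this means adding the parameter to
`GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub` and then running
`update-grub`:

----
# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="quiet net.naming-scheme=v252"
----

Hosts using `systemd-boot` keep their kernel command line in
`/etc/kernel/cmdline` instead, and apply changes with
`proxmox-boot-tool refresh`.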

[[network_override_device_names]]
Overriding network device names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can manually assign a name to a particular network device using a custom
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link
file]. This overrides the name that would be assigned according to the latest
network device naming scheme. This way, you can avoid naming changes due to
kernel updates, driver updates or newer versions of the naming scheme.

Custom link files should be placed in `/etc/systemd/network/` and named
`<n>-<id>.link`, where `n` is a priority smaller than `99` and `id` is some
identifier. A link file has two sections: `[Match]` determines which interfaces
the file will apply to; `[Link]` determines how these interfaces should be
configured, including their naming.

To assign a name to a particular network device, you need a way to uniquely and
permanently identify that device in the `[Match]` section. One possibility is
to match the device's MAC address using the `MACAddress` option, as it is
unlikely to change. Then, you can assign a name using the `Name` option in the
`[Link]` section.
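
To look up the MAC address of a device, you can, for example, list all
interfaces in brief format:

----
ip -br link show
----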

For example, to assign the name `enwan0` to the device with MAC address
`aa:bb:cc:dd:ee:ff`, create a file `/etc/systemd/network/10-enwan0.link` with
the following contents:

----
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=enwan0
----

Do not forget to adjust `/etc/network/interfaces` to use the new name.
You need to reboot the node for the change to take effect.
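
To check which link file udev applies to a given device, you can query the
corresponding udev builtin. A sketch, assuming the device is still named
`eno1`:

----
udevadm test-builtin net_setup_link /sys/class/net/eno1
----

The output includes the link file that matched and the resulting interface
name.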

NOTE: It is recommended to assign a name starting with `en` or `eth` so that
{pve} recognizes the interface as a physical network device which can then be
configured via the GUI. Also, you should ensure that the name will not clash
with other interface names in the future. One possibility is to assign a name
that does not match any name pattern that systemd uses for network interfaces
(xref:systemd_network_interface_names[see above]), such as `enwan0` in the
example above.

For more information on link files, see the
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link(5)
manpage].

Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.

{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.

{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.

{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.

For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
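
To see which virtual and physical ports are currently attached to a bridge,
you can list the interfaces enslaved to it, for example:

----
ip link show master vmbr0
----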

[[sysadmin_network_routed]]
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/28`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
----


[[sysadmin_network_masquerading]]
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masquerading allows guests that have only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 198.51.100.5/24
        gateway 198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----

NOTE: In some masquerade setups with the firewall enabled, conntrack zones
might be needed for outgoing connections. Otherwise the firewall could block
outgoing connections, since they will prefer the `POSTROUTING` of the VM
bridge (and not `MASQUERADE`).

Adding these lines to `/etc/network/interfaces` can fix this problem:

----
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
----

For more information about this, refer to the following links:

https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]

https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]

https://web.archive.org/web/20220610151210/https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]
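
For incoming connections to a masqueraded guest, you additionally need a
destination NAT (port forwarding) rule. As a minimal sketch, assuming a
(hypothetical) guest with address `10.10.10.2` whose SSH port should be
reachable via port `2222` on the host, you could extend the `vmbr0` stanza
above with:

----
post-up   iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.2:22
post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.2:22
----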


[[sysadmin_network_bond]]
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It is possible
to achieve different goals with it, like making the network
fault-tolerant, increasing performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of replication of data between {pve} cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPv4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface, such that different
network peers use different MAC addresses for their network packet
traffic.
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode.

For the cluster network (Corosync) we recommend configuring it with multiple
networks. Corosync does not need a bond for network redundancy, as it can
switch between networks by itself, if one becomes unusable.

The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2/24
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
----


[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----
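
Once a bond is up, you can verify the negotiated mode and the state of the
individual slaves via the kernel's bonding status file, for example:

----
cat /proc/net/bonding/bond0
----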


[[sysadmin_network_vlan]]
VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the other ones.

Each VLAN network is identified by a number, often called 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.


VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:

* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with an associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces `eno1.5` and `vmbr0v5`, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and can not be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.


VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, Bond, Bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.

Take, for example, a default configuration where you want to place
the host management address on a separate VLAN.


.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
----

The next example is the same setup, but a bond is used to
make this network fail-safe.

.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
----

Disabling IPv6 on the Node
~~~~~~~~~~~~~~~~~~~~~~~~~~

{pve} works correctly in all environments, irrespective of whether IPv6 is
deployed or not. We recommend leaving all settings at the provided defaults.

Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate `sysctl.conf(5)` snippet file and setting the proper
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:

----
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
----
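
The snippet is applied automatically at boot. To load it immediately without
rebooting, you can run, for example:

----
sysctl -p /etc/sysctl.d/disable-ipv6.conf
----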

This method is preferred to disabling the loading of the IPv6 module on the
https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel command line].


Disabling MAC Learning on a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, MAC learning is enabled on a bridge to ensure a smooth experience
with virtual guests and their networks.

But in some environments this can be undesired. Since {pve} 7.3 you can disable
MAC learning on the bridge by setting the `bridge-disable-mac-learning 1`
configuration on a bridge in `/etc/network/interfaces`, for example:

----
# ...

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports ens18
        bridge-stp off
        bridge-fd 0
        bridge-disable-mac-learning 1
----

Once enabled, {pve} will manually add the configured MAC address from VMs and
containers to the bridge's forwarding database, to ensure that guests can
still use the network - but only when they are using their actual MAC address.
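
You can inspect the resulting static entries in a bridge's forwarding database
with iproute2, for example:

----
bridge fdb show br vmbr0
----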

////
TODO: explain IPv6 support?
TODO: explain OVS
////