[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses the Linux network stack. This provides a lot of flexibility on
how to set up the network on the {pve} nodes. The configuration can be done
either via the GUI, or by manually editing the file `/etc/network/interfaces`,
which contains the whole network configuration. The `interfaces(5)` manual
page contains the complete format description. All {pve} tools try hard to
preserve direct user modifications, but using the GUI is still preferable,
because it protects you from errors.

A 'vmbr' interface is needed to connect guests to the underlying physical
network. It is a Linux bridge, which can be thought of as a virtual switch
to which the guests and physical interfaces are connected. This section
provides some examples on how the network can be set up to accommodate
different use cases like redundancy with a xref:sysadmin_network_bond['bond'],
xref:sysadmin_network_vlan['vlans'] or
xref:sysadmin_network_routed['routed'] and
xref:sysadmin_network_masquerading['NAT'] setups.

The xref:chapter_pvesdn[Software Defined Network] is an option for more complex
virtual networks in {pve} clusters.

WARNING: It's discouraged to use the traditional Debian tools `ifup` and
`ifdown` if unsure, as they have some pitfalls like interrupting all guest
traffic on `ifdown vmbrX`, but not reconnecting those guests again when doing
`ifup` on the same bridge later.

Apply Network Changes
~~~~~~~~~~~~~~~~~~~~~

{pve} does not write changes directly to `/etc/network/interfaces`. Instead,
changes are written to a temporary file called `/etc/network/interfaces.new`,
so that you can do many related changes at once. This also allows you to
ensure your changes are correct before applying them, as a wrong network
configuration may render a node inaccessible.
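
For example, assuming changes are currently staged in `interfaces.new`, you
can review them before applying:

----
# show the difference between the active and the staged configuration
diff -u /etc/network/interfaces /etc/network/interfaces.new
----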

Live-Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

With the recommended 'ifupdown2' package (default for new installations since
{pve} 7.0), it is possible to apply network configuration changes without a
reboot. If you change the network configuration via the GUI, you can click the
'Apply Configuration' button. This will move changes from the staging
`interfaces.new` file to `/etc/network/interfaces` and apply them live.

If you made manual changes directly to the `/etc/network/interfaces` file, you
can apply them by running `ifreload -a`.

NOTE: If you installed {pve} on top of Debian, or upgraded to {pve} 7.0 from an
older {pve} installation, make sure 'ifupdown2' is installed: `apt install
ifupdown2`

Reboot Node to Apply
^^^^^^^^^^^^^^^^^^^^

Another way to apply a new network configuration is to reboot the node.
In that case, the systemd service `pvenetcommit` will activate the staging
`interfaces.new` file before the `networking` service applies that
configuration.
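
To check that the staged configuration was committed during the last boot, you
can, for example, query the status of the `pvenetcommit` service:

----
# check whether the staged configuration was activated during boot
systemctl status pvenetcommit.service
----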

Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: `en*`, systemd network interface names. This naming scheme
is used for new {pve} installations since version 5.0.

* Ethernet devices: `eth[N]`, where 0 ≤ N (`eth0`, `eth1`, ...). This naming
scheme is used for {pve} hosts which were installed before the 5.0
release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: `vmbr[N]`, where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: `bond[N]`, where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.
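
To list the interface names currently in use on a node, you can, for example,
use `ip` in brief mode:

----
# list all network interfaces with their state and MAC address
ip -br link show
----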

[[systemd_network_interface_names]]
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd defines a versioned naming scheme for network device names. The
scheme uses the two-character prefix `en` for Ethernet network devices. The
next characters depend on the device driver, device location and other
attributes. Some possible patterns are:

* `o<index>[n<phys_port_name>|d<dev_port>]` — devices on board

* `s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` — devices by hotplug id

* `[P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` —
devices by bus id

* `x<MAC>` — devices by MAC address

Some examples for the most common patterns are:

* `eno1` — the first on-board NIC

* `enp3s0f1` — function 1 of the NIC on PCI bus 3, slot 0

For a full list of possible device name patterns, see the
https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
systemd.net-naming-scheme(7) manpage].
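
To see which of these names systemd's `net_id` builtin derives for a given
device, you can, for example, run (assuming a device `eno1`):

----
# show the ID_NET_NAME_* properties udev derives for this device
udevadm test-builtin net_id /sys/class/net/eno1
----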

A new version of systemd may define a new version of the network device naming
scheme, which it then uses by default. Consequently, updating to a newer
systemd version, for example during a major {pve} upgrade, can change the names
of network devices and require adjusting the network configuration. To avoid
name changes due to a new version of the naming scheme, you can manually pin a
particular naming scheme version (see
xref:network_pin_naming_scheme_version[below]).

However, even with a pinned naming scheme version, network device names can
still change due to kernel or driver updates. In order to avoid name changes
for a particular network device altogether, you can manually override its name
using a link file (see xref:network_override_device_names[below]).

For more information on network interface names, see
https://systemd.io/PREDICTABLE_INTERFACE_NAMES/[Predictable Network Interface
Names].

[[network_pin_naming_scheme_version]]
Pinning a specific naming scheme version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can pin a specific version of the naming scheme for network devices by
adding the `net.naming-scheme=<version>` parameter to the
xref:sysboot_edit_kernel_cmdline[kernel command line]. For a list of naming
scheme versions, see the
https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
systemd.net-naming-scheme(7) manpage].

For example, to pin the version `v252`, which is the latest naming scheme
version for a fresh {pve} 8.0 installation, add the following kernel
command-line parameter:

----
net.naming-scheme=v252
----

See also xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel
command line. You need to reboot for the changes to take effect.
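
After the reboot, you can verify that the parameter is active, for example:

----
# check the running kernel's command line for the pinned scheme version
grep -o 'net.naming-scheme=[^ ]*' /proc/cmdline
----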

[[network_override_device_names]]
Overriding network device names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can manually assign a name to a particular network device using a custom
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link
file]. This overrides the name that would be assigned according to the latest
network device naming scheme. This way, you can avoid naming changes due to
kernel updates, driver updates or newer versions of the naming scheme.

Custom link files should be placed in `/etc/systemd/network/` and named
`<n>-<id>.link`, where `n` is a priority smaller than `99` and `id` is some
identifier. A link file has two sections: `[Match]` determines which interfaces
the file will apply to; `[Link]` determines how these interfaces should be
configured, including their naming.

To assign a name to a particular network device, you need a way to uniquely and
permanently identify that device in the `[Match]` section. One possibility is
to match the device's MAC address using the `MACAddress` option, as it is
unlikely to change. Then, you can assign a name using the `Name` option in the
`[Link]` section.

For example, to assign the name `enwan0` to the device with MAC address
`aa:bb:cc:dd:ee:ff`, create a file `/etc/systemd/network/10-enwan0.link` with
the following contents:

----
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=enwan0
----

Do not forget to adjust `/etc/network/interfaces` to use the new name.
You need to reboot the node for the change to take effect.
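
After the reboot, you can check that the override was applied, for example:

----
# the device should now show up under its new name
ip link show enwan0
----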

NOTE: It is recommended to assign a name starting with `en` or `eth` so that
{pve} recognizes the interface as a physical network device which can then be
configured via the GUI. Also, you should ensure that the name will not clash
with other interface names in the future. One possibility is to assign a name
that does not match any name pattern that systemd uses for network interfaces
(xref:systemd_network_interface_names[see above]), such as `enwan0` in the
example above.

For more information on link files, see the
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link(5)
manpage].

Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.

{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.

{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.

{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding* (see the sketch in the
xref:sysadmin_network_masquerading[masquerading section] below).

For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
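
To inspect which ports are currently attached to a bridge, you can, for
example, use the `bridge` tool from iproute2:

----
# list bridge ports and their state
bridge link show
----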

[[sysadmin_network_routed]]
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/28`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
----


[[sysadmin_network_masquerading]]
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masquerading allows guests having only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 198.51.100.5/24
        gateway 198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
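
The *Port Forwarding* mentioned earlier is not covered by this example. As a
sketch, with a hypothetical guest at `10.10.10.10` that should receive SSH
connections arriving on host port `2222`, a DNAT rule could be added to the
same `vmbr0` stanza:

----
        post-up   iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
        post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
----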

NOTE: In some masquerade setups with the firewall enabled, conntrack zones
might be needed for outgoing connections. Otherwise the firewall could block
outgoing connections, since they will prefer the `POSTROUTING` of the VM
bridge (and not `MASQUERADE`).

Adding these lines to `/etc/network/interfaces` can fix this problem:

----
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
----

For more information about this, refer to the following links:

https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]

https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]

https://web.archive.org/web/20220610151210/https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]


[[sysadmin_network_bond]]
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It is possible
to achieve different goals, like making the network fault-tolerant,
increasing the performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between {pve} cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad) (LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPv4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface, such that different
network peers use different MAC addresses for their network packet
traffic.

If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode.

For the cluster network (Corosync) we recommend configuring it with multiple
networks. Corosync does not need a bond for network redundancy, as it can
switch between networks by itself if one becomes unusable.

The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2/24
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0

----


[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

----


[[sysadmin_network_vlan]]
VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (up to 4096) in a physical network, each independent
of the other ones.

Each VLAN network is identified by a number often called 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.
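
As a quick, non-persistent sketch (the interface name and tag are examples
only), a tagged interface can be created manually with iproute2 to test
connectivity on a given VLAN:

----
# create a VLAN interface with tag 50 on top of eno1 (not persistent)
ip link add link eno1 name eno1.50 type vlan id 50
ip link set eno1.50 up
----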


VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:

* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with an associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces `eno1.5` and `vmbr0v5`, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and cannot be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.


VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, bond, bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.

For example, consider a default configuration where you want to place
the host management address on a separate VLAN.


.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

----

.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual


auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
----

The next example is the same setup, but a bond is used to
make this network fail-safe.

.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

----

Disabling IPv6 on the Node
~~~~~~~~~~~~~~~~~~~~~~~~~~

{pve} works correctly in all environments, irrespective of whether IPv6 is
deployed or not. We recommend leaving all settings at the provided defaults.

Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate `sysctl.conf(5)` snippet file and setting the proper
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:

----
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
----
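
The settings are applied at boot; to activate them immediately, you can, for
example, load the snippet manually:

----
# apply the sysctl settings from the new snippet without rebooting
sysctl -p /etc/sysctl.d/disable-ipv6.conf
----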

This method is preferred to disabling the loading of the IPv6 module on the
https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel command line].


Disabling MAC Learning on a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, MAC learning is enabled on a bridge to ensure a smooth experience
with virtual guests and their networks.

But in some environments this can be undesired. Since {pve} 7.3 you can disable
MAC learning on the bridge by setting the `bridge-disable-mac-learning 1`
configuration on a bridge in `/etc/network/interfaces`, for example:

----
# ...

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports ens18
        bridge-stp off
        bridge-fd 0
        bridge-disable-mac-learning 1
----

Once enabled, {pve} will manually add the configured MAC address from VMs and
containers to the bridge's forwarding database, to ensure that guests can
still use the network - but only when they are using their actual MAC address.
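
To inspect the forwarding database entries of a bridge, including the ones
added by {pve}, you can, for example, use:

----
# list forwarding database entries for bridge vmbr0
bridge fdb show br vmbr0
----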

////
TODO: explain IPv6 support?
TODO: explain OVS
////