[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses the Linux network stack. This provides a lot of flexibility in how
to set up the network on the {pve} nodes. The configuration can be done either
via the GUI, or by manually editing the file `/etc/network/interfaces`, which
contains the whole network configuration. The `interfaces(5)` manual page
contains the complete format description. All {pve} tools try hard to preserve
direct user modifications, but using the GUI is still preferable, because it
protects you from errors.

A Linux bridge interface (commonly called 'vmbrX') is needed to connect guests
to the underlying physical network. It can be thought of as a virtual switch
to which the guests and physical interfaces are connected. This section
provides some examples of how the network can be set up to accommodate
different use cases, like redundancy with a xref:sysadmin_network_bond['bond'],
xref:sysadmin_network_vlan['VLANs'], or
xref:sysadmin_network_routed['routed'] and
xref:sysadmin_network_masquerading['NAT'] setups.

The xref:chapter_pvesdn[Software Defined Network] is an option for more complex
virtual networks in {pve} clusters.

WARNING: It's discouraged to use the traditional Debian tools `ifup` and
`ifdown` if unsure, as they have some pitfalls, like interrupting all guest
traffic on `ifdown vmbrX`, but not reconnecting those guests again when doing
`ifup` on the same bridge later.

Apply Network Changes
~~~~~~~~~~~~~~~~~~~~~

{pve} does not write changes directly to `/etc/network/interfaces`. Instead, it
writes them into a temporary file called `/etc/network/interfaces.new`, so that
you can do many related changes at once. This also allows you to ensure that
your changes are correct before applying them, as a wrong network configuration
may render a node inaccessible.
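
Since the staging file is plain text, you can review the pending changes before
applying them. A minimal sketch (any diff tool or pager works, `diff -u` is
just one option):

----
# compare the staged changes against the currently active configuration
diff -u /etc/network/interfaces /etc/network/interfaces.new

# discard the staged changes if they turn out to be wrong
rm /etc/network/interfaces.new
----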

Live-Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

With the recommended 'ifupdown2' package (default for new installations since
{pve} 7.0), it is possible to apply network configuration changes without a
reboot. If you change the network configuration via the GUI, you can click the
'Apply Configuration' button. This will move the changes from the staging
`interfaces.new` file to `/etc/network/interfaces` and apply them live.

If you made manual changes directly to the `/etc/network/interfaces` file, you
can apply them by running `ifreload -a`.

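For example, after editing the file by hand, a reload followed by an optional
consistency check could look like this ('ifquery' is shipped with 'ifupdown2'):

----
# apply all pending configuration changes live
ifreload -a

# verify that the running state matches the configured state
ifquery --check -a
----
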
NOTE: If you installed {pve} on top of Debian, or upgraded to {pve} 7.0 from an
older {pve} installation, make sure 'ifupdown2' is installed: `apt install
ifupdown2`

Reboot Node to Apply
^^^^^^^^^^^^^^^^^^^^

Another way to apply a new network configuration is to reboot the node.
In that case, the systemd service `pvenetcommit` activates the staging
`interfaces.new` file before the `networking` service applies that
configuration.

Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: `en*`, systemd network interface names. This naming scheme
is used for new {pve} installations since version 5.0.

* Ethernet devices: `eth[N]`, where 0 ≤ N (`eth0`, `eth1`, ...). This naming
scheme is used for {pve} hosts which were installed before the 5.0
release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: Commonly `vmbr[N]`, where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`),
but you can use any alphanumeric string that starts with a letter and is at
most 10 characters long.

* Bonds: `bond[N]`, where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

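To see which device names are currently in use on a node, you can list all
interfaces in brief form with standard 'iproute2' tooling (purely illustrative,
any interface listing works):

----
# list all network devices with their state and MAC addresses
ip -br link show
----
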
[[systemd_network_interface_names]]
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd defines a versioned naming scheme for network device names. The
scheme uses the two-character prefix `en` for Ethernet network devices. The
next characters depend on the device driver, device location and other
attributes. Some possible patterns are:

* `o<index>[n<phys_port_name>|d<dev_port>]` — devices on board

* `s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` — devices by hotplug id

* `[P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` —
devices by bus id

* `x<MAC>` — devices by MAC address

Some examples for the most common patterns are:

* `eno1` — is the first on-board NIC

* `enp3s0f1` — is function 1 of the NIC on PCI bus 3, slot 0

For a full list of possible device name patterns, see the
https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
systemd.net-naming-scheme(7) manpage].

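To see which name properties systemd derives for a specific device on the
current system, you can query its udev properties (shown for the hypothetical
device `eno1`; look for the `ID_NET_NAME_*` entries in the output):

----
# show udev properties, including the candidate interface names
udevadm info -p /sys/class/net/eno1
----
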
A new version of systemd may define a new version of the network device naming
scheme, which it then uses by default. Consequently, updating to a newer
systemd version, for example during a major {pve} upgrade, can change the names
of network devices and require adjusting the network configuration. To avoid
name changes due to a new version of the naming scheme, you can manually pin a
particular naming scheme version (see
xref:network_pin_naming_scheme_version[below]).

However, even with a pinned naming scheme version, network device names can
still change due to kernel or driver updates. In order to avoid name changes
for a particular network device altogether, you can manually override its name
using a link file (see xref:network_override_device_names[below]).

For more information on network interface names, see
https://systemd.io/PREDICTABLE_INTERFACE_NAMES/[Predictable Network Interface
Names].

[[network_pin_naming_scheme_version]]
Pinning a specific naming scheme version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can pin a specific version of the naming scheme for network devices by
adding the `net.naming-scheme=<version>` parameter to the
xref:sysboot_edit_kernel_cmdline[kernel command line]. For a list of naming
scheme versions, see the
https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
systemd.net-naming-scheme(7) manpage].

For example, to pin the version `v252`, which is the latest naming scheme
version for a fresh {pve} 8.0 installation, add the following kernel
command-line parameter:

----
net.naming-scheme=v252
----

See also xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel
command line. You need to reboot for the changes to take effect.

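After the reboot, you can verify that the parameter is active by inspecting the
running kernel command line (a generic check, not specific to {pve}):

----
# the output should contain net.naming-scheme=v252
cat /proc/cmdline
----
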
[[network_override_device_names]]
Overriding network device names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can manually assign a name to a particular network device using a custom
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link
file]. This overrides the name that would be assigned according to the latest
network device naming scheme. This way, you can avoid naming changes due to
kernel updates, driver updates or newer versions of the naming scheme.

Custom link files should be placed in `/etc/systemd/network/` and named
`<n>-<id>.link`, where `n` is a priority smaller than `99` and `id` is some
identifier. A link file has two sections: `[Match]` determines which interfaces
the file will apply to; `[Link]` determines how these interfaces should be
configured, including their naming.

To assign a name to a particular network device, you need a way to uniquely and
permanently identify that device in the `[Match]` section. One possibility is
to match the device's MAC address using the `MACAddress` option, as it is
unlikely to change. Then, you can assign a name using the `Name` option in the
`[Link]` section.

For example, to assign the name `enwan0` to the device with MAC address
`aa:bb:cc:dd:ee:ff`, create a file `/etc/systemd/network/10-enwan0.link` with
the following contents:

----
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=enwan0
----

Do not forget to adjust `/etc/network/interfaces` to use the new name.
You need to reboot the node for the change to take effect.

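To check how the installed link files would apply to a device without
rebooting, 'udevadm' can simulate the naming logic (shown for the hypothetical
device `eno1`):

----
# print which .link file matches and which name would be assigned
udevadm test-builtin net_setup_link /sys/class/net/eno1
----
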
NOTE: It is recommended to assign a name starting with `en` or `eth` so that
{pve} recognizes the interface as a physical network device which can then be
configured via the GUI. Also, you should ensure that the name will not clash
with other interface names in the future. One possibility is to assign a name
that does not match any name pattern that systemd uses for network interfaces
(xref:systemd_network_interface_names[see above]), such as `enwan0` in the
example above.

For more information on link files, see the
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link(5)
manpage].

Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.

{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.

{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.

{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case, the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*, as in the sketch below.

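As an illustration, incoming connections can be forwarded to a guest with a
DNAT rule. This sketch assumes a guest with the private address `10.10.10.10`
behind a masquerading setup like the one shown further below, and forwards host
port 2222 to the guest's SSH port (interface name, addresses and ports are
examples only):

----
post-up   iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
----
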
For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.

[[sysadmin_network_routed]]
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/28`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
----

[[sysadmin_network_masquerading]]
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masquerading allows guests that have only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 198.51.100.5/24
        gateway 198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----

NOTE: In some masquerade setups with the firewall enabled, conntrack zones
might be needed for outgoing connections. Otherwise, the firewall could block
outgoing connections, since they will prefer the `POSTROUTING` of the VM bridge
(and not `MASQUERADE`).

Adding these lines to `/etc/network/interfaces` can fix this problem:

----
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
----

For more information about this, refer to the following links:

https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]

https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]

https://web.archive.org/web/20220610151210/https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]


[[sysadmin_network_bond]]
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique for
binding multiple NICs to a single network device. It makes it possible to
achieve different goals, like making the network fault-tolerant, increasing
performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and increase the
speed of data replication between {pve} cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad) (LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPv4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface, such that different
network peers use different MAC addresses for their network packet
traffic.

If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode.

For the cluster network (Corosync), we recommend configuring it with multiple
networks. Corosync does not need a bond for network redundancy, as it can
switch between networks by itself if one becomes unusable.

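For reference, an additional Corosync link is configured per node in
`/etc/pve/corosync.conf` rather than via bonding. A hypothetical excerpt with
two links (node name and addresses are examples only):

----
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.0.1
    ring1_addr: 192.168.1.1
  }
}
----
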
The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network is fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2/24
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0

----

[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

----

[[sysadmin_network_vlan]]
VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the others.

Each VLAN network is identified by a number, often called a 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.

VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:

* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with an associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces `eno1.5` and `vmbr0v5`, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and cannot be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.


VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, bond, bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.

The following examples show a default configuration where the
host management address is placed on a separate VLAN.

.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

----

.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual


auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
----

The next example shows the same setup, but a bond is used to
make this network fail-safe.

.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

----

Disabling IPv6 on the Node
~~~~~~~~~~~~~~~~~~~~~~~~~~

{pve} works correctly in all environments, irrespective of whether IPv6 is
deployed or not. We recommend leaving all settings at the provided defaults.

Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate `sysctl.conf(5)` snippet file and setting the proper
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:

----
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
----

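The new settings can be loaded without a reboot by pointing `sysctl` at the
snippet; this is standard 'procps' tooling, not specific to {pve}:

----
# apply the settings from the new snippet immediately
sysctl -p /etc/sysctl.d/disable-ipv6.conf
----
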
This method is preferred to disabling the loading of the IPv6 module on the
https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel command line].

Disabling MAC Learning on a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, MAC learning is enabled on a bridge to ensure a smooth experience
with virtual guests and their networks.

But in some environments, this can be undesired. Since {pve} 7.3, you can
disable MAC learning on a bridge by setting the
`bridge-disable-mac-learning 1` option on it in `/etc/network/interfaces`, for
example:

----
# ...

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports ens18
        bridge-stp off
        bridge-fd 0
        bridge-disable-mac-learning 1
----

Once enabled, {pve} will manually add the configured MAC addresses of VMs and
containers to the bridge's forwarding database, to ensure that guests can still
use the network - but only when they are using their actual MAC address.

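To inspect which MAC addresses are currently present in a bridge's forwarding
database, the 'iproute2' `bridge` utility can be used (shown as an illustration
for `vmbr0`):

----
# list the forwarding database entries of bridge vmbr0
bridge fdb show br vmbr0
----
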
////
TODO: explain IPv6 support?
TODO: explain OVS
////