[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses the Linux network stack. This provides a lot of flexibility on
how to set up the network on the {pve} nodes. The configuration can be done
either via the GUI, or by manually editing the file `/etc/network/interfaces`,
which contains the whole network configuration. The `interfaces(5)` manual
page contains the complete format description. All {pve} tools try hard to
preserve direct user modifications, but using the GUI is still preferable,
because it protects you from errors.

A Linux bridge interface (commonly called 'vmbrX') is needed to connect guests
to the underlying physical network. It can be thought of as a virtual switch
which the guests and physical interfaces are connected to. This section provides
some examples of how the network can be set up to accommodate different use cases
like redundancy with a xref:sysadmin_network_bond['bond'],
xref:sysadmin_network_vlan['vlans'] or
xref:sysadmin_network_routed['routed'] and
xref:sysadmin_network_masquerading['NAT'] setups.

The xref:chapter_pvesdn[Software Defined Network] is an option for more complex
virtual networks in {pve} clusters.

WARNING: It's discouraged to use the traditional Debian tools `ifup` and `ifdown`
if unsure, as they have some pitfalls like interrupting all guest traffic on
`ifdown vmbrX` but not reconnecting those guests again when doing `ifup` on the
same bridge later.

Apply Network Changes
~~~~~~~~~~~~~~~~~~~~~

{pve} does not write changes directly to `/etc/network/interfaces`. Instead, we
write into a temporary file called `/etc/network/interfaces.new`; this way, you
can make many related changes at once. This also allows you to ensure your changes
are correct before applying, as a wrong network configuration may render a node
inaccessible.
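
One way to double-check the staged changes before applying them is to diff the
staging file against the currently active configuration; a minimal sketch (the
staging file only exists while changes are pending):

----
# diff -u /etc/network/interfaces /etc/network/interfaces.new
----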

Live-Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

With the recommended 'ifupdown2' package (default for new installations since
{pve} 7.0), it is possible to apply network configuration changes without a
reboot. If you change the network configuration via the GUI, you can click the
'Apply Configuration' button. This will move changes from the staging
`interfaces.new` file to `/etc/network/interfaces` and apply them live.

If you made manual changes directly to the `/etc/network/interfaces` file, you
can apply them by running `ifreload -a`.

NOTE: If you installed {pve} on top of Debian, or upgraded to {pve} 7.0 from an
older {pve} installation, make sure 'ifupdown2' is installed: `apt install
ifupdown2`

Reboot Node to Apply
^^^^^^^^^^^^^^^^^^^^

Another way to apply a new network configuration is to reboot the node.
In that case the systemd service `pvenetcommit` will activate the staging
`interfaces.new` file before the `networking` service applies that
configuration.

Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: `en*`, systemd network interface names. This naming scheme is
used for new {pve} installations since version 5.0.

* Ethernet devices: `eth[N]`, where 0 ≤ N (`eth0`, `eth1`, ...) This naming
scheme is used for {pve} hosts which were installed before the 5.0
release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: Commonly `vmbr[N]`, where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`),
but you can use any alphanumeric string that starts with a letter and is at
most 10 characters long.

* Bonds: `bond[N]`, where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

[[systemd_network_interface_names]]
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd defines a versioned naming scheme for network device names. The
scheme uses the two-character prefix `en` for Ethernet network devices. The
next characters depend on the device driver, device location and other
attributes. Some possible patterns are:

* `o<index>[n<phys_port_name>|d<dev_port>]` — devices on board

* `s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` — devices by hotplug id

* `[P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` —
devices by bus id

* `x<MAC>` — devices by MAC address

Some examples for the most common patterns are:

* `eno1` — is the first on-board NIC

* `enp3s0f1` — is function 1 of the NIC on PCI bus 3, slot 0

For a full list of possible device name patterns, see the
https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
systemd.net-naming-scheme(7) manpage].
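
To see which of these identifiers udev derived for a specific device on your
host, you can query the udev database; a small sketch (`eno1` is just the
example name from above, adjust it to an existing device):

----
# udevadm info /sys/class/net/eno1 | grep ID_NET_NAME
----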

A new version of systemd may define a new version of the network device naming
scheme, which it then uses by default. Consequently, updating to a newer
systemd version, for example during a major {pve} upgrade, can change the names
of network devices and require adjusting the network configuration. To avoid
name changes due to a new version of the naming scheme, you can manually pin a
particular naming scheme version (see
xref:network_pin_naming_scheme_version[below]).

However, even with a pinned naming scheme version, network device names can
still change due to kernel or driver updates. In order to avoid name changes
for a particular network device altogether, you can manually override its name
using a link file (see xref:network_override_device_names[below]).

For more information on network interface names, see
https://systemd.io/PREDICTABLE_INTERFACE_NAMES/[Predictable Network Interface
Names].

[[network_pin_naming_scheme_version]]
Pinning a specific naming scheme version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can pin a specific version of the naming scheme for network devices by
adding the `net.naming-scheme=<version>` parameter to the
xref:sysboot_edit_kernel_cmdline[kernel command line]. For a list of naming
scheme versions, see the
https://manpages.debian.org/stable/systemd/systemd.net-naming-scheme.7.en.html[
systemd.net-naming-scheme(7) manpage].

For example, to pin the version `v252`, which is the latest naming scheme
version for a fresh {pve} 8.0 installation, add the following kernel
command-line parameter:

----
net.naming-scheme=v252
----

See also xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel
command line. You need to reboot for the changes to take effect.
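
How the parameter is added depends on the bootloader in use; the following is
only a sketch of the two common cases, which are described in detail in the
referenced section:

----
# GRUB: append the parameter to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
# then regenerate the boot configuration:
update-grub

# systemd-boot: append the parameter to the single line in /etc/kernel/cmdline,
# then refresh the boot configuration:
proxmox-boot-tool refresh
----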

[[network_override_device_names]]
Overriding network device names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can manually assign a name to a particular network device using a custom
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link
file]. This overrides the name that would be assigned according to the latest
network device naming scheme. This way, you can avoid naming changes due to
kernel updates, driver updates or newer versions of the naming scheme.

Custom link files should be placed in `/etc/systemd/network/` and named
`<n>-<id>.link`, where `n` is a priority smaller than `99` and `id` is some
identifier. A link file has two sections: `[Match]` determines which interfaces
the file will apply to; `[Link]` determines how these interfaces should be
configured, including their naming.

To assign a name to a particular network device, you need a way to uniquely and
permanently identify that device in the `[Match]` section. One possibility is
to match the device's MAC address using the `MACAddress` option, as it is
unlikely to change.
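
One way to look up that MAC address is to list all links with `ip`; the output
below is only an illustration:

----
# ip -br link
lo      UNKNOWN  00:00:00:00:00:00  <LOOPBACK,UP,LOWER_UP>
eno1    UP       aa:bb:cc:dd:ee:ff  <BROADCAST,MULTICAST,UP,LOWER_UP>
----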

The `[Match]` section should also contain a `Type` option to make sure it only
matches the expected physical interface, and not bridge/bond/VLAN interfaces
with the same MAC address. In most setups, `Type` should be set to `ether` to
match only Ethernet devices, but some setups may require other choices. See the
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link(5)
manpage] for more details.

Then, you can assign a name using the `Name` option in the `[Link]` section.

For example, to assign the name `enwan0` to the Ethernet device with MAC
address `aa:bb:cc:dd:ee:ff`, create a file
`/etc/systemd/network/10-enwan0.link` with the following contents:

----
[Match]
MACAddress=aa:bb:cc:dd:ee:ff
Type=ether

[Link]
Name=enwan0
----

Do not forget to adjust `/etc/network/interfaces` to use the new name.
You need to reboot the node for the change to take effect.
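
After the reboot, you can verify that the override took effect by checking that
the device shows up under its new name (using the example name from above):

----
# ip link show enwan0
----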

NOTE: It is recommended to assign a name starting with `en` or `eth` so that
{pve} recognizes the interface as a physical network device which can then be
configured via the GUI. Also, you should ensure that the name will not clash
with other interface names in the future. One possibility is to assign a name
that does not match any name pattern that systemd uses for network interfaces
(xref:systemd_network_interface_names[see above]), such as `enwan0` in the
example above.

For more information on link files, see the
https://manpages.debian.org/stable/udev/systemd.link.5.en.html[systemd.link(5)
manpage].

Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.

{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.

{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.

{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.

For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
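
A guest's virtual NIC is attached to such a bridge through its network device
configuration, either in the GUI or on the command line; a small sketch for a
VM (the VM ID `100` is only an example):

----
# qm set 100 --net0 virtio,bridge=vmbr0
----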

[[sysadmin_network_routed]]
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs through their
management interface. This avoids the problem, but can be clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/28`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
----


[[sysadmin_network_masquerading]]
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masquerading allows guests having only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 198.51.100.5/24
        gateway 198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----

NOTE: In some masquerade setups with firewall enabled, conntrack zones might be
needed for outgoing connections. Otherwise the firewall could block outgoing
connections, since they will prefer the `POSTROUTING` of the VM bridge (and not
`MASQUERADE`).

Adding these lines to `/etc/network/interfaces` can fix this problem:

----
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
----

For more information about this, refer to the following links:

https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter Packet Flow]

https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack zones]

https://web.archive.org/web/20220610151210/https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by using TRACE in the raw table]



[[sysadmin_network_bond]]
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It is possible
to achieve different goals, like making the network fault-tolerant,
increasing performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between {pve} cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPV4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface such that different
network-peers use different MAC addresses for their network packet
traffic.

If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode.
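
To check which mode a bond is actually running in, and the state of its
slaves, you can inspect the information exposed by the kernel, for example for
a bond named `bond0`:

----
# cat /proc/net/bonding/bond0
----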

For the cluster network (Corosync) we recommend configuring it with multiple
networks. Corosync does not need a bond for network redundancy as it can switch
between networks by itself, if one becomes unusable.

The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2/24
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0

----


[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

----


[[sysadmin_network_vlan]]
VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the other ones.

Each VLAN network is identified by a number, often called the 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.


VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:

* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with associated bridge for each VLAN.
That is, creating a guest on VLAN 5 for example, would create two
interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and can not be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.

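For the first two modes, the tag is simply set on the guest's virtual network
device, either via the GUI or on the command line; a small sketch for a VM
(VM ID `100` and tag `5` are only placeholders):

----
# qm set 100 --net0 virtio,bridge=vmbr0,tag=5
----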

VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, bond, bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.

For example, in a default configuration, you may want to place
the host management address on a separate VLAN.


.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

----

.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual


auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
----

The next example is the same setup, but a bond is used to
make this network fail-safe.

.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

----

Disabling IPv6 on the Node
~~~~~~~~~~~~~~~~~~~~~~~~~~

{pve} works correctly in all environments, irrespective of whether IPv6 is
deployed or not. We recommend leaving all settings at the provided defaults.

Should you still need to disable support for IPv6 on your node, do so by
creating an appropriate `sysctl.conf(5)` snippet file and setting the proper
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt[sysctls],
for example adding `/etc/sysctl.d/disable-ipv6.conf` with content:

----
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
----

This method is preferred to disabling the loading of the IPv6 module on the
https://www.kernel.org/doc/Documentation/networking/ipv6.rst[kernel commandline].
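
The snippet only takes effect at the next boot. To also apply it to the
running system immediately, you can load it by hand (assuming the file name
from the example above):

----
# sysctl -p /etc/sysctl.d/disable-ipv6.conf
----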


Disabling MAC Learning on a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, MAC learning is enabled on a bridge to ensure a smooth experience
with virtual guests and their networks.

But in some environments this can be undesired. Since {pve} 7.3 you can disable
MAC learning on the bridge by setting the `bridge-disable-mac-learning 1`
configuration on a bridge in `/etc/network/interfaces`, for example:

----
# ...

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports ens18
        bridge-stp off
        bridge-fd 0
        bridge-disable-mac-learning 1
----

Once enabled, {pve} will manually add the configured MAC address from VMs and
Containers to the bridge's forwarding database to ensure that guests can still
use the network - but only when they are using their actual MAC address.
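
To see which MAC addresses are currently present in a bridge's forwarding
database, you can query it with the `bridge` tool from iproute2 (using the
bridge name from the example above):

----
# bridge fdb show br vmbr0
----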

////
TODO: explain IPv6 support?
TODO: explain OVS
////