Software-Defined Network
========================
The **S**oftware-**D**efined **N**etwork (SDN) feature in {pve} enables the
creation of virtual zones and networks (VNets). This functionality simplifies
advanced networking configurations and multitenancy setups.

The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through *zones*, virtual networks (*VNets*), and
*subnets*. A zone is its own virtually separated network area. A VNet is a
virtual network that belongs to a zone. A subnet is an IP range inside a VNet.

Depending on the type of the zone, the network behaves differently and offers
specific features, advantages, and limitations.

Use cases for SDN range from an isolated private network on each individual node
to complex overlay networks across multiple PVE clusters in different locations.

After configuring a VNet in the cluster-wide datacenter SDN administration
interface, it is available as a common Linux bridge, locally on each node, to be
assigned to VMs and containers.
[[pvesdn_support_status]]
Support Status
--------------

History
~~~~~~~

The {pve} SDN stack has been available as an experimental feature since 2019 and
has been continuously improved and tested by many developers and users.
With its integration into the web interface in {pve} 6.2, a significant
milestone towards broader integration was achieved.
During the {pve} 7 release cycle, numerous improvements and features were added.
Based on user feedback, it became apparent that the fundamental design choices
and their implementation were quite sound and stable. Consequently, labeling it
as 'experimental' did not do justice to the state of the SDN stack.
For {pve} 8, a decision was made to lay the groundwork for full integration of
the SDN feature by elevating the management of networks and interfaces to a core
component in the {pve} access control stack.
In {pve} 8.1, two major milestones were achieved: firstly, DHCP integration was
added to the IP address management (IPAM) feature, and secondly, the SDN
integration is now installed by default.
Current Status
~~~~~~~~~~~~~~

The current support status for the various layers of our SDN installation is as
follows:

- Core SDN, which includes VNet management and its integration with the {pve}
stack, is fully supported.
- IPAM, including DHCP management for virtual guests, is in tech preview.
- Complex routing via FRRouting and controller integration are in tech preview.
[[pvesdn_installation]]
Installation
------------

SDN Core
~~~~~~~~

Since {pve} 8.1 the core Software-Defined Network (SDN) packages are installed
by default.

If you upgrade from an older version, you need to install the
`libpve-network-perl` package on every node:

----
apt install libpve-network-perl
----

NOTE: {pve} version 7.0 and above have the `ifupdown2` package installed by
default. If you originally installed your system with an older version, you need
to explicitly install the `ifupdown2` package.
After installation, you need to add the following line to the end of the
`/etc/network/interfaces` configuration file, so that the SDN configuration gets
included and activated:

----
source /etc/network/interfaces.d/*
----
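If you script your node setup, the include line can be appended idempotently. The following is only a sketch and operates on a scratch copy so it is safe to run anywhere; on a real node you would point `cfg` at `/etc/network/interfaces` directly:

```shell
# Demonstrate an idempotent append of the SDN include line.
# Working on a scratch copy here; use /etc/network/interfaces on a real node.
cfg=$(mktemp)
printf 'auto lo\niface lo inet loopback\n' > "$cfg"

line='source /etc/network/interfaces.d/*'
# Append only if the exact line is not already present.
grep -qxF "$line" "$cfg" || echo "$line" >> "$cfg"
grep -qxF "$line" "$cfg" && echo "include line present"
```

Running the snippet twice leaves exactly one copy of the line in place.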
DHCP IPAM
~~~~~~~~~

The DHCP integration into the IP Address Management stack currently uses
`dnsmasq` for handing out DHCP leases. This is currently opt-in.

To use that feature you need to install the `dnsmasq` package on every node:
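The install step could look like the following. Disabling the default `dnsmasq` unit afterwards is an assumption based on the SDN stack managing its own per-zone instances, so verify this against your setup:

```shell
apt install dnsmasq
# Disable the default dnsmasq instance so it does not conflict with the
# per-zone instances managed by the SDN stack (assumption; verify on your setup).
systemctl disable --now dnsmasq
```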
FRRouting
~~~~~~~~~

The {pve} SDN stack uses the https://frrouting.org/[FRRouting] project for
advanced setups. This is currently opt-in.

To use the SDN routing integration you need to install the `frr-pythontools`
package on all nodes:

----
apt install frr-pythontools
----
[[pvesdn_main_configuration]]
Configuration Overview
----------------------

Configuration is done at the web UI at datacenter level, separated into the
following sections:

* SDN: Here you get an overview of the current active SDN state, and you can
apply all pending changes to the whole cluster.

* xref:pvesdn_config_zone[Zones]: Create and manage the virtually separated
network zones.

* xref:pvesdn_config_vnet[VNets]: Create virtual network bridges and manage
subnets.

The Options category allows adding and managing additional services to be used
in your SDN setup:

* xref:pvesdn_config_controllers[Controllers]: For controlling layer 3 routing
in complex setups.

* xref:pvesdn_config_ipam[IPAM]: Enables the use of external tools for IP
address management of virtual guests.

* xref:pvesdn_config_dns[DNS]: Define a DNS server integration for registering
virtual guests' hostnames and IP addresses.
[[pvesdn_tech_and_config_overview]]
Technology & Configuration
--------------------------

The {pve} Software-Defined Network implementation uses standard Linux networking
as much as possible. The reason for this is that modern Linux networking
provides almost all the needs for a fully featured SDN implementation, which
avoids adding external dependencies and reduces the overall number of components
that can fail.

The {pve} SDN configurations are located in `/etc/pve/sdn`, which is shared with
all other cluster nodes through the {pve} xref:chapter_pmxcfs[configuration file system].
Those configurations get translated to the respective configuration formats of
the tools that manage the underlying network stack (for example `ifupdown2` or
`frr`).

New changes are not immediately applied, but recorded as pending first. You can
then apply a set of different changes all at once in the main 'SDN' overview
panel on the web interface. This system allows rolling out various changes as
one atomic operation.

The SDN tracks the rolled-out state through the '.running-config' and '.version'
files located in '/etc/pve/sdn'.
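If you prefer the command line, the pending and rolled-out states can be inspected directly in the shared filesystem, and pending changes applied through the cluster API. The `pvesh` path below follows the standard {pve} API layout; verify it with `pvesh ls /cluster` on your version:

```shell
# Inspect pending configuration vs. the currently rolled-out state
cat /etc/pve/sdn/zones.cfg          # pending zone definitions
cat /etc/pve/sdn/.running-config    # state currently rolled out cluster-wide

# Apply all pending SDN changes (equivalent to the 'Apply' button in the web UI)
pvesh set /cluster/sdn
```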
// TODO: extend implementation and technology details.
[[pvesdn_config_zone]]
Zones
-----

A zone defines a virtually separated network. Zones are restricted to
specific nodes and assigned permissions, in order to restrict users to a certain
zone and its contained VNets.

Different technologies can be used for separation:

* Simple: Isolated bridge. A simple layer 3 routing bridge (NAT)

* VLAN: Virtual LANs are the classic method of subdividing a LAN

* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: Layer 2 VXLAN network via a UDP tunnel

* EVPN (BGP EVPN): VXLAN with BGP to establish Layer 3 routing
[[pvesdn_config_common_options]]
Common Options
~~~~~~~~~~~~~~

The following options are available for all zone types:

Nodes:: The nodes on which the zone and associated VNets should be deployed.

IPAM:: Use an IP Address Management (IPAM) tool to manage IPs in the
zone. Optional, defaults to `pve`.

DNS:: DNS API server. Optional.

ReverseDNS:: Reverse DNS API server. Optional.

DNSZone:: DNS domain name. Used to register hostnames, such as
`<hostname>.<domain>`. The DNS zone must already exist on the DNS server. Optional.
[[pvesdn_zone_plugin_simple]]
Simple Zones
~~~~~~~~~~~~

This is the simplest plugin. It will create an isolated VNet bridge. This
bridge is not linked to a physical interface, and VM traffic is only local on
the node(s) it is deployed on.
It can be used in NAT or routed setups.
[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

The VLAN plugin uses an existing local Linux or OVS bridge to connect to the
node's physical interface. It uses VLAN tagging defined in the VNet to isolate
the network segments. This allows connectivity of VMs between different nodes.

VLAN zone configuration options:

Bridge:: The local bridge or OVS switch, already configured on *each* node that
allows node-to-node connection.
[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ, also known as VLAN stacking, uses multiple layers of VLAN tags for
isolation. The QinQ zone defines the outer VLAN tag (the 'Service VLAN'),
whereas the inner VLAN tag is defined by the VNet.

NOTE: Your physical network switches must support stacked VLANs for this
configuration.

QinQ zone configuration options:

Bridge:: A local, VLAN-aware bridge that is already configured on each local
node.

Service VLAN:: The main VLAN tag of this zone.

Service VLAN Protocol:: Allows you to choose between an 802.1q (default) or
802.1ad service VLAN type.

MTU:: Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs.
For example, you must reduce the MTU to `1496` if your physical interface MTU is
`1500`.
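The MTU arithmetic can be sketched as a quick shell calculation:

```shell
# Each QinQ frame carries one extra 4-byte VLAN tag on top of the regular one.
phys_mtu=1500
qinq_overhead=4
vnet_mtu=$((phys_mtu - qinq_overhead))
echo "$vnet_mtu"   # prints 1496
```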
[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin establishes a tunnel (overlay) on top of an existing network
(underlay). This encapsulates layer 2 Ethernet frames within layer 4 UDP
datagrams using the default destination port `4789`.

You have to configure the underlay network yourself to enable UDP connectivity
between all peers.

You can, for example, create a VXLAN overlay network on top of the public
internet, appearing to the VMs as if they share the same local Layer 2 network.

WARNING: VXLAN on its own does not provide any encryption. When joining
multiple sites via VXLAN, make sure to establish a secure connection between
the sites, for example by using a site-to-site VPN.

VXLAN zone configuration options:

Peers Address List:: A list of IP addresses of each node in the VXLAN zone. These
can be external nodes reachable at this IP address.
All nodes in the cluster need to be mentioned here.

MTU:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

The EVPN zone creates a routable Layer 3 network, capable of spanning across
multiple clusters. This is achieved by establishing a VPN and utilizing BGP as
the routing protocol.

The VNet of EVPN can have an anycast IP address and/or MAC address. The bridge
IP is the same on each node, meaning a virtual guest can use this address as
its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

EVPN zone configuration options:

VRF VXLAN ID:: A VXLAN-ID used for the dedicated routing interconnect between
VNets. It must be different from the VXLAN-IDs of the VNets.

Controller:: The EVPN controller to use for this zone. (See controller plugins
section.)

VNet MAC Address:: Anycast MAC address that gets assigned to all VNets in this
zone. Will be auto-generated if not defined.

Exit Nodes:: Nodes that shall be configured as exit gateways from the EVPN
network, through the real network. The configured nodes will announce a
default route in the EVPN network. Optional.

Primary Exit Node:: If you use multiple exit nodes, force traffic through this
primary exit node, instead of load-balancing on all nodes. Optional but
necessary if you want to use SNAT or if your upstream router doesn't support
ECMP.

Exit Nodes Local Routing:: This is a special option if you need to reach a VM/CT
service from an exit node. (By default, the exit nodes only allow forwarding
traffic between the real network and the EVPN network.) Optional.

Advertise Subnets:: Announce the full subnet in the EVPN network.
If you have silent VMs/CTs (for example, if you have multiple IPs and the
anycast gateway doesn't see traffic from these IPs, the IP addresses won't be
reachable inside the EVPN network). Optional.

Disable ARP ND Suppression:: Don't suppress ARP or ND (Neighbor Discovery)
packets. This is required if you use floating IPs in your VMs (IP and MAC
addresses are being moved between systems). Optional.

Route-target Import:: Allows you to import a list of external EVPN route
targets. Used for cross-DC or different EVPN network interconnects. Optional.

MTU:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
less than the maximal MTU of the outgoing physical interface. Optional,
defaults to `1450`.
[[pvesdn_config_vnet]]
VNets
-----

After creating a virtual network (VNet) through the SDN GUI, a local network
interface with the same name is available on each node. To connect a guest to the
VNet, assign the interface to the guest and set the IP address accordingly.

Depending on the zone, these options have different meanings and are explained
in the respective zone section in this document.

WARNING: In the current state, some options may have no effect or won't work in
some zones.

VNet configuration options:

ID:: An ID of up to 8 characters, to identify a VNet.

Comment:: More descriptive identifier. Assigned as an alias on the interface. Optional.

Zone:: The associated zone for this VNet.

Tag:: The unique VLAN or VXLAN ID.

VLAN Aware:: Enables the VLAN-aware option on the interface, enabling
configuration in the guest.
[[pvesdn_config_subnet]]
Subnets
~~~~~~~

A subnet defines a specific IP range, described by the CIDR network address.
Each VNet can have one or more subnets.

A subnet can be used to:

* Restrict the IP addresses you can define on a specific VNet
* Assign routes/gateways on a VNet in layer 3 zones
* Enable SNAT on a VNet in layer 3 zones
* Auto assign IPs on virtual guests (VM or CT) through IPAM plugins
* DNS registration through DNS plugins

If an IPAM server is associated with the subnet zone, the subnet prefix will be
automatically registered in the IPAM.

Subnet configuration options:

ID:: A CIDR network address, for example `10.0.0.0/8`.

Gateway:: The IP address of the network's default gateway. On layer 3 zones
(Simple/EVPN plugins), it will be deployed on the VNet.

SNAT:: Enable Source NAT, which allows VMs from inside a
VNet to connect to the outside network by forwarding the packets to the node's
outgoing interface. On EVPN zones, forwarding is done on EVPN gateway-nodes.
Optional.

DNS Zone Prefix:: Add a prefix to the domain registration, like
`<hostname>.prefix.<domain>`. Optional.
[[pvesdn_config_controllers]]
Controllers
-----------

Some zones implement a separated control and data plane that requires an
external controller to manage the VNet's control plane.

Currently, only the `EVPN` zone requires an external controller.
[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

The `EVPN` zone requires an external controller to manage the control plane.
The EVPN controller plugin configures the Free Range Routing (frr) router.

To enable the EVPN controller, you need to install frr on every node that shall
participate in the EVPN zone:

----
apt install frr frr-pythontools
----

EVPN controller configuration options:

ASN #:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could end up
breaking global routing by mistake.

Peers:: An IP list of all nodes that are part of the EVPN zone. (Could also be
external nodes or route reflector servers.)
[[pvesdn_controller_plugin_BGP]]
BGP Controller
~~~~~~~~~~~~~~

The BGP controller is not used directly by a zone.
You can use it to configure FRR to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node, thereby
running EBGP. It can also be used to export EVPN routes to an external BGP peer.

NOTE: By default, for a simple full-mesh EVPN, you don't need to define a BGP
controller.

BGP controller configuration options:

Node:: The node of this BGP controller.

ASN #:: A unique BGP ASN number. It's highly recommended to use a private ASN
number in the range (64512 - 65534) or (4200000000 - 4294967294), as otherwise
you could break global routing by mistake.

Peer:: A list of peer IP addresses you want to communicate with using the
underlying BGP network.

EBGP:: If your peer's remote-AS is different, this enables EBGP.

Loopback Interface:: Use a loopback or dummy interface as the source of the
EVPN network.

ebgp-multihop:: Increase the number of hops to reach peers, in case they are
not directly connected or they use loopback.

bgp-multipath-as-path-relax:: Allow ECMP if your peers have different ASNs.
[[pvesdn_controller_plugin_ISIS]]
ISIS Controller
~~~~~~~~~~~~~~~

The ISIS controller is not used directly by a zone.
You can use it to configure FRR to export EVPN routes to an ISIS domain.

ISIS controller configuration options:

Node:: The node of this ISIS controller.

Domain:: A unique ISIS domain.

Network Entity Title:: A unique ISIS network address that identifies this node.

Interfaces:: A list of physical interface(s) used by ISIS.

Loopback:: Use a loopback or dummy interface as the source of the EVPN network.
[[pvesdn_config_ipam]]
IPAM
----

IP Address Management (IPAM) tools manage the IP addresses of clients on the
network. SDN in {pve} uses IPAM, for example, to find free IP addresses for new
guests.

A single IPAM instance can be associated with one or more zones.

[[pvesdn_ipam_plugin_pveipam]]
PVE IPAM Plugin
~~~~~~~~~~~~~~~

The default built-in IPAM for your {pve} cluster.
[[pvesdn_ipam_plugin_netbox]]
NetBox IPAM Plugin
~~~~~~~~~~~~~~~~~~

link:https://github.com/netbox-community/netbox[NetBox] is an open-source IP
Address Management (IPAM) and datacenter infrastructure management (DCIM) tool.

To integrate NetBox with {pve} SDN, create an API token in NetBox as described
here: https://docs.netbox.dev/en/stable/integrations/rest-api/#tokens

The NetBox configuration properties are:

URL:: The NetBox REST API endpoint: `http://yournetbox.domain.com/api`

Token:: An API access token
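Before wiring the token into {pve}, it can help to verify it against the NetBox API directly. This is a generic sketch: the hostname is the placeholder from above, and `$NETBOX_TOKEN` is assumed to hold your token:

```shell
# Query the NetBox REST API root with token authentication; a successful
# response confirms the URL and token are usable by the IPAM plugin.
curl -sf \
     -H "Authorization: Token $NETBOX_TOKEN" \
     -H "Accept: application/json" \
     "http://yournetbox.domain.com/api/" \
  && echo "token OK"
```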
[[pvesdn_ipam_plugin_phpipam]]
phpIPAM Plugin
~~~~~~~~~~~~~~

In link:https://phpipam.net/[phpIPAM] you need to create an "application" and add
an API token with admin privileges to the application.

The phpIPAM configuration properties are:

URL:: The REST-API endpoint: `http://phpipam.domain.com/api/<appname>/`

Token:: An API access token

Section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
installations use `sectionid=1` for customers.
[[pvesdn_config_dns]]
DNS
---

The DNS plugin in {pve} SDN is used to define a DNS API server for registration
of your hostname and IP address. A DNS configuration is associated with one or
more zones, to provide DNS registration for all the subnet IPs configured for
a zone.

[[pvesdn_dns_plugin_powerdns]]
PowerDNS Plugin
~~~~~~~~~~~~~~~

https://doc.powerdns.com/authoritative/http-api/index.html

You need to enable the web server and the API in your PowerDNS config:

----
api=yes
api-key=arandomgeneratedstring
webserver=yes
----

The PowerDNS configuration options are:

url:: The REST API endpoint: http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost

key:: An API access key

ttl:: The default TTL for records
[[pvesdn_setup_examples]]
Examples
--------

This section presents multiple configuration examples tailored for common SDN
use cases. It aims to offer tangible implementations, providing additional
details to enhance comprehension of the available configuration options.
[[pvesdn_setup_example_simple]]
Simple Zone Example
~~~~~~~~~~~~~~~~~~~

Simple zone networks create an isolated network for guests on a single host to
connect to each other.

TIP: Connections between guests are possible if all guests reside on the same
host, but they cannot be reached from other nodes.

* Create a simple zone named `simple`.
* Add a VNet named `vnet1`.
* Create a Subnet with a gateway and the SNAT option enabled.
* This creates a network bridge `vnet1` on the node. Assign this bridge to the
guests that shall join the network and configure an IP address.

The network interface configuration in two VMs may look like this, which allows
them to communicate via the 10.0.1.0/24 network:

----
auto ens19
iface ens19 inet static
        address 10.0.1.14/24
----

----
auto ens19
iface ens19 inet static
        address 10.0.1.15/24
----
[[pvesdn_setup_example_nat]]
Source NAT Example
~~~~~~~~~~~~~~~~~~

If you want to allow outgoing connections for guests in the simple network zone,
the simple zone offers a Source NAT (SNAT) option.

Starting from the configuration xref:pvesdn_setup_example_simple[above], add a
Subnet to the VNet `vnet1`, set a gateway IP and enable the SNAT option:

----
Subnet: 172.16.0.0/24
Gateway: 172.16.0.1
SNAT: checked
----

In the guests, configure the static IP address inside the subnet's IP range.

The node itself will join this network with the Gateway IP '172.16.0.1' and
function as the NAT gateway for guests within the subnet range.
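Under the hood, SNAT on a Linux node amounts to masquerading. The following is only a conceptual sketch of an equivalent rule, not necessarily what the SDN stack literally generates; `vmbr0` as the outgoing interface is an assumption:

```shell
# Conceptual equivalent of SNAT for the example subnet: rewrite the source
# address of packets leaving via the node's outgoing interface (assumed vmbr0).
iptables -t nat -A POSTROUTING -s '172.16.0.0/24' -o vmbr0 -j MASQUERADE
```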
[[pvesdn_setup_example_vlan]]
VLAN Setup Example
~~~~~~~~~~~~~~~~~~

When VMs on different nodes need to communicate through an isolated network, the
VLAN zone allows network-level isolation using VLAN tags.

Create a VLAN zone named `myvlanzone`:

----
ID: myvlanzone
Bridge: vmbr0
----

Create a VNet named `myvnet1` with VLAN tag 10 and the previously created
`myvlanzone`:

----
ID: myvnet1
Zone: myvlanzone
Tag: 10
----

Apply the configuration through the main SDN panel, to create VNets locally on
each node.

Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on `myvnet1`.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second virtual machine ('vm2') on node2, with a vNIC on the same VNet
`myvnet1` as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Following this, you should be able to ping between both VMs using that network.
[[pvesdn_setup_example_qinq]]
QinQ Setup Example
~~~~~~~~~~~~~~~~~~

This example configures two QinQ zones and adds two VMs to each zone to
demonstrate the additional layer of VLAN tags, which allows the configuration of
more isolated VLANs.

A typical use case for this configuration is a hosting provider that provides an
isolated network to customers for VM communication but isolates the VMs from
other customers.

Create a QinQ zone named `qinqzone1` with service VLAN 20:

----
ID: qinqzone1
Bridge: vmbr0
Service VLAN: 20
----

Create another QinQ zone named `qinqzone2` with service VLAN 30:

----
ID: qinqzone2
Bridge: vmbr0
Service VLAN: 30
----

Create a VNet named `qinqvnet1` with VLAN-ID 100 on the previously created
`qinqzone1` zone:

----
ID: qinqvnet1
Zone: qinqzone1
Tag: 100
----

Create a `qinqvnet2` with VLAN-ID 100 on the `qinqzone2` zone:

----
ID: qinqvnet2
Zone: qinqzone2
Tag: 100
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create four Debian-based virtual machines (vm1, vm2, vm3, vm4) and add network
interfaces to vm1 and vm2 with bridge `qinqvnet1`, and to vm3 and vm4 with
bridge `qinqvnet2`.

Inside the VM, configure the IP addresses of the interfaces, for example via
`/etc/network/interfaces`:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

// TODO: systemd-network example
Configure all four VMs to have IP addresses in the '10.0.3.101' to
'10.0.3.104' range.

Now you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, neither of the VMs 'vm1' or 'vm2' can ping
the VMs 'vm3' or 'vm4', as they are in a different zone with a different
service-VLAN.
[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
~~~~~~~~~~~~~~~~~~~

The example assumes a cluster with three nodes, with the node IP addresses
192.168.0.1, 192.168.0.2 and 192.168.0.3.

Create a VXLAN zone named `myvxlanzone` and add all IPs from the nodes to the
peer address list. Use the default MTU of 1450 or configure accordingly:

----
ID: myvxlanzone
Peers Address List: 192.168.0.1,192.168.0.2,192.168.0.3
----

Create a VNet named `vxvnet1` using the VXLAN zone `myvxlanzone` created
previously:

----
ID: vxvnet1
Zone: myvxlanzone
Tag: 100000
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on `vxvnet1`.

Use the following network configuration for this VM (note the lower MTU):

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second virtual machine ('vm2') on node3, with a vNIC on the same VNet
`vxvnet1` as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.
[[pvesdn_setup_example_evpn]]
EVPN Setup Example
~~~~~~~~~~~~~~~~~~

The example assumes a cluster with three nodes (node1, node2, node3) with IP
addresses 192.168.0.1, 192.168.0.2 and 192.168.0.3.

Create an EVPN controller, using a private ASN number and the above node
addresses as peers:

----
ID: myevpnctl
ASN#: 65000
Peers: 192.168.0.1,192.168.0.2,192.168.0.3
----

Create an EVPN zone named `myevpnzone`, assign the previously created
EVPN controller and define 'node1' and 'node2' as exit nodes:

----
ID: myevpnzone
VRF VXLAN Tag: 10000
Controller: myevpnctl
MTU: 1450
VNet MAC Address: 32:F4:05:FE:6C:0A
Exit Nodes: node1,node2
----

Create the first VNet named `myvnet1` using the EVPN zone `myevpnzone`:

----
ID: myvnet1
Zone: myevpnzone
Tag: 11000
----

Create a subnet on `myvnet1`:

----
Subnet: 10.0.1.0/24
Gateway: 10.0.1.1
----

Create the second VNet named `myvnet2` using the same EVPN zone `myevpnzone`:

----
ID: myvnet2
Zone: myevpnzone
Tag: 12000
----

Create a different subnet on `myvnet2`:

----
Subnet: 10.0.2.0/24
Gateway: 10.0.2.1
----

Apply the configuration from the main SDN web-interface panel to create VNets
locally on each node and generate the FRR configuration.

Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on `myvnet1`.

Use the following network configuration for 'vm1':

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1
----

Create a second virtual machine ('vm2') on node2, with a vNIC on the other VNet
`myvnet2`.

Use the following network configuration for 'vm2':

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1
----

Now you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from 'vm2' on the non-gateway node3, the packet
will go to the configured 'myvnet2' gateway, then will be routed to the exit
nodes ('node1' or 'node2') and from there it will leave those nodes over the
default gateway configured on node1 or node2.

NOTE: You need to add reverse routes for the '10.0.1.0/24' and '10.0.2.0/24'
networks to node1 and node2 on your external gateway, so that the public network
can reply back.

If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24
and 10.0.2.0/24 in this example) will be announced dynamically.
Notes
-----

Multiple EVPN Exit Nodes
~~~~~~~~~~~~~~~~~~~~~~~~

If you have multiple gateway nodes, you should disable the `rp_filter` (Strict
Reverse Path Filter) option, because packets can arrive at one node but go out
from another node.

Add the following to `/etc/sysctl.conf`:

----
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
----
VXLAN IPSEC Encryption
~~~~~~~~~~~~~~~~~~~~~~

To add IPSEC encryption on top of a VXLAN, this example shows how to use
`strongswan`.

You'll need to reduce the 'MTU' by an additional 60 bytes for IPv4 or 80 bytes
for IPv6 to handle encryption.

So with a default real MTU of 1500, you need to use an MTU of 1370 (1370 + 80
(IPSEC) + 50 (VXLAN) == 1500).
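The worst-case (IPv6) MTU budget can be double-checked with a quick calculation:

```shell
# MTU budget: physical frame minus IPsec (worst case, IPv6) and VXLAN overhead
phys_mtu=1500
ipsec_overhead=80   # 60 bytes would suffice for IPv4
vxlan_overhead=50
echo $((phys_mtu - ipsec_overhead - vxlan_overhead))   # prints 1370
```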
Install strongswan on the host:

----
apt install strongswan
----

Add configuration to `/etc/ipsec.conf`. We only need to encrypt traffic from
the VXLAN UDP port '4789':

----
conn %default
    ike=aes256-sha1-modp1024!  # the fastest, but reasonably secure cipher on modern HW
    esp=aes256-sha1!
    leftfirewall=yes           # this is necessary when using Proxmox VE firewall rules

conn output
    rightsubnet=%dynamic[udp/4789]
    right=%any
    type=transport
    authby=psk
    auto=route

conn input
    leftsubnet=%dynamic[udp/4789]
    type=transport
    authby=psk
    auto=route
----

Generate a pre-shared key with:

----
openssl rand -base64 128
----

and add the key to `/etc/ipsec.secrets`, so that the file contents look like:

----
: PSK <generatedbase64key>
----

Copy the PSK and the configuration to all nodes participating in the VXLAN network.