Software Defined Network
========================
The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
virtual networks (vnets) at datacenter level.

WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.
[[pvesdn_installation]]
Installation
------------

To enable the experimental SDN integration, you need to install the
`libpve-network-perl` and `ifupdown2` packages on every node:

----
apt install libpve-network-perl ifupdown2
----

After that you need to add the following line:

----
source /etc/network/interfaces.d/*
----

at the end of the `/etc/network/interfaces` configuration file, so that the SDN
configuration gets included and activated.
Basic Overview
--------------

The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones, where a zone is its own virtually
separated network area. A 'VNet' is a virtual network that belongs to a zone.
Depending on which type or plugin the zone uses, it can behave differently and
offer different features, advantages, and disadvantages.
Normally a 'VNet' appears as a common Linux bridge with either a VLAN or
'VXLAN' tag, though some can also use layer 3 routing for control.
'VNets' are deployed locally on each node, after the configuration was
committed from the cluster-wide datacenter SDN administration interface.
Configuration
-------------

Configuration is done at the datacenter (cluster-wide) level and saved in
configuration files located in the shared configuration file system at
`/etc/pve/sdn`.
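Each SDN object type described below is stored in its own file in the shared
configuration file system under `/etc/pve/sdn`. The file names shown here are
an assumption based on the object types and may differ between versions:

----
/etc/pve/sdn/zones.cfg        # zone definitions
/etc/pve/sdn/vnets.cfg        # VNet definitions
/etc/pve/sdn/subnets.cfg      # subnet definitions
/etc/pve/sdn/controllers.cfg  # controller definitions
/etc/pve/sdn/ipams.cfg        # IPAM definitions
/etc/pve/sdn/dns.cfg          # DNS API server definitions
----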
On the web-interface, SDN features 3 main sections for the configuration:

* SDN: An overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: Create virtual network bridges and manage subnets

In addition to this, the following options are available:

* Controller: For controlling layer 3 routing in complex setups

* Subnets: Used to define IP networks on VNets

* IPAM: Enables the use of external tools for IP address management (guest IPs)

* DNS: Define a DNS server API for registering virtual guests' hostnames and
IP addresses
[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of zones
on different nodes.

The 'Apply' button is used to push and reload the local configuration on all
cluster nodes.
[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~

After applying the configuration through the main SDN web-interface panel, the
local network configuration is generated locally on each node in
`/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.

You can monitor the status of local zones and VNets through the main tree.
[[pvesdn_config_zone]]
Zones
-----

A zone defines a virtually separated network.

Zones can use different technologies for separation:

* VLAN: Virtual LANs are the classic method of subdividing a LAN

* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: Layer 2 VXLAN

* Simple: Isolated Bridge. A simple layer 3 routing bridge (NAT)

* bgp-evpn: VXLAN using layer 3 border gateway protocol (BGP) routing

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict users to use only
a specific zone and only the VNets in that zone.
Common options
~~~~~~~~~~~~~~

The following options are available for all zone types:

nodes:: The nodes on which the VNets configured for this zone should be
deployed and available.

ipam:: Optional. Use an IPAM tool to manage IPs in this zone.

dns:: Optional. DNS API server.

reversedns:: Optional. Reverse DNS API server.

dnszone:: Optional. DNS domain name. Used to register hostnames, such as
`<hostname>.<domain>`. The DNS zone must already exist on the DNS server.
[[pvesdn_zone_plugin_simple]]
Simple Zones
~~~~~~~~~~~~

This is the simplest plugin. It will create an isolated VNet bridge.
This bridge is not linked to a physical interface, and VM traffic is only
local to the node(s).
It can also be used in NAT or routed setups.
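As a rough sketch, a simple zone entry in `/etc/pve/sdn/zones.cfg` could look
like the following. The exact key names are an assumption and may differ
between versions:

----
simple: simplezone1
        ipam pve
        nodes node1,node2
----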
[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This plugin reuses an existing local Linux or OVS bridge, and manages the
VLANs on it.
The benefit of using the SDN module is that you can create different zones
with specific VNet VLAN tags, and restrict virtual machines to separated
zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already configured on *each*
local node.
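A VLAN zone definition in `/etc/pve/sdn/zones.cfg` might, for example, look
like this sketch (key names are assumptions):

----
vlan: myvlanzone
        bridge vmbr0
        ipam pve
----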
[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the
so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local, VLAN-aware bridge that is already configured on each local
node

service vlan:: The main VLAN tag of this zone

service vlan protocol:: Allows you to choose between an 802.1q (default) or
802.1ad service VLAN type.

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, you must reduce the MTU to `1496` if your physical
interface MTU is `1500`.
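Putting these options together, a QinQ zone entry could look roughly like the
following sketch (key names are assumptions; `tag` here stands for the service
VLAN):

----
qinq: qinqzone1
        bridge vmbr0
        tag 20
        vlan-protocol 802.1q
        mtu 1496
----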
[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin establishes a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of public internet network
infrastructure.

This is a layer 2 tunnel only; no routing between different VNets is possible.

Each VNet will use a specific VXLAN id from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of IP addresses of all nodes through which you
want to communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
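A VXLAN zone entry might look roughly like this sketch (key names are
assumptions), with the MTU reduced by 50 bytes from a 1500-byte underlay:

----
vxlan: myvxlanzone
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        mtu 1450
----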
[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all the supported plugins.

BGP-EVPN allows one to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, meaning a virtual guest can use this address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN tag:: This is a VXLAN-id used for routing interconnect between
VNets. It must be different than the VXLAN-id of the VNets.

controller:: An EVPN-controller needs to be defined first (see controller
plugins section).

VNet MAC address:: A unique, anycast MAC address for all VNets in this zone.
Will be auto-generated if not defined.

Exit Nodes:: This is used if you want to define some {pve} nodes as exit
gateways from the EVPN network, through the real network. The configured nodes
will announce a default route in the EVPN network.

MTU:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than the maximal MTU of the outgoing physical interface.
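Combining these options, an EVPN zone entry could look roughly like the
following sketch (key names are assumptions; the controller `myevpnctl` must
be defined first):

----
evpn: myevpnzone
        controller myevpnctl
        vrf-vxlan 10000
        exitnodes node1,node2
        mtu 1450
----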
[[pvesdn_config_vnet]]
VNets
-----

A `VNet`, in its basic form, is just a Linux bridge that will be deployed
locally on the node and used for virtual machine communication.

The VNet configuration properties are:

ID:: An 8 character ID to name and identify a VNet

Alias:: Optional longer name, if the ID isn't enough

Zone:: The associated zone for this VNet

Tag:: The unique VLAN or VXLAN id

VLAN Aware:: Enables adding an extra VLAN tag in the virtual machine or
container's vNIC configuration, or allows the guest OS to manage the VLAN tag.
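A VNet definition in `/etc/pve/sdn/vnets.cfg` might look roughly like this
sketch (key names are assumptions):

----
vnet: myvnet1
        zone myvlanzone
        tag 10
        alias "my first vnet"
----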
[[pvesdn_config_subnet]]
Subnets
-------

A sub-network (subnet) allows you to define a specific IP network (IPv4 or
IPv6). For each VNet, you can define one or more subnets.

A subnet can be used to:

* Restrict the IP addresses you can define on a specific VNet
* Assign routes/gateways on a VNet in layer 3 zones
* Enable SNAT on a VNet in layer 3 zones
* Auto-assign IPs on virtual guests (VM or CT) through an IPAM plugin
* Enable DNS registration through DNS plugins

If an IPAM server is associated with the subnet's zone, the subnet prefix will
be automatically registered in the IPAM.
Subnet properties are:

ID:: A CIDR network address, for example 10.0.0.0/8

Gateway:: The IP address of the network's default gateway. On layer 3 zones
(simple/EVPN plugins), it will be deployed on the VNet.

SNAT:: Optional. Enable SNAT for layer 3 zones (simple/EVPN plugins) for this
subnet. The subnet's source IP will be NATted to the server's outgoing
interface/IP. On EVPN zones, this is only done on EVPN gateway-nodes.

Dnszoneprefix:: Optional. Add a prefix to the domain registration, like
<hostname>.prefix.<domain>
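A subnet entry in `/etc/pve/sdn/subnets.cfg` could look roughly like the
following sketch (the ID format and key names are assumptions):

----
subnet: myzone-10.0.0.0-24
        vnet myvnet1
        gateway 10.0.0.1
        snat 1
----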
[[pvesdn_config_controllers]]
Controllers
-----------

Some zone types need an external controller to manage the VNet control-plane.
Currently this is only required for the `bgp-evpn` zone plugin.
[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones:

----
apt install frr frr-pythontools
----

Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could end up
breaking global routing, or being disrupted by it, by mistake.

peers:: An IP list of all nodes that should communicate with each other for
the EVPN (could also be external nodes or route reflector servers)
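An EVPN controller entry in `/etc/pve/sdn/controllers.cfg` could look roughly
like this sketch (key names are assumptions; the ASN is picked from the
private range):

----
evpn: myevpnctl
        asn 65000
        peers 192.168.0.1,192.168.0.2,192.168.0.3
----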
[[pvesdn_controller_plugin_BGP]]
BGP Controller
~~~~~~~~~~~~~~

The BGP controller is not used directly by a zone.
You can use it to configure FRR to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node, allowing you
to do EBGP.

Configuration options:

node:: The node of this BGP controller

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number from the range (64512 - 65534) or (4200000000 - 4294967294), as
otherwise you could end up breaking global routing, or being disrupted by it,
by mistake.

peers:: An IP list of peers that you want to communicate with for the
underlying BGP network.

ebgp:: If your peer's remote-AS is different, this enables EBGP.

loopback:: Use a loopback or dummy interface as the source of the EVPN
network (for multipath).

ebgp-multihop:: Increase the number of hops to reach peers, in case they are
not directly connected or they use a loopback.
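For illustration, a BGP controller entry could look roughly like the following
sketch (the key names, the controller ID and the peer address are
assumptions):

----
bgp: bgpnode1
        node node1
        asn 65001
        peers 172.16.0.254
        ebgp 1
----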
[[pvesdn_config_ipam]]
IPAMs
-----

IPAM (IP Address Management) tools are used to manage/assign the IP addresses
of guests on the network. It can be used to find free IP addresses when you
create a VM or CT, for example (not yet implemented).

An IPAM can be associated with one or more zones, to provide IP addresses for
all subnets defined in those zones.
[[pvesdn_ipam_plugin_pveipam]]
{pve} IPAM plugin
~~~~~~~~~~~~~~~~~

This is the default internal IPAM for your {pve} cluster, if you don't have
external IPAM software.
[[pvesdn_ipam_plugin_phpipam]]
phpIPAM plugin
~~~~~~~~~~~~~~

You need to create an application in phpIPAM, and add an API token with admin
privileges.

The phpIPAM properties are:

url:: The REST-API endpoint: `http://phpipam.domain.com/api/<appname>/`

token:: An API access token

section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
installations use `sectionid=1` for customers.
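A phpIPAM entry in `/etc/pve/sdn/ipams.cfg` might then look roughly like this
sketch (key names are assumptions; replace the placeholders with your own
values):

----
phpipam: myphpipam
        url http://phpipam.domain.com/api/<appname>
        token <your-api-token>
        section 1
----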
[[pvesdn_ipam_plugin_netbox]]
NetBox IPAM plugin
~~~~~~~~~~~~~~~~~~

NetBox is an IP address management (IPAM) and datacenter infrastructure
management (DCIM) tool. See the source code repository for details:
https://github.com/netbox-community/netbox

You need to create an API token in NetBox first, see
https://netbox.readthedocs.io/en/stable/api/authentication

The NetBox properties are:

url:: The REST API endpoint: `http://yournetbox.domain.com/api`

token:: An API access token
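A NetBox entry in `/etc/pve/sdn/ipams.cfg` could look roughly like this sketch
(key names are assumptions; replace the token placeholder):

----
netbox: mynetbox
        url http://yournetbox.domain.com/api
        token <your-api-token>
----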
[[pvesdn_config_dns]]
DNS
---

The DNS plugin in {pve} SDN is used to define a DNS API server for
registration of your hostname and IP address. A DNS configuration is
associated with one or more zones, to provide DNS registration for all the
subnet IPs configured for a zone.
[[pvesdn_dns_plugin_powerdns]]
PowerDNS plugin
~~~~~~~~~~~~~~~
https://doc.powerdns.com/authoritative/http-api/index.html

You need to enable the web server and the API in your PowerDNS config:

----
api=yes
api-key=arandomgeneratedstring
webserver=yes
----

The PowerDNS properties are:

url:: The REST API endpoint: http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost

key:: An API access key

ttl:: The default TTL for records
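A PowerDNS entry in `/etc/pve/sdn/dns.cfg` might then look roughly like the
following sketch (key names are assumptions):

----
powerdns: mypowerdns
        url http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
        key <your-api-key>
        ttl 3600
----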
Examples
--------

[[pvesdn_setup_example_vlan]]
VLAN Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.
Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----
Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----
Create a VLAN zone named `myvlanzone', using `vmbr0` as its bridge.

Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.
Apply the configuration through the main SDN panel, to create VNets locally
on each node.

Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on
`myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----
Create a second virtual machine ('vm2') on node2, with a vNIC on the same
VNet `myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Then, you should be able to ping between both VMs over that network.
[[pvesdn_setup_example_qinq]]
QinQ Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.
Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----
Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----
Create a QinQ zone named `qinqzone1', with `vmbr0` as bridge and service
VLAN 20.

Create another QinQ zone named `qinqzone2', with `vmbr0` as bridge and service
VLAN 30.

Create a VNet named `myvnet1' with customer VLAN-id 100 on the previously
created `qinqzone1' zone.

Create `myvnet2' with customer VLAN-id 100 on the previously created
`qinqzone2' zone.
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on
`myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----
Create a second virtual machine ('vm2') on node2, with a vNIC on the same
VNet `myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----
Create a third virtual machine ('vm3') on node1, with a vNIC on the other
VNet `myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----
Create another virtual machine ('vm4') on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, neither of the VMs 'vm1' or 'vm2' can ping
the VMs 'vm3' or 'vm4', as they are on a different zone with a different
service-vlan.
[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
~~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.
node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
Create a VXLAN zone named `myvxlanzone', using a lower MTU to ensure the
extra 50 bytes of the VXLAN header can fit. Add all previously configured IPs
from the nodes to the peer address list.

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----
Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on
`myvnet1'.

Use the following network configuration for this VM (note the lower MTU).

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----
Create a second virtual machine ('vm2') on node3, with a vNIC on the same
VNet `myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.
[[pvesdn_setup_example_evpn]]
EVPN Setup Example
~~~~~~~~~~~~~~~~~~
node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
Create an EVPN controller, using a private ASN number and the IP addresses of
the nodes from above.

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
----
Create an EVPN zone named `myevpnzone', using the previously created
EVPN-controller. Define 'node1' and 'node2' as exit nodes.

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
vnet mac address: 32:F4:05:FE:6C:0A
exitnodes: node1,node2
----
Create the first VNet named `myvnet1', using the EVPN zone `myevpnzone'.

Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway on `myvnet1'.

Create the second VNet named `myvnet2', using the same EVPN zone `myevpnzone'.

Create a different subnet 10.0.2.0/24 with 10.0.2.1 as gateway on `myvnet2'.
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR configuration.

Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on
`myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1   # this is the IP of the VNet myvnet1
----
Create a second virtual machine ('vm2') on node2, with a vNIC on the other
VNet `myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1   # this is the IP of the VNet myvnet2
----

Then, you should be able to ping vm2 from vm1, and vm1 from vm2.
If you ping an external IP from 'vm2' on the non-gateway node3, the packet
will go to the configured 'myvnet2' gateway, then will be routed to the exit
nodes ('node1' or 'node2') and from there it will leave those nodes over the
default gateway configured on node1 or node2.

NOTE: You have to add reverse routes for the '10.0.1.0/24' and '10.0.2.0/24'
networks to node1 and node2 on your external gateway, so that the public
network can reply back.

If you have configured an external BGP router, the BGP-EVPN routes
(10.0.1.0/24 and 10.0.2.0/24 in this example) will be announced dynamically.