Software Defined Network
========================
The **S**oftware **D**efined **N**etwork (SDN) feature allows you to create
virtual networks (VNets) at the datacenter level.
WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.
[[pvesdn_installation]]
To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:

----
apt install libpve-network-perl
----
You need to have the `ifupdown2` package installed on each node, to manage
local configuration reloading without a reboot:

----
apt install ifupdown2
----

You also need the following line at the end of `/etc/network/interfaces`, to
have the SDN configuration included:

----
source /etc/network/interfaces.d/*
----
The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones; a zone is its own virtually separated
network area. A 'VNet' is a virtual network that belongs to a zone. Depending
on the type or plugin the zone uses, it can behave differently and offer
different features, advantages, or disadvantages. Normally a 'VNet' appears as
a common Linux bridge with either a VLAN or 'VXLAN' tag, although some zones
can also use layer 3 routing for control. 'VNets' are deployed locally on each
node, after the configuration has been committed from the cluster-wide
datacenter SDN administration interface.
The configuration is done at the datacenter (cluster-wide) level and is saved
in configuration files located in the shared configuration file system:

On the web interface, the SDN feature has three main sections for configuration:

* SDN: An overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: Create virtual network bridges and manage subnets

Additional options:

* Controller: For complex setups, to control layer 3 routing

* IPAM: Enables the use of external tools for IP address management (guest IPs)

* DNS: Define a DNS server API for registering virtual guest hostnames and IP
addresses
[[pvesdn_config_main_sdn]]
This is the main status panel. Here you can see the deployment status of zones
on different nodes.

The 'Apply' button is used to push and reload the local configuration on all
cluster nodes.
[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~
After applying the configuration through the main SDN web-interface panel, the
local network configuration is generated locally on each node in
`/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.

You can monitor the status of local zones and VNets through the main tree.
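The generated file uses plain ifupdown2 syntax, so it can be inspected
directly on each node. As a rough illustration (the sample content and the
helper function below are ours, not part of {pve}), this sketch lists which
interface stanzas such a file declares:

```python
# Sketch: list the interface stanzas declared in an ifupdown2-style file.
# The sample content is hypothetical, for illustration only.
SAMPLE = """\
auto myvnet1
iface myvnet1
        bridge_ports vxlan_myvnet1
        bridge_stp off
        bridge_fd 0
"""

def defined_interfaces(text: str) -> list[str]:
    """Return the interface names declared with 'auto' in the file."""
    return [line.split()[1] for line in text.splitlines()
            if line.startswith("auto ")]

print(defined_interfaces(SAMPLE))  # ['myvnet1']
```

On a real node, you would read `/etc/network/interfaces.d/sdn` instead of the
inline sample.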
[[pvesdn_config_zone]]
A zone defines a virtually separated network. Zones can use different
technologies for separation:
* VLAN: Virtual LANs are the classic method of subdividing a LAN

* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: Layer 2 VXLAN

* Simple: Isolated bridge. A simple layer 3 routing bridge (NAT)

* bgp-evpn: VXLAN using layer 3 border gateway protocol (BGP) routing

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict users to use only
a specific zone and only the VNets in that zone.
nodes:: Deploy and allow to use the VNets configured for this zone only on
these nodes.

Ipam:: Optional. Use an IPAM tool to manage IPs in this zone.

Dns:: Optional. DNS API server.

ReverseDns:: Optional. Reverse DNS API server.

Dnszone:: Optional. DNS domain name. Used to register hostnames, like
`<hostname>.<domain>`. The DNS zone needs to already exist on the DNS server.
[[pvesdn_zone_plugin_simple]]
This is the simplest plugin. It will create an isolated VNet bridge. This
bridge is not linked to a physical interface, and VM traffic is only local to
the node(s). It can also be used in NAT or routed setups.
[[pvesdn_zone_plugin_vlan]]
This plugin reuses an existing local Linux or OVS bridge, and manages the
VLANs on it. The benefit of using the SDN module is that you can create
different zones with specific VNet VLAN tags, and restrict virtual machines to
separated zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already configured on *each*
local node.
[[pvesdn_zone_plugin_qinq]]
QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the
so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local, VLAN-aware bridge that is already configured on each local
node

service vlan:: The main VLAN tag of this zone

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, you must reduce the MTU to `1496` if your physical
interface MTU is `1500`.
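The MTU arithmetic above can be sketched as follows (the helper function is
ours, for illustration only):

```python
def qinq_mtu(physical_mtu: int, extra_tags: int = 1) -> int:
    """Each additional 802.1ad tag adds 4 bytes of header, so the usable
    MTU inside the zone shrinks by 4 bytes per stacked tag."""
    return physical_mtu - 4 * extra_tags

print(qinq_mtu(1500))  # 1496, as in the example above
```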
[[pvesdn_zone_plugin_vxlan]]
The VXLAN plugin will establish a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of a public internet
network.

This is a layer 2 tunnel only; no routing between different VNets is possible.

Each VNet will use a specific VXLAN id from the range (1 - 16777215).

Specific VXLAN configuration options:
peers address list:: A list of IP addresses of all nodes through which you
want to communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
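The two numeric constraints above (the 24-bit VXLAN id range and the 50-byte
encapsulation overhead) can be sketched like this (helper names are ours):

```python
VNI_MIN, VNI_MAX = 1, 16777215  # 24-bit VXLAN Network Identifier range
VXLAN_OVERHEAD = 50             # bytes of VXLAN/UDP/IP encapsulation headers

def valid_vni(vni: int) -> bool:
    """Check that a VXLAN id fits the per-VNet range quoted above."""
    return VNI_MIN <= vni <= VNI_MAX

def vxlan_mtu(physical_mtu: int) -> int:
    """MTU usable inside the overlay: 50 bytes below the underlay MTU."""
    return physical_mtu - VXLAN_OVERHEAD

print(valid_vni(100000), vxlan_mtu(1500))  # True 1450
```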
[[pvesdn_zone_plugin_evpn]]
This is the most complex of all the supported plugins.

BGP-EVPN allows you to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, meaning a virtual guest can use this address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.
Specific EVPN configuration options:

VRF VXLAN Tag:: This is a VXLAN-ID used for the routing interconnect between
VNets; it must be different from the VXLAN-IDs of the VNets.

controller:: An EVPN controller needs to be defined first (see controller
plugins section).

Exit Nodes:: This is used if you want to define some {pve} nodes as exit
gateways from the EVPN network, through the real network. These nodes will
announce a default route in the EVPN network.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
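The constraint that the VRF VXLAN tag must differ from every VNet's VXLAN id
can be sketched as a small check (the helper is ours, for illustration; {pve}
performs its own validation):

```python
def valid_vrf_tag(vrf_vxlan_tag: int, vnet_tags: set[int]) -> bool:
    """The zone's VRF VXLAN tag must not collide with any VNet's VXLAN id,
    since the VRF interconnect is itself a VXLAN tunnel."""
    return vrf_vxlan_tag not in vnet_tags

print(valid_vrf_tag(10000, {100, 200}))  # True: tag is free
print(valid_vrf_tag(100, {100, 200}))    # False: collides with a VNet
```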
[[pvesdn_config_vnet]]
A `VNet`, in its basic form, is just a Linux bridge that will be deployed
locally on the node and used for virtual machine communication.

VNet properties are:

ID:: An 8 character ID to name and identify a VNet

Alias:: Optional longer name, if the ID isn't enough

Zone:: The associated zone for this VNet

Tag:: The unique VLAN or VXLAN id

VLAN Aware:: Allow adding an extra VLAN tag in the virtual machine or
container's vNIC configuration, or allow the guest OS to manage the VLAN tag.
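The ID length limit above can be sketched as a validation helper (the helper
is ours, and the exact allowed character set is an assumption; only the
8-character limit comes from the text):

```python
import re

def valid_vnet_id(vnet_id: str) -> bool:
    """A VNet ID is at most 8 characters. We additionally assume a plain
    alphanumeric name here (assumption, not from the text above)."""
    return bool(re.fullmatch(r"[a-zA-Z0-9]{1,8}", vnet_id))

print(valid_vnet_id("myvnet1"))        # True: 7 characters
print(valid_vnet_id("averylongname"))  # False: more than 8 characters
```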
[[pvesdn_config_subnet]]
For each VNet you can define one or multiple subnets, to define an IP network
(IPv4 or IPv6).

A subnet can be used to:

* Restrict the IP addresses you can define on a specific VNet
* Assign routes/gateways on a VNet in layer 3 zones
* Enable SNAT in layer 3 zones
* Auto-assign IPs on virtual guests (VM or CT) through IPAM plugins
* Perform DNS registration through DNS plugins

If an IPAM server is associated with the subnet's zone, the subnet prefix will
automatically be registered in the IPAM.
Subnet properties are:

ID:: A CIDR network address, for example 10.0.0.0/8

Gateway:: The IP address of the network's default gateway. On layer 3 zones
(simple/EVPN plugins), it will be deployed on the VNet.

Snat:: Optional. Enable SNAT for layer 3 zones (simple/EVPN plugins) for this
subnet. The subnet's source IP will be NATted to the server's outgoing
interface/IP. On EVPN zones, this is done only on EVPN gateway nodes.

Dnszoneprefix:: Optional. Add a prefix to the domain registration, like
<hostname>.prefix.<domain>
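A gateway only makes sense if it lies inside the subnet's CIDR prefix. As a
minimal sketch (the helper name is ours, not part of {pve}), this can be
checked with Python's standard `ipaddress` module:

```python
import ipaddress

def gateway_fits_subnet(subnet_cidr: str, gateway_ip: str) -> bool:
    """Check that a gateway address lies inside the subnet's CIDR prefix
    (illustrative helper, works for IPv4 and IPv6)."""
    return ipaddress.ip_address(gateway_ip) in ipaddress.ip_network(subnet_cidr)

print(gateway_fits_subnet("10.0.0.0/8", "10.0.0.1"))     # True
print(gateway_fits_subnet("10.0.0.0/8", "192.168.0.1"))  # False
```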
[[pvesdn_config_controllers]]
Some zone types need an external controller to manage the VNet control plane.
Currently this is only required for the `bgp-evpn` zone plugin.
[[pvesdn_controller_plugin_evpn]]
For `BGP-EVPN`, we need a controller to manage the control plane. The
currently supported software controller is the "frr" router. You need to
install it on each node where you want to deploy EVPN zones:

----
apt install frr frr-pythontools
----
Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could break
global routing, or be broken by it, by mistake.

peers:: An IP list of all nodes with which you want to communicate for the
EVPN (these could also be external nodes or route reflector servers)
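The private ASN ranges quoted above (they correspond to RFC 6996) can be
checked with a small helper (the function is ours, for illustration):

```python
# Private ASN ranges, matching the values quoted above (RFC 6996).
PRIVATE_ASN_RANGES = [(64512, 65534), (4200000000, 4294967294)]

def is_private_asn(asn: int) -> bool:
    """Return True if the ASN falls in a private-use range, and is
    therefore safe to use without risking clashes with global routing."""
    return any(lo <= asn <= hi for lo, hi in PRIVATE_ASN_RANGES)

print(is_private_asn(65000))  # True: fine for an internal EVPN fabric
print(is_private_asn(174))    # False: a globally assigned ASN
```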
[[pvesdn_controller_plugin_BGP]]
The BGP controller is not used directly by a zone. You can use it to configure
FRR to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node.

Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could break
global routing, or be broken by it, by mistake.

peers:: An IP list of the peers with which you want to communicate for the
underlay BGP network

ebgp:: If your peers' remote-AS is different, this enables EBGP.

node:: The node of this BGP controller

loopback:: Use a loopback or dummy interface as the source for the EVPN
network (for multipath).
[[pvesdn_config_ipam]]
IPAM (IP Address Management) tools are used to manage/assign the IP addresses
of guests on the network. They can be used to find free IP addresses when you
create a VM or CT, for example (not yet implemented).

An IPAM is associated with one or more zones, to provide IP addresses for all
subnets defined in those zones.
[[pvesdn_ipam_plugin_pveipam]]
This is the default internal IPAM for your {pve} cluster, if you don't have
external IPAM software.
[[pvesdn_ipam_plugin_phpipam]]
You need to create an application in phpIPAM and add an API token with admin
permission.

phpIPAM properties are:

* Url: The REST-API endpoint: `http://phpipam.domain.com/api/<appname>/`
* Token: Your API token
* Section: An integer ID. Sections are a group of subnets in phpIPAM.
Default installations use `sectionid=1` for customers.
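As a rough sketch of how these properties are consumed, the following builds a
GET request against the phpIPAM REST API using Python's standard library. We
assume the app token is passed in a `token` header, per the phpIPAM API
documentation; the URL, token, and path are placeholders:

```python
import urllib.request

def phpipam_request(base_url: str, token: str, path: str) -> urllib.request.Request:
    """Build (but do not send) a GET request for the phpIPAM REST API.
    The 'token' header name is taken from the phpIPAM API docs."""
    url = base_url.rstrip("/") + "/" + path.lstrip("/")
    return urllib.request.Request(url, headers={"token": token})

req = phpipam_request("http://phpipam.domain.com/api/myapp/", "secret",
                      "sections/1/subnets/")
print(req.full_url)  # http://phpipam.domain.com/api/myapp/sections/1/subnets/
```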
[[pvesdn_ipam_plugin_netbox]]
https://github.com/netbox-community/netbox

You need to create an API token in NetBox:
https://netbox.readthedocs.io/en/stable/api/authentication

NetBox properties are:

Url:: The REST API URL: `http://yournetbox.domain.com/api`

Token:: Your API token
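NetBox expects the token in an `Authorization: Token <token>` header. A
minimal sketch with the standard library (URL, token, and path below are
placeholders, and the helper is ours):

```python
import urllib.request

def netbox_request(base_url: str, token: str, path: str) -> urllib.request.Request:
    """Build (but do not send) a GET request for the NetBox REST API,
    using NetBox's 'Authorization: Token <token>' scheme."""
    url = base_url.rstrip("/") + "/" + path.lstrip("/")
    return urllib.request.Request(url, headers={"Authorization": f"Token {token}"})

req = netbox_request("http://yournetbox.domain.com/api", "secret", "ipam/prefixes/")
print(req.full_url)  # http://yournetbox.domain.com/api/ipam/prefixes/
```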
[[pvesdn_config_dns]]
The DNS plugin is used to define a DNS API server for the registration of your
hostnames and IP addresses. A DNS configuration is associated with one or more
zones, to provide DNS registration for all the IPs in the subnets defined in
those zones.
[[pvesdn_dns_plugin_powerdns]]
https://doc.powerdns.com/authoritative/http-api/index.html

You need to enable the web server and the API in your PowerDNS config:

----
api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081
----

PowerDNS properties are:

Url:: The REST API URL: `http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost`

ttl:: Default TTL for records
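As an illustration of what registration through this API involves, the sketch
below builds the JSON body that the PowerDNS HTTP API expects when PATCHing a
zone to upsert an A record (see the PowerDNS API docs); the hostname and IP
are placeholders, and the helper function is ours:

```python
import json

def a_record_rrset(fqdn: str, ip: str, ttl: int = 3600) -> dict:
    """Build the 'rrsets' payload for a PATCH to
    /api/v1/servers/localhost/zones/<zone>. Record names must be
    fully qualified (trailing dot) in the PowerDNS API."""
    name = fqdn if fqdn.endswith(".") else fqdn + "."
    return {"rrsets": [{
        "name": name,
        "type": "A",
        "ttl": ttl,
        "changetype": "REPLACE",
        "records": [{"content": ip, "disabled": False}],
    }]}

payload = a_record_rrset("vm1.mydomain.com", "10.0.1.100")
print(json.dumps(payload, indent=2))
```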
[[pvesdn_setup_example_vlan]]
TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.
Node1: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----
Node2: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----
Create a VLAN zone named `myvlanzone':
Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.
Apply the configuration through the main SDN panel, to create VNets locally on
each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----
Then, you should be able to ping between both VMs over that network.
[[pvesdn_setup_example_qinq]]
TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.
Node1: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----
Node2: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----
Create a QinQ zone named `qinqzone1' with service VLAN 20
Create another QinQ zone named `qinqzone2' with service VLAN 30
Create a VNet named `myvnet1' with customer VLAN-id 100 on the previously
created `qinqzone1' zone.
Create a `myvnet2' with customer VLAN-id 100 on the previously created
`qinqzone2' zone.
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----
Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----
Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----
Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, neither of the VMs 'vm1' or 'vm2' can ping
the VMs 'vm3' or 'vm4', as they are on a different zone with a different
service VLAN.
[[pvesdn_setup_example_vxlan]]
TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.
node1: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node2: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node3: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
Create a VXLAN zone named `myvxlanzone', using a lower MTU to ensure the extra
50 bytes of the VXLAN header can fit. Add all previously configured IPs from
the nodes to the peer address list.

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----
Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM (note the lower MTU):

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----
Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----
Then, you should be able to ping between 'vm1' and 'vm2'.
[[pvesdn_setup_example_evpn]]
node1: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node2: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node3: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
Create an EVPN controller, using a private ASN number and the above node
addresses as peers:

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
----
Create an EVPN zone named `myevpnzone' using the previously created
EVPN controller. Define 'node1' and 'node2' as exit nodes.

----
controller: myevpnctl
exitnodes: node1,node2
----
Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone'.

----
mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
----
Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway.
Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone',
a different IPv4 CIDR network, and a different random MAC address than
`myvnet1'.

----
mac address: 8C:73:B2:7B:F9:61 #random MAC; needs to be different on each VNet
----
Create a different subnet 10.0.2.0/24 with 10.0.2.1 as gateway.
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR configuration.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 #this is the IP of vnet1
----
Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 #this is the IP of vnet2
----
Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from 'vm2' on the non-gateway node 'node3', the
packet will go to the configured 'myvnet2' gateway, then will be routed to the
exit nodes ('node1' or 'node2'), and from there it will leave those nodes over
the default gateway configured on node1 or node2.
NOTE: You of course need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks to node1 and node2 on your external gateway, so that
the public network can reply back.

If you have configured an external BGP router, the BGP-EVPN routes
(10.0.1.0/24 and 10.0.2.0/24 in this example) will be announced dynamically.