2 Software Defined Network
3 ========================
8 The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
9 virtual networks (vnets) at datacenter level.
WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.
16 [[pvesdn_installation]]
To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:
24 apt install libpve-network-perl
You also need the `ifupdown2` package installed on each node, to reload the
local network configuration without a reboot:
36 source /etc/network/interfaces.d/*
at the end of `/etc/network/interfaces` to have the SDN configuration included.
The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.
Separation is managed through zones; a zone is its own virtually separated network area.
A 'VNet' is a virtual network that belongs to a zone. Depending on the
type or plugin the zone uses, it can behave differently and offer different
features, advantages, or disadvantages.
51 Normally a 'VNet' shows up as a common Linux bridge with either a VLAN or
52 'VXLAN' tag, but some can also use layer 3 routing for control.
The 'VNets' are deployed locally on each node, after the configuration has been
committed from the cluster-wide datacenter SDN administration interface.
The configuration is done at the datacenter (cluster-wide) level and is saved
in configuration files located in the shared configuration file system:
`/etc/pve/sdn`
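These settings use a plain-text section-config format. As an illustrative
sketch only (the exact file set and keys depend on the plugins in use), a
cluster with one VLAN zone and one VNet could have entries like:

----
# /etc/pve/sdn/zones.cfg
vlan: myvlanzone
	bridge vmbr0

# /etc/pve/sdn/vnets.cfg
vnet: myvnet1
	zone myvlanzone
	tag 10
----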
On the web interface, the SDN feature has 3 main sections for configuration:
* SDN: An overview of the SDN state
* Zones: Create and manage the virtually separated network zones
* VNets: Create virtual network bridges and manage subnets
74 * Controller: For complex setups to control Layer 3 routing
* Subnets: Used to define IP networks on VNets.
* IPAM: Enables the use of external tools for IP address management (guest IPs)
* DNS: Define a DNS server API for registering virtual guests' hostnames and
IP addresses
83 [[pvesdn_config_main_sdn]]
This is the main status panel. Here you can see the deployment status of zones
on the different nodes.
There is an 'Apply' button to push and reload the local configuration on all
cluster nodes.
95 [[pvesdn_local_deployment_monitoring]]
96 Local Deployment Monitoring
97 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
After applying the configuration through the main SDN web-interface panel,
the local network configuration is generated on each node in
`/etc/network/interfaces.d/sdn` and reloaded with ifupdown2.
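The generated file contains plain ifupdown2 stanzas. A rough sketch of what a
VNet in a VLAN zone might look like (the exact output depends on the zone type
and version; the file should never be edited by hand):

----
auto myvnet1
iface myvnet1
	bridge_ports vmbr0.10
	bridge_stp off
	bridge_fd 0
----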
You can monitor the status of local zones and VNets through the main tree.
106 [[pvesdn_config_zone]]
110 A zone will define a virtually separated network.
112 It can use different technologies for separation:
* VLAN: Virtual LANs are the classic method of subdividing a LAN
* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)
* VXLAN: Layer 2 VXLAN
* Simple: Isolated bridge, a simple layer 3 routing bridge (NAT)
* bgp-evpn: VXLAN using Layer 3 Border Gateway Protocol (BGP) routing
124 You can restrict a zone to specific nodes.
It's also possible to add permissions on a zone, to restrict users to a
specific zone and only the VNets in that zone.
132 The following options are available for all zone types.
nodes:: The VNets configured for this zone are deployed and usable only on
these nodes.
ipam:: Optional. Use an IPAM tool to manage IPs in this zone.
dns:: Optional. DNS API server.
reversedns:: Optional. Reverse DNS API server.
dnszone:: Optional. DNS domain name. Used to register hostnames like
`<hostname>.<domain>`. The DNS zone must already exist on the DNS server.
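In `/etc/pve/sdn/zones.cfg`, these common options appear as keys of a zone
entry. A hedged sketch with hypothetical IDs (`myzone`, `mydns`,
`mydomain.com`):

----
simple: myzone
	nodes node1,node2
	ipam pve
	dns mydns
	reversedns mydns
	dnszone mydomain.com
----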
147 [[pvesdn_zone_plugin_simple]]
This is the simplest plugin. It will create an isolated VNet bridge.
This bridge is not linked to a physical interface, and VM traffic is only
local to the node(s).
It can also be used in NAT or routed setups.
156 [[pvesdn_zone_plugin_vlan]]
This plugin reuses an existing local Linux or OVS bridge,
and manages the VLANs on it.
The benefit of using the SDN module is that you can create different zones with
specific VNet VLAN tags, and restrict virtual machines to separated zones.
165 Specific `VLAN` configuration options:
167 bridge:: Reuse this local bridge or OVS switch, already
168 configured on *each* local node.
170 [[pvesdn_zone_plugin_qinq]]
QinQ is stacked VLAN. The first VLAN tag is defined for the zone
(the so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.
177 NOTE: Your physical network switches must support stacked VLANs!
179 Specific QinQ configuration options:
181 bridge:: A local VLAN-aware bridge already configured on each local node
183 service vlan:: The main VLAN tag of this zone
mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs.
For example, you must reduce the MTU to `1496` if your physical interface MTU is
`1500`.
189 [[pvesdn_zone_plugin_vxlan]]
193 The VXLAN plugin will establish a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of a public internet
network.
This is a layer 2 tunnel only; no routing between different VNets is possible.

Each VNet will use a specific VXLAN ID from the range (1 - 16777215).
Specific VXLAN configuration options:
204 peers address list:: A list of IPs from all nodes through which you want to
205 communicate. Can also be external nodes.
mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
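For example, with a physical interface MTU of 1500, the zone MTU should be set
to 1450. A sketch of the resulting zone entry in `/etc/pve/sdn/zones.cfg` (IDs
and addresses are illustrative):

----
vxlan: myvxlanzone
	peers 192.168.0.1,192.168.0.2,192.168.0.3
	mtu 1450
----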
210 [[pvesdn_zone_plugin_evpn]]
214 This is the most complex of all supported plugins.
BGP-EVPN allows one to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on each
node, meaning a virtual guest can use that address as its gateway.
220 Routing can work across VNets from different zones through a VRF (Virtual
221 Routing and Forwarding) interface.
223 Specific EVPN configuration options:
VRF VXLAN Tag:: This is a VXLAN ID used for the routing interconnect between
VNets; it must be different from the VXLAN IDs of the VNets.
controller:: An EVPN controller needs to be defined first (see controller
plugins section).
Exit Nodes:: This is used if you want to define some {pve} nodes as exit
gateways from the EVPN network to the real network. These nodes
will announce a default route in the EVPN network.
mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
240 [[pvesdn_config_vnet]]
244 A `VNet` is in its basic form just a Linux bridge that will be deployed locally
245 on the node and used for Virtual Machine communication.
ID:: An 8 character ID to name and identify a VNet
251 Alias:: Optional longer name, if the ID isn't enough
253 Zone:: The associated zone for this VNet
255 Tag:: The unique VLAN or VXLAN id
VLAN Aware:: Allows adding an extra VLAN tag in the virtual machine or
container's vNIC configuration, or allows the guest OS to manage the VLAN tag.
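In `/etc/pve/sdn/vnets.cfg`, these properties map to keys of a VNet entry. An
illustrative sketch with hypothetical values:

----
vnet: myvnet1
	zone myvlanzone
	tag 10
	alias devnet
	vlanaware 1
----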
260 [[pvesdn_config_subnet]]
A sub-network (subnet) allows you to define a specific IP network
(IPv4 or IPv6). For each VNet, you can define one or more subnets.
268 A subnet can be used to:
* Restrict the IP addresses you can define on a specific VNet
* Assign routes/gateways on a VNet in layer 3 zones
* Enable SNAT on a VNet in layer 3 zones
* Auto assign IPs on virtual guests (VM or CT) through an IPAM plugin
* DNS registration through DNS plugins
If an IPAM server is associated with the subnet's zone, the subnet prefix will
automatically be registered in the IPAM.
280 Subnet properties are:
ID:: A CIDR network address, for example 10.0.0.0/8
Gateway:: The IP address of the network's default gateway.
On layer 3 zones (simple/EVPN plugins), it will be deployed on the VNet.
Snat:: Optional. Enable SNAT for layer 3 zones (simple/EVPN plugins) for this
subnet. The subnet's source IP will be NATted to the server's outgoing
interface/IP. On EVPN zones, this is done only on the EVPN gateway nodes.
Dnszoneprefix:: Optional. Add a prefix to the domain registration, like `<hostname>.prefix.<domain>`
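A sketch of a matching subnet entry in `/etc/pve/sdn/subnets.cfg` (the ID
format and all values shown are illustrative):

----
subnet: myzone-10.0.3.0-24
	vnet myvnet1
	gateway 10.0.3.1
	snat 1
	dnszoneprefix dev
----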
294 [[pvesdn_config_controllers]]
298 Some zone types need an external controller to manage the VNet control-plane.
299 Currently this is only required for the `bgp-evpn` zone plugin.
301 [[pvesdn_controller_plugin_evpn]]
For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the `frr` router.
You need to install it on each node where you want to deploy EVPN zones:
310 apt install frr frr-pythontools
313 Configuration options:
asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could
break, or be broken by, global routing by mistake.
peers:: An IP list of all nodes with which you want to communicate for the
EVPN (these could also be external nodes or route reflector servers)
323 [[pvesdn_controller_plugin_BGP]]
The BGP controller is not used directly by a zone.
You can use it to configure FRR to manage BGP peers.
For BGP-EVPN, it can be used to define a different ASN per node, thus doing EBGP.
332 Configuration options:
asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number from the range (64512 - 65534) or (4200000000 - 4294967294), as
otherwise you could break, or be broken by, global routing by mistake.
peers:: An IP list of peers you want to communicate with for the underlying
BGP network.

ebgp:: If your peer's remote AS is different, this enables EBGP.
343 node:: The node of this BGP controller
loopback:: Use a loopback or dummy interface as the source for the EVPN
network (for multipath).
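Both controller types end up as entries in `/etc/pve/sdn/controllers.cfg`. A
hedged sketch with hypothetical IDs and addresses:

----
evpn: myevpnctl
	asn 65000
	peers 192.168.0.1,192.168.0.2,192.168.0.3

bgp: mybgpctl
	asn 65000
	node node1
	peers 192.168.0.253
	ebgp 1
----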
349 [[pvesdn_config_ipam]]
IPAM (IP Address Management) tools are used to manage/assign the IP addresses of devices on the network.
It can be used to find free IP addresses when you create a VM/CT, for example (not yet implemented).

An IPAM can be associated with one or more zones, to provide IP addresses for all subnets defined in those zones.
358 [[pvesdn_ipam_plugin_pveipam]]
This is the default internal IPAM for your {pve} cluster, if you don't have
external IPAM software.
365 [[pvesdn_ipam_plugin_phpipam]]
You need to create an application in phpIPAM, and add an API token with admin
privileges.
373 phpIPAM properties are:
375 url:: The REST-API endpoint: `http://phpipam.domain.com/api/<appname>/`
376 token:: An API access token
section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
installations use `sectionid=1` for customers.
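The resulting entry in `/etc/pve/sdn/ipams.cfg` could then look like this (a
sketch; the application name and token are placeholders):

----
phpipam: myphpipam
	url http://phpipam.domain.com/api/<appname>/
	token <your-api-token>
	section 1
----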
380 [[pvesdn_ipam_plugin_netbox]]
384 NetBox is an IP address management (IPAM) and data center infrastructure
385 management (DCIM) tool, see the source code repository for details:
386 https://github.com/netbox-community/netbox
You need to create an API token in NetBox first:
https://netbox.readthedocs.io/en/stable/api/authentication
391 NetBox properties are:
393 url:: The REST API endpoint: `http://yournetbox.domain.com/api`
394 token:: An API access token
396 [[pvesdn_config_dns]]
The DNS plugin in {pve} SDN is used to define a DNS API server for the
registration of your hostnames and IP addresses. A DNS configuration is
associated with one or more zones, to provide DNS registration for all the
subnet IPs configured for a zone.
405 [[pvesdn_dns_plugin_powerdns]]
408 https://doc.powerdns.com/authoritative/http-api/index.html
410 You need to enable the webserver and the API in your PowerDNS config:
414 api-key=arandomgeneratedstring
PowerDNS properties are:
421 url:: The REST API endpoint: http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
422 key:: An API access key
423 ttl:: The default TTL for records
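A sketch of a matching entry in `/etc/pve/sdn/dns.cfg`, with placeholder
values:

----
powerdns: mypowerdns
	url http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
	key <your-api-key>
	ttl 3600
----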
429 [[pvesdn_setup_example_vlan]]
433 TIP: While we show plain configuration content here, almost everything should
434 be configurable using the web-interface only.
436 Node1: /etc/network/interfaces
440 iface vmbr0 inet manual
444 bridge-vlan-aware yes
447 #management ip on vlan100
449 iface vmbr0.100 inet static
450 address 192.168.0.1/24
452 source /etc/network/interfaces.d/*
455 Node2: /etc/network/interfaces
459 iface vmbr0 inet manual
463 bridge-vlan-aware yes
466 #management ip on vlan100
468 iface vmbr0.100 inet static
469 address 192.168.0.2/24
471 source /etc/network/interfaces.d/*
474 Create a VLAN zone named `myvlanzone':
Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.
490 Apply the configuration through the main SDN panel, to create VNets locally on
493 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
495 Use the following network configuration for this VM:
499 iface eth0 inet static
500 address 10.0.3.100/24
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.
506 Use the following network configuration for this VM:
510 iface eth0 inet static
511 address 10.0.3.101/24
514 Then, you should be able to ping between both VMs over that network.
517 [[pvesdn_setup_example_qinq]]
521 TIP: While we show plain configuration content here, almost everything should
522 be configurable using the web-interface only.
524 Node1: /etc/network/interfaces
528 iface vmbr0 inet manual
532 bridge-vlan-aware yes
535 #management ip on vlan100
537 iface vmbr0.100 inet static
538 address 192.168.0.1/24
540 source /etc/network/interfaces.d/*
543 Node2: /etc/network/interfaces
547 iface vmbr0 inet manual
551 bridge-vlan-aware yes
554 #management ip on vlan100
556 iface vmbr0.100 inet static
557 address 192.168.0.2/24
559 source /etc/network/interfaces.d/*
Create a QinQ zone named `qinqzone1' with service VLAN 20
570 Create another QinQ zone named `qinqzone2' with service VLAN 30
578 Create a VNet named `myvnet1' with customer vlan-id 100 on the previously
579 created `qinqzone1' zone.
Create a second VNet named `myvnet2' with customer VLAN-id 100 on the
previously created `qinqzone2' zone.
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
599 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
601 Use the following network configuration for this VM:
605 iface eth0 inet static
606 address 10.0.3.100/24
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.
612 Use the following network configuration for this VM:
616 iface eth0 inet static
617 address 10.0.3.101/24
Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.
623 Use the following network configuration for this VM:
627 iface eth0 inet static
628 address 10.0.3.102/24
Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.
634 Use the following network configuration for this VM:
638 iface eth0 inet static
639 address 10.0.3.103/24
Then, you should be able to ping between VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, none of the VMs 'vm1' or 'vm2' can ping 'vm3'
or 'vm4', as they are in a different zone with a different service-VLAN.
647 [[pvesdn_setup_example_vxlan]]
651 TIP: While we show plain configuration content here, almost everything should
652 be configurable using the web-interface only.
654 node1: /etc/network/interfaces
658 iface vmbr0 inet static
659 address 192.168.0.1/24
660 gateway 192.168.0.254
666 source /etc/network/interfaces.d/*
669 node2: /etc/network/interfaces
673 iface vmbr0 inet static
674 address 192.168.0.2/24
675 gateway 192.168.0.254
681 source /etc/network/interfaces.d/*
684 node3: /etc/network/interfaces
688 iface vmbr0 inet static
689 address 192.168.0.3/24
690 gateway 192.168.0.254
696 source /etc/network/interfaces.d/*
Create a VXLAN zone named `myvxlanzone'. Use a lower MTU, to ensure the extra
50 bytes of the VXLAN header can fit. Add all previously configured IPs from
the nodes to the peer address list.
705 peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
Create a VNet named `myvnet1' using the previously created VXLAN zone
`myvxlanzone'.
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
721 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
Use the following network configuration for this VM (note the lower MTU).
727 iface eth0 inet static
728 address 10.0.3.100/24
Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.
735 Use the following network configuration for this VM:
739 iface eth0 inet static
740 address 10.0.3.101/24
Then, you should be able to ping between 'vm1' and 'vm2'.
747 [[pvesdn_setup_example_evpn]]
751 node1: /etc/network/interfaces
755 iface vmbr0 inet static
756 address 192.168.0.1/24
757 gateway 192.168.0.254
763 source /etc/network/interfaces.d/*
766 node2: /etc/network/interfaces
770 iface vmbr0 inet static
771 address 192.168.0.2/24
772 gateway 192.168.0.254
778 source /etc/network/interfaces.d/*
781 node3: /etc/network/interfaces
785 iface vmbr0 inet static
786 address 192.168.0.3/24
787 gateway 192.168.0.254
793 source /etc/network/interfaces.d/*
Create an EVPN controller, using a private ASN number and the above node
addresses
802 peers: 192.168.0.1,192.168.0.2,192.168.0.3
Create an EVPN zone named `myevpnzone', using the previously created
EVPN controller. Define 'node1' and 'node2' as exit nodes.
812 controller: myevpnctl
814 exitnodes: node1,node2
817 Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone'.
mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
825 Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway
Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone', a
different IPv4 CIDR network, and a different random MAC address than `myvnet1'.
mac address: 8C:73:B2:7B:F9:61 #random MAC; it needs to be different on each VNet
841 Create a different subnet 10.0.2.0/24 with 10.0.2.1 as gateway
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node, and generate the FRR configuration.
852 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
854 Use the following network configuration for this VM:
858 iface eth0 inet static
859 address 10.0.1.100/24
860 gateway 10.0.1.1 #this is the ip of the vnet1
Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.
867 Use the following network configuration for this VM:
871 iface eth0 inet static
872 address 10.0.2.100/24
873 gateway 10.0.2.1 #this is the ip of the vnet2
878 Then, you should be able to ping vm2 from vm1, and vm1 from vm2.
If you ping an external IP from 'vm2' on the non-gateway node 'node3', the
packets will travel to the configured 'myvnet2' gateway, then be routed to the
exit nodes ('node1' or 'node2'), and from there leave those nodes over the
default gateway configured on node1 or node2.
NOTE: You need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks to node1 and node2 on your external gateway, so that the
public network can reply back.
If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24
and 10.0.2.0/24 in this example) will be announced dynamically.