Software Defined Network
========================
The **S**oftware **D**efined **N**etwork (SDN) feature allows you to create
virtual networks (VNets) at the datacenter level.
WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.
[[pvesdn_installation]]
Installation
------------

To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package on every node:

----
apt install libpve-network-perl
----
You need to have the `ifupdown2` package installed on each node, to be able to
reload the local network configuration without a reboot.

You also need the following line:

----
source /etc/network/interfaces.d/*
----

at the end of `/etc/network/interfaces`, to have the SDN configuration included.
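Adding the include line can be scripted idempotently. The following is a
minimal sketch that operates on a scratch copy; on a real node you would point
`IFACES` at `/etc/network/interfaces`:

```shell
# Append the SDN include line only if it is not already present.
# Demonstrated on a scratch file; use /etc/network/interfaces on a real node.
IFACES=/tmp/interfaces.demo
printf 'auto lo\niface lo inet loopback\n' > "$IFACES"

LINE='source /etc/network/interfaces.d/*'
grep -qxF "$LINE" "$IFACES" || echo "$LINE" >> "$IFACES"
# A second run is a no-op, so the line is never duplicated
grep -qxF "$LINE" "$IFACES" || echo "$LINE" >> "$IFACES"

grep -nF "$LINE" "$IFACES"   # -> 3:source /etc/network/interfaces.d/*
```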
Basic Overview
--------------

The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.
Separation is managed through zones; a zone is its own virtually separated
network area. A 'VNet' is a virtual network that belongs to a zone. Depending
on the type or plugin the zone uses, it can behave differently and offer
different features, advantages, or disadvantages. Normally, a 'VNet' appears
as a common Linux bridge with either a VLAN or 'VXLAN' tag, but some types can
also use layer 3 routing for control. 'VNets' are deployed locally on each
node, after the configuration has been committed from the cluster-wide
datacenter SDN administration interface.
The configuration is done at the datacenter (cluster-wide) level and is saved
in configuration files located in the shared configuration file system:
`/etc/pve/sdn`
On the web interface, the SDN feature has the following main sections for
configuration:

* SDN: An overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: Create virtual network bridges and manage subnets

In addition to this, the following options are offered:

* Controller: For controlling layer 3 routing in complex setups

* Subnets: Used to define IP networks on VNets

* IPAM: Enables the use of external tools for IP address management (guest
IPs)

* DNS: Define a DNS server API for registering virtual guests' hostname and
IP addresses
[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of zones
on the different nodes.

The 'Apply' button is used to push and reload the local configuration on all
cluster nodes.
[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~
After applying the configuration through the main SDN panel, the local network
configuration is generated locally on each node, in the file
`/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.

You can monitor the status of local zones and VNets through the main tree.
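As a hypothetical illustration of what gets generated: for a VNet with VLAN
tag 10 in a VLAN zone on bridge `vmbr0`, the file could contain a stanza along
these lines (the VNet name and tag here are made-up example values):

----
auto myvnet1
iface myvnet1
        bridge-ports vmbr0.10
        bridge-stp off
        bridge-fd 0
----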
[[pvesdn_config_zone]]
Zones
-----

A zone defines a virtually separated network.

It can use different technologies for separation:
* VLAN: Virtual LANs are the classic method of subdividing a LAN

* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: Layer 2 VXLAN

* Simple: Isolated Bridge. A simple layer 3 routing bridge (NAT)

* bgp-evpn: VXLAN using layer 3 border gateway protocol (BGP) routing
You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, in order to restrict users to
using only a specific zone and only the VNets in that zone.
The following options are available for all zone types:

nodes:: Deploy and allow to use a VNet configured for this zone only on these
nodes.

ipam:: Optional. Use an IPAM tool to manage the IPs in this zone.

dns:: Optional. DNS API server.

reversedns:: Optional. Reverse DNS API server.

dnszone:: Optional. DNS domain name. Used to register hostnames, such as
`<hostname>.<domain>`. The DNS zone must already exist on the DNS server.
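Zone definitions are stored in `/etc/pve/sdn/zones.cfg`. As an illustrative
sketch (the zone ID and all values below are made up for this example), an
entry with some of the common options set could look like:

----
simple: zone1
        ipam pve
        dnszone mydomain.com
        nodes node1,node2
----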
[[pvesdn_zone_plugin_simple]]
Simple Zones
~~~~~~~~~~~~

This is the simplest plugin. It will create an isolated VNet bridge. This
bridge is not linked to a physical interface, and VM traffic is only local to
the node(s). It can also be used in NAT or routed setups.
[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This plugin reuses an existing local Linux or OVS bridge, and manages the
VLANs on it. The benefit of using the SDN module is that you can create
different zones with specific VNet VLAN tags, and restrict virtual machines to
separated zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already configured on *each*
local node.
[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the
so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local, VLAN-aware bridge that is already configured on each local
node

service vlan:: The main VLAN tag of this zone

service vlan protocol:: Allows you to define a 802.1q (default) or 802.1ad
service VLAN type.

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, you must reduce the MTU to `1496` if your physical
interface MTU is `1500`.
[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin establishes a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of a public internet
network.

This is a layer 2 tunnel only, so no routing between different VNets is
possible.

Each VNet will use a specific VXLAN ID from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of IP addresses from all nodes through which you
want to communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all the supported plugins.

BGP-EVPN allows you to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, meaning a virtual guest can use this address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN tag:: This is a VXLAN-ID used for routing interconnect between
VNets. It must be different than the VXLAN-ID of the VNets.

controller:: An EVPN controller needs to be defined first (see controller
plugins section).

VNet MAC address:: A unique, anycast MAC address for all VNets in this zone.
Will be auto-generated if not defined.

Exit Nodes:: This is used if you want to define some {pve} nodes as exit
gateways from the EVPN network, through the real network. These nodes will
announce a default route in the EVPN network.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
[[pvesdn_config_vnet]]
VNets
-----

A `VNet`, in its basic form, is just a Linux bridge that will be deployed
locally on the node and used for virtual machine communication.

The VNet configuration properties are:

ID:: An 8-character ID to name and identify a VNet

Alias:: Optional longer name, if the ID isn't enough

Zone:: The associated zone for this VNet

Tag:: The unique VLAN or VXLAN ID

VLAN Aware:: Enables adding an extra VLAN tag in the virtual machine or
container's vNIC configuration, or allows the guest OS to manage the VLAN's
tag.
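VNet definitions are stored in `/etc/pve/sdn/vnets.cfg`. A hypothetical entry
using the properties above might look like the following sketch (the IDs, zone
name, and tag are made up):

----
vnet: myvnet1
        zone myvlanzone
        alias production
        tag 10
----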
[[pvesdn_config_subnet]]
Subnets
-------

A sub-network (subnet) allows you to define a specific IP network (IPv4 or
IPv6). For each VNet, you can define one or more subnets.

A subnet can be used to:

* Restrict the IP addresses you can define on a specific VNet
* Assign routes/gateways on a VNet in layer 3 zones
* Enable SNAT on a VNet in layer 3 zones
* Auto assign IPs on virtual guests (VM or CT) through IPAM plugins
* DNS registration through DNS plugins

If an IPAM server is associated with the subnet's zone, the subnet prefix will
automatically be registered in the IPAM.

Subnet properties are:

ID:: A CIDR network address, for example 10.0.0.0/8

Gateway:: The IP address of the network's default gateway. On layer 3 zones
(Simple/EVPN plugins), it will be deployed on the VNet.

SNAT:: Optional. Enable SNAT for layer 3 zones (Simple/EVPN plugins) for this
subnet. The subnet's source IP will be NATted to the server's outgoing
interface/IP. On EVPN zones, this is only done on EVPN gateway-nodes.

Dnszoneprefix:: Optional. Add a prefix to the domain registration, like
<hostname>.prefix.<domain>
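Subnet definitions are stored in `/etc/pve/sdn/subnets.cfg`. A hypothetical
entry could look like the following sketch (the ID format, zone name, and
values are illustrative assumptions, not a literal reference):

----
subnet: myzone-10.0.0.0-24
        vnet myvnet1
        gateway 10.0.0.1
        snat 1
----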
[[pvesdn_config_controllers]]
Controllers
-----------

Some zone types need an external controller to manage the VNet control plane.
Currently this is only required for the `bgp-evpn` zone plugin.

[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane. The
currently supported software controller is the "frr" router. You may need to
install it on each node where you want to deploy EVPN zones:

----
apt install frr frr-pythontools
----
Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could break,
or get broken by, global routing by mistake.

peers:: An IP list of all nodes where you want to communicate for the EVPN
(could also be external nodes or route reflector servers)
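Controller definitions are stored in `/etc/pve/sdn/controllers.cfg`. A sketch
of an EVPN controller entry with these options might look like the following
(the controller ID, ASN, and peer IPs are example values; the ASN is taken
from the private range):

----
evpn: myevpnctl
        asn 65000
        peers 192.168.0.1,192.168.0.2,192.168.0.3
----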
[[pvesdn_controller_plugin_BGP]]
BGP Controller
~~~~~~~~~~~~~~

The BGP controller is not used directly by a zone. You can use it to configure
FRR to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node, so doing EBGP.

Configuration options:

node:: The node of this BGP controller

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number from the range (64512 - 65534) or (4200000000 - 4294967294), as
otherwise you could break, or get broken by, global routing by mistake.

peers:: A list of peer IP addresses you want to communicate with using the
underlying BGP network.

ebgp:: If your peer's remote-AS is different, this enables EBGP.

loopback:: Use a loopback or dummy interface as the source of the EVPN network
(for multipath).

ebgp-multihop:: Increase the number of hops to reach peers, in case they are
not directly connected or they use loopback.
[[pvesdn_config_ipam]]
IPAM
----

IPAM (IP Address Management) tools are used to manage/assign the IP addresses
of guests on the network. They can, for example, be used to find free IP
addresses when you create a VM/CT (not yet implemented).

An IPAM can be associated with one or more zones, to provide IP addresses for
all subnets defined in those zones.

[[pvesdn_ipam_plugin_pveipam]]
{pve} IPAM Plugin
~~~~~~~~~~~~~~~~~

This is the default internal IPAM for your {pve} cluster, if you don't have
external IPAM software.
[[pvesdn_ipam_plugin_phpipam]]
phpIPAM Plugin
~~~~~~~~~~~~~~

In phpIPAM you need to create an 'application' and add an API token with admin
privileges.

The phpIPAM configuration properties are:

url:: The REST-API endpoint: `http://phpipam.domain.com/api/<appname>/`

token:: An API access token

section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
installations use `sectionid=1` for customers.
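IPAM definitions are stored in `/etc/pve/sdn/ipams.cfg`. A hypothetical
phpIPAM entry might look like the following sketch (the ID, URL, token, and
section are placeholders):

----
phpipam: myphpipam
        url http://phpipam.domain.com/api/myapp/
        token <your-api-token>
        section 1
----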
[[pvesdn_ipam_plugin_netbox]]
NetBox Plugin
~~~~~~~~~~~~~

NetBox is an IP address management (IPAM) and datacenter infrastructure
management (DCIM) tool. See the source code repository for details:
https://github.com/netbox-community/netbox

You need to create an API token in NetBox to use it:
https://netbox.readthedocs.io/en/stable/api/authentication

The NetBox configuration properties are:

url:: The REST API endpoint: `http://yournetbox.domain.com/api`

token:: An API access token
[[pvesdn_config_dns]]
DNS
---

The DNS plugin in {pve} SDN is used to define a DNS API server for the
registration of your hostname and IP address. A DNS configuration is
associated with one or more zones, to provide DNS registration for all the
subnet IPs configured for a zone.

[[pvesdn_dns_plugin_powerdns]]
PowerDNS Plugin
~~~~~~~~~~~~~~~
https://doc.powerdns.com/authoritative/http-api/index.html

You need to enable the web server and the API in your PowerDNS config:

----
api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081
----

The PowerDNS configuration properties are:

url:: The REST API endpoint: `http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost`

key:: An API access key

ttl:: The default TTL for records
Examples
--------

[[pvesdn_setup_example_vlan]]
VLAN Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.
Node1: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----
Create a VLAN zone named `myvlanzone'.

Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.

Apply the configuration through the main SDN panel, to create VNets locally on
each node.
Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on
`myvnet1'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second virtual machine ('vm2') on node2, with a vNIC on the same VNet
`myvnet1'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.101/24
----

Following this, you should be able to ping between both VMs over that network.
[[pvesdn_setup_example_qinq]]
QinQ Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.
Node1: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----
Create a QinQ zone named `qinqzone1', with service VLAN 20.

Create another QinQ zone named `qinqzone2', with service VLAN 30.

Create a VNet named `myvnet1' with customer VLAN-ID 100 on the previously
created `qinqzone1' zone.

Create a `myvnet2' with customer VLAN-ID 100 on the previously created
`qinqzone2' zone.

Apply the configuration on the main SDN web interface panel to create VNets
locally on each node.
Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on
`myvnet1'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second virtual machine ('vm2') on node2, with a vNIC on the same VNet
`myvnet1'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.101/24
----

Create a third virtual machine ('vm3') on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.102/24
----

Create another virtual machine ('vm4') on node2, with a vNIC on the same VNet
`myvnet2'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, none of the VMs 'vm1' or 'vm2' can ping
'vm3' or 'vm4', as they are on a different zone with a different service-vlan.
[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
~~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.
node1: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
Create a VXLAN zone named `myvxlanzone'. Use a lower MTU, to ensure that the
additional 50 bytes of the VXLAN header can fit. Add all previously configured
IPs from the nodes to the peer address list:

----
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

Apply the configuration on the main SDN web interface panel to create VNets
locally on each node.
Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on
`myvnet1'.

Use the following network configuration for this VM (note the lower MTU):

----
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second virtual machine ('vm2') on node3, with a vNIC on the same VNet
`myvnet1'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.
[[pvesdn_setup_example_evpn]]
EVPN Setup Example
~~~~~~~~~~~~~~~~~~

node1: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
Create an EVPN controller, using a private ASN number and the above node
addresses as peers:

----
id: myevpnctl
peers: 192.168.0.1,192.168.0.2,192.168.0.3
----

Create an EVPN zone named `myevpnzone', using the previously created EVPN
controller. Define 'node1' and 'node2' as exit nodes.

----
controller: myevpnctl
vnet mac address: 32:F4:05:FE:6C:0A
exitnodes: node1,node2
----
Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone'.

Create a subnet 10.0.1.0/24, with 10.0.1.1 as gateway, on `myvnet1'.

Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone',
with a different IPv4 CIDR network.

Create a different subnet 10.0.2.0/24, with 10.0.2.1 as gateway, on `myvnet2'.

Apply the configuration from the main SDN web interface panel to create VNets
locally on each node and generate the FRR configuration.
Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on
`myvnet1'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 #this is the IP of myvnet1
----

Create a second virtual machine ('vm2') on node2, with a vNIC on the other
VNet `myvnet2'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 #this is the IP of myvnet2
----

Then, you should be able to ping 'vm2' from 'vm1', and 'vm1' from 'vm2'.
If you ping an external IP from 'vm2' on the non-gateway node3, the packet
will go to the configured 'myvnet2' gateway, then will be routed to the exit
nodes ('node1' or 'node2'), and from there it will leave those nodes over the
default gateway configured on node1 or node2.

NOTE: You have to add reverse routes for the '10.0.1.0/24' and '10.0.2.0/24'
networks to node1 and node2 on your external gateway, so that the public
network can reply back.
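For example, if the external gateway is itself a Linux router, such reverse
routes could be added roughly as follows (a sketch, assuming node1 at
192.168.0.1 is used as the next hop, as in this example):

----
ip route add 10.0.1.0/24 via 192.168.0.1
ip route add 10.0.2.0/24 via 192.168.0.1
----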
If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24
and 10.0.2.0/24 in this example) will be announced dynamically.