2 Software Defined Network
3 ========================
8 The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
9 virtual networks (vnets) at datacenter level.
11 WARNING: SDN is currently an **experimental feature** in {pve}. The
12 documentation for it is also still under development. Ask on our
13 xref:getting_help[mailing lists or in the forum] for questions and feedback.
16 [[pvesdn_installation]]
20 To enable the experimental SDN integration, you need to install the
21 `libpve-network-perl` package:
24 apt install libpve-network-perl
27 You need to have the `ifupdown2` package installed on each node, to be able to
28 reload the local configuration without a reboot:
37 The {pve} SDN allows separation and fine-grained control of virtual guest
38 networks, using flexible, software-controlled configurations.
40 Separation is managed through zones; a zone is its own virtually separated
41 network area. A 'VNet' is a virtual network that belongs to a zone. Depending
42 on which type or plugin the zone uses, it can behave differently and offer
43 different features, advantages or disadvantages.
44 Normally a 'VNet' appears as a common Linux bridge with either a VLAN or
45 'VXLAN' tag, but some plugins can also use layer 3 routing for control.
46 The 'VNets' are deployed locally on each node, after the configuration has been
47 committed from the cluster-wide datacenter SDN administration interface.
53 The configuration is done at the datacenter (cluster-wide) level and will be
54 saved in configuration files located in the shared configuration file system:
57 On the web interface, the SDN feature has 4 main sections for configuration:
59 * SDN: an overview of the current SDN state
61 * Zones: Create and manage the virtual separated network Zones
63 * VNets: The per-node building block to provide a Zone for VMs
65 * Controller: For complex setups to control Layer 3 routing
68 [[pvesdn_config_main_sdn]]
72 This is the main status panel. Here you can see the deployment status of zones on
75 There is an 'Apply' button to push and reload the local configuration on all
79 [[pvesdn_config_zone]]
83 A zone defines a virtually separated network.
85 It can use different technologies for separation:
87 * VLAN: Virtual LANs are the classic method to sub-divide a LAN
89 * QinQ: stacked VLAN (formally known as `IEEE 802.1ad`)
91 * VXLAN: (layer2 vxlan)
93 * bgp-evpn: VXLAN using layer 3 Border Gateway Protocol (BGP) routing
95 You can restrict a zone to specific nodes.
97 It's also possible to add permissions on a zone, to restrict a user to use only
98 a specific zone and only the VNets in that zone.
100 [[pvesdn_config_vnet]]
104 A `VNet` is, in its basic form, just a Linux bridge that will be deployed
105 locally on the node and used for Virtual Machine communication.
109 * ID: an 8 character ID to name and identify a VNet
111 * Alias: Optional longer name, if the ID isn't enough
113 * Zone: The associated zone for this VNet
115 * Tag: The unique VLAN or VXLAN id
117 * IPv4: an anycast IPv4 address; it will be configured on the underlying bridge
118 on each node that is part of the Zone. Only useful for `bgp-evpn` routing.
120 * IPv6: an anycast IPv6 address; it will be configured on the underlying bridge
121 on each node that is part of the Zone. Only useful for `bgp-evpn` routing.
124 [[pvesdn_config_controllers]]
128 Some zone types need an external controller to manage the VNet control-plane.
129 Currently this is only required for the `bgp-evpn` zone plugin.
132 [[pvesdn_zone_plugins]]
139 nodes:: Deploy and allow using the VNets configured for this Zone only on these
142 [[pvesdn_zone_plugin_vlan]]
146 This is the simplest plugin. It reuses an existing local Linux or OVS
147 bridge and manages the VLANs on it.
148 The benefit of using the SDN module is that you can create different zones with
149 specific VNet VLAN tags, and restrict Virtual Machines to separate zones.
151 Specific `VLAN` configuration options:
153 bridge:: Reuse this local bridge or OVS switch, already
154 configured on *each* local node.
156 [[pvesdn_zone_plugin_qinq]]
160 QinQ means stacked VLANs. The first VLAN tag is defined for the zone
161 (the so-called 'service-vlan'), and the second VLAN tag is defined for the vnets
163 NOTE: Your physical network switches must support stacked VLANs!
165 Specific QinQ configuration options:
167 bridge:: A local VLAN-aware bridge already configured on each local node
169 service vlan:: The main VLAN tag of this zone
171 mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs.
172 For example, reduce the MTU to `1496` if your physical interface MTU is
175 [[pvesdn_zone_plugin_vxlan]]
179 The VXLAN plugin establishes a tunnel (called the overlay) on top of an existing
180 network (called the underlay). It encapsulates layer 2 Ethernet frames within
181 layer 4 UDP datagrams, using `4789` as the default destination port. You can, for
182 example, create a private IPv4 VXLAN network on top of public internet network
184 This is a layer 2 tunnel only; no routing between different VNets is possible.
186 Each VNet will use a specific VXLAN id from the range (1 - 16777215).
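The upper bound of that range follows from the VXLAN Network Identifier (VNI) being a 24-bit field in the VXLAN header; a quick sanity check in the shell:

```shell
# The VNI field in the VXLAN header is 24 bits wide, so the highest
# usable id is 2^24 - 1:
echo $(( (1 << 24) - 1 ))   # prints 16777215
```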
188 Specific VXLAN configuration options:
190 peers address list:: A list of IP addresses of all nodes through which you want
191 to communicate. Can also be external nodes.
193 mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
194 lower than that of the outgoing physical interface.
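As a rough sketch of where those 50 bytes come from (outer IPv4 header, UDP header, VXLAN header, and the inner Ethernet header), and what that means for a physical interface with an MTU of 1500:

```shell
# VXLAN overhead: outer IPv4 (20) + UDP (8) + VXLAN header (8)
# + inner Ethernet header (14) = 50 bytes
PHYS_MTU=1500
VXLAN_OVERHEAD=$((20 + 8 + 8 + 14))
echo $((PHYS_MTU - VXLAN_OVERHEAD))   # prints 1450
```

This is why the VXLAN example below configures an MTU of 1450 inside the guests.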
196 [[pvesdn_zone_plugin_evpn]]
200 This is the most complex of all supported plugins.
202 BGP-EVPN allows one to create a routable layer 3 network. The VNet of EVPN can
203 have an anycast IP address and/or MAC address. The bridge IP is the same on each
204 node, so a virtual guest can use that address as its gateway.
206 Routing can work across VNets from different zones through a VRF (Virtual
207 Routing and Forwarding) interface.
209 Specific EVPN configuration options:
211 VRF VXLAN Tag:: This is a VXLAN id used for the routing interconnect between
212 VNets; it must be different from the VXLAN ids of the VNets themselves.
214 controller:: An EVPN controller needs to be defined first (see controller
217 mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
218 lower than that of the outgoing physical interface.
221 [[pvesdn_controller_plugins]]
225 For complex zones requiring a control plane.
227 [[pvesdn_controller_plugin_evpn]]
231 For `BGP-EVPN`, we need a controller to manage the control plane.
232 The currently supported software controller is the "frr" router.
233 You may need to install it on each node where you want to deploy EVPN zones.
239 Configuration options:
241 asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
242 number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could end up
243 breaking global routing, or being broken by it, by mistake.
245 peers:: An IP list of all nodes with which you want to communicate (can also be
246 external nodes or route reflector servers)
248 Additionally, if you want to route traffic from an SDN BGP-EVPN network to
251 gateway-nodes:: The Proxmox nodes from which the bgp-evpn traffic will exit to
252 the external network, through the node's default gateway
254 gateway-external-peers:: Use this if you don't want the gateway nodes to use
255 their default gateway, but instead, for example, to send traffic to external BGP
256 routers, which then handle the (reverse) routing dynamically. For example
257 `192.168.0.253,192.168.0.254'
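The private ASN ranges recommended above are the private-use ranges from RFC 6996. They can be checked with a small helper function; this is purely illustrative and not part of {pve}:

```shell
# Returns success (0) if the given ASN is inside the private-use
# ranges from RFC 6996 (64512-65534 and 4200000000-4294967294).
is_private_asn() {
    asn=$1
    { [ "$asn" -ge 64512 ] && [ "$asn" -le 65534 ]; } ||
        { [ "$asn" -ge 4200000000 ] && [ "$asn" -le 4294967294 ]; }
}

is_private_asn 65000 && echo "65000 is private"   # prints "65000 is private"
is_private_asn 3320 || echo "3320 is public"      # prints "3320 is public"
```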
260 [[pvesdn_local_deployment_monitoring]]
261 Local Deployment Monitoring
262 ---------------------------
264 After applying the configuration through the main SDN web-interface panel,
265 the local network configuration is generated locally on each node in
266 `/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.
270 source /etc/network/interfaces.d/*
272 at the end of `/etc/network/interfaces` to have the SDN config included
274 You can monitor the status of local zones and vnets through the main tree.
277 [[pvesdn_setup_example_vlan]]
281 TIP: While we show plain configuration content here, almost everything should
282 be configurable using the web-interface only.
284 Node1: /etc/network/interfaces
288 iface vmbr0 inet manual
292 bridge-vlan-aware yes
295 #management ip on vlan100
297 iface vmbr0.100 inet static
298 address 192.168.0.1/24
300 source /etc/network/interfaces.d/*
303 Node2: /etc/network/interfaces
307 iface vmbr0 inet manual
311 bridge-vlan-aware yes
314 #management ip on vlan100
316 iface vmbr0.100 inet static
317 address 192.168.0.2/24
319 source /etc/network/interfaces.d/*
322 Create a VLAN zone named `myvlanzone':
329 Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
330 `myvlanzone' as its zone.
338 Apply the configuration through the main SDN panel to create VNets locally on
341 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
343 Use the following network configuration for this VM:
347 iface eth0 inet static
348 address 10.0.3.100/24
351 Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
354 Use the following network configuration for this VM:
358 iface eth0 inet static
359 address 10.0.3.101/24
362 Then, you should be able to ping between both VMs over that network.
365 [[pvesdn_setup_example_qinq]]
369 TIP: While we show plain configuration content here, almost everything should
370 be configurable using the web-interface only.
372 Node1: /etc/network/interfaces
376 iface vmbr0 inet manual
380 bridge-vlan-aware yes
383 #management ip on vlan100
385 iface vmbr0.100 inet static
386 address 192.168.0.1/24
388 source /etc/network/interfaces.d/*
391 Node2: /etc/network/interfaces
395 iface vmbr0 inet manual
399 bridge-vlan-aware yes
402 #management ip on vlan100
404 iface vmbr0.100 inet static
405 address 192.168.0.2/24
407 source /etc/network/interfaces.d/*
410 Create a QinQ zone named `qinqzone1' with service VLAN 20
418 Create another QinQ zone named `qinqzone2' with service VLAN 30
426 Create a VNet named `myvnet1' with customer vlan-id 100 on the previously
427 created `qinqzone1' zone.
435 Create a second VNet named `myvnet2' with customer VLAN-id 100 on the previously created
444 Apply the configuration on the main SDN web-interface panel to create VNets
445 locally on each node.
447 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
449 Use the following network configuration for this VM:
453 iface eth0 inet static
454 address 10.0.3.100/24
457 Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
460 Use the following network configuration for this VM:
464 iface eth0 inet static
465 address 10.0.3.101/24
468 Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
471 Use the following network configuration for this VM:
475 iface eth0 inet static
476 address 10.0.3.102/24
479 Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
482 Use the following network configuration for this VM:
486 iface eth0 inet static
487 address 10.0.3.103/24
490 Then, you should be able to ping between the VMs 'vm1' and 'vm2', and also
491 between 'vm3' and 'vm4'. However, none of the VMs 'vm1' or 'vm2' can ping 'vm3'
492 or 'vm4', as they are in a different zone with a different service-vlan.
495 [[pvesdn_setup_example_vxlan]]
499 TIP: While we show plain configuration content here, almost everything should
500 be configurable using the web-interface only.
502 node1: /etc/network/interfaces
506 iface vmbr0 inet static
507 address 192.168.0.1/24
508 gateway 192.168.0.254
514 source /etc/network/interfaces.d/*
517 node2: /etc/network/interfaces
521 iface vmbr0 inet static
522 address 192.168.0.2/24
523 gateway 192.168.0.254
529 source /etc/network/interfaces.d/*
532 node3: /etc/network/interfaces
536 iface vmbr0 inet static
537 address 192.168.0.3/24
538 gateway 192.168.0.254
544 source /etc/network/interfaces.d/*
547 Create a VXLAN zone named `myvxlanzone'. Use a lower MTU, to ensure the extra
548 50 bytes of the VXLAN header can fit. Add all previously configured IPs of
549 the nodes as the peer address list.
553 peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
557 Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
566 Apply the configuration on the main SDN web-interface panel to create VNets
567 locally on each node.
569 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
571 Use the following network configuration for this VM; note the lower MTU here.
575 iface eth0 inet static
576 address 10.0.3.100/24
580 Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
583 Use the following network configuration for this VM:
587 iface eth0 inet static
588 address 10.0.3.101/24
592 Then, you should be able to ping between 'vm1' and 'vm2'.
595 [[pvesdn_setup_example_evpn]]
599 node1: /etc/network/interfaces
603 iface vmbr0 inet static
604 address 192.168.0.1/24
605 gateway 192.168.0.254
611 source /etc/network/interfaces.d/*
614 node2: /etc/network/interfaces
618 iface vmbr0 inet static
619 address 192.168.0.2/24
620 gateway 192.168.0.254
626 source /etc/network/interfaces.d/*
629 node3: /etc/network/interfaces
633 iface vmbr0 inet static
634 address 192.168.0.3/24
635 gateway 192.168.0.254
641 source /etc/network/interfaces.d/*
644 Create an EVPN controller, using a private ASN number and the node addresses
645 from above as peers. Define 'node1' and 'node2' as gateway nodes.
650 peers: 192.168.0.1,192.168.0.2,192.168.0.3
651 gateway nodes: node1,node2
654 Create an EVPN zone named `myevpnzone' using the previously created
660 controller: myevpnctl
664 Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone', an IPv4
665 CIDR network and a random MAC address.
672 mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
675 Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone', a
676 different IPv4 CIDR network and a different random MAC address than `myvnet1'.
683 mac address: 8C:73:B2:7B:F9:61 #random MAC, needs to be different on each VNet
686 Apply the configuration on the main SDN web-interface panel to create VNets
687 locally on each node and generate the FRR configuration.
690 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
692 Use the following network configuration for this VM:
696 iface eth0 inet static
697 address 10.0.1.100/24
698 gateway 10.0.1.1 #this is the ip of the vnet1
702 Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
705 Use the following network configuration for this VM:
709 iface eth0 inet static
710 address 10.0.2.100/24
711 gateway 10.0.2.1 #this is the ip of the vnet2
716 Then, you should be able to ping vm2 from vm1, and vm1 from vm2.
718 If you ping an external IP from 'vm2' on the non-gateway node 'node3', the
719 packet will go to the configured 'myvnet2' gateway, then be routed to the
720 gateway nodes ('node1' or 'node2'), and from there it will leave those nodes via
721 the default gateway configured on node1 or node2.
723 NOTE: Of course, you need to add reverse routes for the '10.0.1.0/24' and
724 '10.0.2.0/24' networks to node1 and node2 on your external gateway, so that the
725 public network can reply back.
727 If you have configured an external BGP router, the BGP-EVPN routes ('10.0.1.0/24'
728 and '10.0.2.0/24' in this example) will be announced dynamically.