Software Defined Network
========================

The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
virtual networks (VNets) at the datacenter level.

WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.

[[pvesdn_installation]]
Installation
------------

To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:

----
apt install libpve-network-perl
----

You also need the `ifupdown2` package installed on each node, to manage the
local configuration reloading without a reboot:

----
apt install ifupdown2
----

Basic Overview
--------------

The {pve} SDN allows separation and fine-grained control of virtual guest
networks, using flexible software-controlled configurations.

Separation is managed through zones. A zone is its own virtually separated
area. A zone can be used by one or more 'VNets'. A 'VNet' is a virtual network
in a zone. Normally it shows up as a common Linux bridge with either a VLAN or
'VXLAN' tag, or using layer 3 routing for control. The 'VNets' are deployed
locally on each node, after the configuration was committed at the
cluster-wide datacenter level.

The configuration is done at the datacenter (cluster-wide) level. It is saved
in configuration files located in the shared configuration file system:
`/etc/pve/sdn`

On the web interface, the SDN feature has four main sections for its
configuration:

* SDN: an overview of the SDN state

* Zones: create and manage the virtually separated network zones

* VNets: the per-node building block that provides a zone's network to VMs

* Controllers: manage the external controllers required by some zone types

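The committed configuration ends up in a handful of files below the shared
configuration directory. Assuming the layout of current {pve} versions, these
are:

----
/etc/pve/sdn/zones.cfg        # zone definitions
/etc/pve/sdn/vnets.cfg        # VNet definitions
/etc/pve/sdn/controllers.cfg  # controller definitions
----
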
[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of zones
on the different nodes.

The 'Apply' button is used to push and reload the local configuration on all
cluster nodes.

[[pvesdn_config_zone]]
Zones
~~~~~

A zone defines a virtually separated network.

It can use different technologies for separation:

* VLAN: Virtual LANs are the classic method to sub-divide a LAN

* QinQ: stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: (layer 2 VXLAN)

* bgp-evpn: VXLAN using layer 3 border gateway protocol routing

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict a user to using
only a specific zone and only the VNets in that zone.

[[pvesdn_config_vnet]]
VNets
~~~~~

A `VNet` is in its basic form just a Linux bridge that will be deployed locally
on the node and used for virtual machine communication.

VNet properties are:

* ID: an 8 character ID to name and identify a VNet

* Alias: optional longer name, if the ID isn't enough

* Zone: the associated zone for this VNet

* Tag: the unique VLAN or VXLAN id

* IPv4: an anycast IPv4 address. It will be configured on the underlying bridge
on each node that is part of the zone. It's only useful for `bgp-evpn` routing.

* IPv6: an anycast IPv6 address. It will be configured on the underlying bridge
on each node that is part of the zone. It's only useful for `bgp-evpn` routing.

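As a sketch, a VNet committed with the properties above could be stored in the
datacenter-level configuration roughly like this (the exact key names here are
assumptions, not authoritative):

----
vnet: myvnet1
        zone myzone
        tag 10
        alias my-network
----
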
[[pvesdn_config_controllers]]
Controllers
~~~~~~~~~~~

Some zone types need an external controller to manage the VNet control plane.
Currently this is only required for the `bgp-evpn` zone plugin.

[[pvesdn_zone_plugins]]
Zones Plugins
-------------

nodes:: Deploy and allow to use the VNets configured for this zone only on
these nodes.

[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This is the simplest plugin. It will reuse an existing local Linux or OVS
bridge, and manage the VLANs on it.
The benefit of using the SDN module is that you can create different zones with
specific VNet VLAN tags, and restrict virtual machines to separated zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local VLAN-aware bridge, or OVS interface, already
configured on *each* local node.

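A hypothetical VLAN zone using the `vmbr0` bridge, restricted to two nodes,
might be committed like this (key names are assumptions based on the options
above):

----
vlan: myvlanzone
        bridge vmbr0
        nodes node1,node2
----
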
[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is stacked VLAN. The first VLAN tag is defined for the zone
(the so-called 'service-vlan'), and the second VLAN tag is defined for the
VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local VLAN-aware bridge already configured on each local node

service vlan:: The main VLAN tag of this zone

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, you must reduce the MTU to `1496` if your physical
interface MTU is `1500`.

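Putting these options together, a sketch of a QinQ zone for a physical MTU of
`1500` could look like this (the key names are assumptions):

----
qinq: qinqzone1
        bridge vmbr0
        service-vlan 20
        mtu 1496
----
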
[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin will establish a tunnel (named the overlay) on top of an
existing network (named the underlay). It encapsulates layer 2 Ethernet frames
within layer 4 UDP datagrams, using `4789` as the default destination port.
You can, for example, create a private IPv4 VXLAN network on top of a public
internet network.

This is a layer 2 tunnel only, so no routing between different VNets is
possible.

Each VNet will use a specific VXLAN id from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of IP addresses of all nodes through which you want
to communicate. These can also be external nodes.

mtu:: Because the VXLAN encapsulation uses 50 bytes, the MTU needs to be 50
bytes lower than that of the outgoing physical interface.

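For example, with a physical interface MTU of `1500`, a hypothetical VXLAN
zone spanning three nodes could be committed like this (key names are
assumptions):

----
vxlan: myvxlanzone
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        mtu 1450
----
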
[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all the supported plugins.

BGP-EVPN allows one to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, which means a virtual guest can use that address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN Tag:: This is a VXLAN id used for the routing interconnect between
VNets; it must be different from the VXLAN ids of the VNets.

controller:: An EVPN controller needs to be defined first (see the controller
plugins section).

mtu:: Because the VXLAN encapsulation uses 50 bytes, the MTU needs to be 50
bytes lower than that of the outgoing physical interface.

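A sketch of an EVPN zone combining these options (key names are assumptions;
`myevpnctl` stands for a controller as described in the next section):

----
evpn: myevpnzone
        vrf-vxlan 10000
        controller myevpnctl
        mtu 1450
----
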
[[pvesdn_controller_plugins]]
Controllers Plugins
-------------------

[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones.

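Assuming the router is packaged under the name `frr` in the configured
repositories, the installation follows the same pattern as for the SDN package
itself:

----
apt install frr
----
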
Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could end up
breaking global routing by mistake, or being broken by it.

peers:: A list of IP addresses of all nodes through which you want to
communicate (these could also be external nodes or route reflector servers).

Additionally, if you want to route traffic from an SDN BGP-EVPN network to an
external network, the following options come into play:

gateway-nodes:: The {pve} nodes from which the bgp-evpn traffic will exit to
the external network, through each node's default gateway.

gateway-external-peers:: Use this if you don't want the gateway nodes to use
their default gateway, but, for example, to send traffic to external BGP
routers, which then handle the (reverse) routing dynamically. For example:
`192.168.0.253,192.168.0.254`

[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
---------------------------

After applying the configuration through the main SDN web-interface panel, the
local network configuration is generated locally on each node in
`/etc/network/interfaces.d/sdn`, and reloaded via ifupdown2.

You can monitor the status of local zones and VNets through the main tree.

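To inspect what was generated on a node, you can look at the file mentioned
above, and trigger a reload by hand if needed (`ifreload` is part of the
ifupdown2 package):

----
cat /etc/network/interfaces.d/sdn
ifreload -a
----
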
[[pvesdn_setup_example_vlan]]
VLAN Setup Example
------------------

TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.

Node1: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a VLAN zone named `myvlanzone':

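The zone only needs an ID and the local bridge to use. With the node
configuration above, the fields could be filled like this (a sketch):

----
ID: myvlanzone
bridge: vmbr0
----
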
Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.

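A sketch of the corresponding VNet fields:

----
ID: myvnet1
zone: myvlanzone
tag: 10
----
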
Apply the configuration through the main SDN panel, to create VNets locally on
each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.101/24
----

Then, you should be able to ping between both VMs over that network.

[[pvesdn_setup_example_qinq]]
QinQ Setup Example
------------------

TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.

Node1: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a QinQ zone named `qinqzone1' with service VLAN 20:

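A sketch of the fields for this zone (the `vmbr0` bridge is taken from the
node configuration above):

----
ID: qinqzone1
bridge: vmbr0
service vlan: 20
----
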
Create another QinQ zone named `qinqzone2' with service VLAN 30:

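Again as a sketch:

----
ID: qinqzone2
bridge: vmbr0
service vlan: 30
----
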
Create a VNet named `myvnet1' with customer VLAN-id 100 on the previously
created `qinqzone1' zone.

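A sketch of the corresponding VNet fields:

----
ID: myvnet1
zone: qinqzone1
tag: 100
----
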
Create a second VNet named `myvnet2' with customer VLAN-id 100 on the
previously created `qinqzone2' zone.

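As a sketch:

----
ID: myvnet2
zone: qinqzone2
tag: 100
----
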
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.101/24
----

Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.102/24
----

Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, none of the VMs 'vm1' or 'vm2' can ping the
VMs 'vm3' or 'vm4', as they are in a different zone with a different
service VLAN.

[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
-------------------

TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.

node1: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----

Create a VXLAN zone named `myvxlanzone'. Use the lower MTU to ensure the extra
50 bytes of the VXLAN header can fit. Add all previously configured IPs from
the nodes to the peer address list.

----
ID: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

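A sketch of the VNet fields; the tag is an arbitrary free VXLAN id from the
allowed range, chosen here purely for illustration:

----
ID: myvnet1
zone: myvxlanzone
tag: 100000
----
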
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM; note the lower MTU here.

----
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.

[[pvesdn_setup_example_evpn]]
EVPN Setup Example
------------------

node1: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----

Create an EVPN controller, using a private ASN number and the above node
addresses as peers. Define 'node1' and 'node2' as gateway nodes.

----
ID: myevpnctl
peers: 192.168.0.1,192.168.0.2,192.168.0.3
gateway nodes: node1,node2
----

Create an EVPN zone named `myevpnzone', using the previously created EVPN
controller.

----
ID: myevpnzone
controller: myevpnctl
----

Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone', an
IPv4 CIDR network and a random MAC address.

----
ID: myvnet1
zone: myevpnzone
ipv4: 10.0.1.1/24
mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
----

Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone',
a different IPv4 CIDR network and a different random MAC address than
`myvnet1'.

----
ID: myvnet2
zone: myevpnzone
ipv4: 10.0.2.1/24
mac address: 8C:73:B2:7B:F9:61 #random MAC, must be different on each VNet
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR configuration.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 #this is the IP of myvnet1
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 #this is the IP of myvnet2
----

Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from 'vm2' on the non-gateway 'node3', the packet
will go to the configured 'myvnet2' gateway, then be routed to the gateway
nodes ('node1' or 'node2'), and from there it will leave those nodes over the
default gateway configured on node1 or node2.

NOTE: You, of course, need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks to node1 and node2 on your external gateway, so that
the public network can reply back.

If you have configured an external BGP router, the BGP-EVPN routes
(10.0.1.0/24 and 10.0.2.0/24 in this example) will be announced dynamically.