Software Defined Network
========================
The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
virtual networks (VNets) at the datacenter level.
WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.
[[pvesdn_installation]]
To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:

apt install libpve-network-perl
You also need the `ifupdown2` package installed on each node, to reload the
local network configuration without a reboot:
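A minimal installation sketch (assuming the standard Debian/{pve} package name
`ifupdown2`):

```shell
# replaces the classic ifupdown and allows reloading the network
# configuration at runtime with `ifreload`
apt install ifupdown2
```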
The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.
Separation is managed through zones; a zone is its own virtually separated
network area. A 'VNet' is a virtual network that belongs to a zone. Depending
on which type or plugin the zone uses, it can behave differently and offer
different features, advantages, or disadvantages.
Normally a 'VNet' shows up as a common Linux bridge with either a VLAN or
'VXLAN' tag, but some plugins can also use layer 3 routing for control.
The 'VNets' are deployed locally on each node, after the configuration was
committed from the cluster-wide datacenter SDN administration interface.
Configuration is done at the datacenter (cluster-wide) level and saved in
configuration files located in the shared configuration file system:
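As a sketch, assuming the usual pmxcfs mount point `/etc/pve` (the exact SDN
file names are an assumption; check your installation), the cluster-wide SDN
files can be inspected directly:

```shell
# the shared configuration file system is mounted at /etc/pve;
# the SDN subdirectory layout shown here is an illustrative assumption
ls /etc/pve/sdn
```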
On the web interface, the SDN feature has 4 main sections for configuration:

* SDN: An overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: The per-node building block to provide a zone's network to VMs

* Controllers: For complex setups, to control layer 3 routing
[[pvesdn_config_main_sdn]]
This is the main status panel. Here you can see the deployment status of zones
on the different nodes.
There is an 'Apply' button, to push and reload the local configuration on all
cluster nodes.
[[pvesdn_config_zone]]
A zone defines a virtually separated network.
It can use different technologies for separation:

* VLAN: Virtual LANs are the classic method of subdividing a LAN

* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: Layer 2 VXLAN

* bgp-evpn: VXLAN with layer 3 border gateway protocol (BGP) routing
You can restrict a zone to specific nodes.
It's also possible to add permissions on a zone, to restrict users to using
only a specific zone and only the VNets in that zone.
[[pvesdn_config_vnet]]
A `VNet` is, in its basic form, just a Linux bridge that will be deployed
locally on the node and used for virtual machine communication.
* ID: An 8-character ID to name and identify a VNet

* Alias: Optional longer name, if the ID isn't enough

* Zone: The associated zone for this VNet

* Tag: The unique VLAN or VXLAN ID

* VLAN Aware: Allows adding an extra VLAN tag in the virtual machine or
container vNIC configuration, or allows the guest OS to manage the VLAN tag.

* IPv4: An anycast IPv4 address. It will be configured on the underlying bridge
on each node that is part of the zone. It's only useful for `bgp-evpn` routing.

* IPv6: An anycast IPv6 address. It will be configured on the underlying bridge
on each node that is part of the zone. It's only useful for `bgp-evpn` routing.
[[pvesdn_config_controllers]]
Some zone types need an external controller to manage the VNet control plane.
Currently this is only required for the `bgp-evpn` zone plugin.
[[pvesdn_zone_plugins]]
nodes:: Deploy and allow using the VNets configured for this zone only on these
nodes.
[[pvesdn_zone_plugin_vlan]]
This is the simplest plugin. It will reuse an existing local Linux or OVS
bridge, and manage the VLANs on it.
The benefit of using the SDN module is that you can create different zones with
specific VNet VLAN tags, and restrict virtual machines to separated zones.
Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already
configured on *each* local node.
[[pvesdn_zone_plugin_qinq]]
QinQ is stacked VLAN. The first VLAN tag is defined for the zone
(the so-called 'service-vlan'), and the second VLAN tag is defined for the
VNets.
NOTE: Your physical network switches must support stacked VLANs!
Specific QinQ configuration options:

bridge:: A local, VLAN-aware bridge that is already configured on each local
node

service vlan:: The main VLAN tag of this zone

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs.
For example, you must reduce the MTU to `1496` if your physical interface MTU
is `1500`.
[[pvesdn_zone_plugin_vxlan]]
The VXLAN plugin establishes a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of a public internet
network.
This is a layer 2 tunnel only; no routing between different VNets is possible.

Each VNet will use a specific VXLAN ID from the range (1 - 16777215).
Specific VXLAN configuration options:
peers address list:: A list of IP addresses of all nodes through which you want
to communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
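The MTU arithmetic can be sketched as follows, assuming a standard 1500-byte
underlay interface:

```shell
# VXLAN adds 50 bytes of overhead (outer Ethernet + IPv4 + UDP + VXLAN
# headers), so the VNet MTU must sit 50 bytes below the physical MTU
PHYS_MTU=1500      # assumed underlay interface MTU
VXLAN_OVERHEAD=50
echo $((PHYS_MTU - VXLAN_OVERHEAD))   # prints 1450
```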
[[pvesdn_zone_plugin_evpn]]
This is the most complex of all the supported plugins.
BGP-EVPN allows you to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, meaning a virtual guest can use this address as its gateway.
Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.
Specific EVPN configuration options:

VRF VXLAN Tag:: This is a VXLAN ID used for the routing interconnect between
VNets; it must be different than the VXLAN IDs of the VNets themselves.

controller:: An EVPN controller needs to be defined first (see the controller
plugins section).

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
[[pvesdn_controller_plugins]]
For complex zones requiring a control plane.
[[pvesdn_controller_plugin_evpn]]
For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones.
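For example (the package names below are an assumption based on common FRR
packaging on Debian; check your repositories):

```shell
# install the FRRouting suite on each node that deploys EVPN zones
apt install frr frr-pythontools
```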
Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could
accidentally break global routing, or be broken by it.

peers:: An IP list of all nodes with which you want to communicate (can also be
external nodes or route reflector servers)
Additionally, if you want to route traffic from an SDN BGP-EVPN network to the
external world:
gateway-nodes:: The {pve} nodes from which the BGP-EVPN traffic will exit to
the external network, through the node's default gateway

gateway-external-peers:: Use this if you don't want the gateway nodes to use
their default gateway, but instead, for example, to send traffic to external
BGP routers, which then handle the (reverse) routing dynamically. For example:
`192.168.0.253,192.168.0.254'
[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
---------------------------
After applying the configuration through the main SDN web-interface panel,
the local network configuration is generated locally on each node in
`/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.
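If a node does not pick up a change, the generated configuration can be
re-applied manually with ifupdown2's reload command:

```shell
# re-applies /etc/network/interfaces and everything sourced from
# /etc/network/interfaces.d/, including the generated sdn file
ifreload -a
```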
source /etc/network/interfaces.d/*

at the end of `/etc/network/interfaces` to have the SDN configuration included.
You can monitor the status of local zones and VNets through the main tree.
[[pvesdn_setup_example_vlan]]
TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.
Node1: /etc/network/interfaces

iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
Node2: /etc/network/interfaces

iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
Create a VLAN zone named `myvlanzone':
Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.
Apply the configuration through the main SDN panel, to create VNets locally on
each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

iface eth0 inet static
        address 10.0.3.100/24
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

iface eth0 inet static
        address 10.0.3.101/24
Then, you should be able to ping between both VMs over that network.
[[pvesdn_setup_example_qinq]]
TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.
Node1: /etc/network/interfaces

iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
Node2: /etc/network/interfaces

iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
Create a QinQ zone named `qinqzone1' with service VLAN 20
Create another QinQ zone named `qinqzone2' with service VLAN 30
Create a VNet named `myvnet1' with customer VLAN-id 100 on the previously
created `qinqzone1' zone.
Create a `myvnet2' with customer VLAN-id 100 on the previously created
`qinqzone2' zone.
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

iface eth0 inet static
        address 10.0.3.100/24
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

iface eth0 inet static
        address 10.0.3.101/24
Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

iface eth0 inet static
        address 10.0.3.102/24
Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

iface eth0 inet static
        address 10.0.3.103/24
Then, you should be able to ping between the VMs 'vm1' and 'vm2', and also
between 'vm3' and 'vm4'. However, neither of the VMs 'vm1' or 'vm2' can ping
'vm3' or 'vm4', as they are in a different zone with a different service VLAN.
[[pvesdn_setup_example_vxlan]]
TIP: While we show plain configuration content here, almost everything should
be configurable using the web interface only.
node1: /etc/network/interfaces

iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
node2: /etc/network/interfaces

iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
node3: /etc/network/interfaces

iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
Create a VXLAN zone named `myvxlanzone', using a lower MTU to ensure the extra
50 bytes of the VXLAN header can fit. Add all previously configured IPs from
the nodes to the peer address list.

peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM (note the lower MTU):

iface eth0 inet static
        address 10.0.3.100/24
Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

iface eth0 inet static
        address 10.0.3.101/24
Then, you should be able to ping between 'vm1' and 'vm2'.
[[pvesdn_setup_example_evpn]]
node1: /etc/network/interfaces

iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
node2: /etc/network/interfaces

iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
node3: /etc/network/interfaces

iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
Create an EVPN controller, using a private ASN number and the node addresses
from above as peers. Define 'node1' and 'node2' as gateway nodes.

peers: 192.168.0.1,192.168.0.2,192.168.0.3
gateway nodes: node1,node2
Create an EVPN zone named `myevpnzone' using the previously created EVPN
controller.

controller: myevpnctl
Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone', an IPv4
CIDR network and a random MAC address.

mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone', a
different IPv4 CIDR network and a different random MAC address than `myvnet1'.

mac address: 8C:73:B2:7B:F9:61 #random MAC, needs to be different on each VNet
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and to generate the FRR configuration.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 #this is the IP of vnet1
Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 #this is the IP of vnet2
Then, you should be able to ping vm2 from vm1, and vm1 from vm2.
If you ping an external IP from 'vm2' on the non-gateway node 'node3', the
packet will go to the configured 'myvnet2' gateway, then will be routed to the
gateway nodes ('node1' or 'node2'), and from there it will leave those nodes
over the default gateway configured on node1 or node2.
NOTE: Of course you need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks to node1 and node2 on your external gateway, so that the
public network can reply back.
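On a Linux-based external gateway, such reverse routes could look like the
following sketch (addresses are taken from this example; spreading the two
subnets over both gateway nodes is an arbitrary choice):

```shell
# send traffic for the EVPN subnets back via the gateway nodes
ip route add 10.0.1.0/24 via 192.168.0.1   # node1
ip route add 10.0.2.0/24 via 192.168.0.2   # node2
```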
If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24
and 10.0.2.0/24 in this example) will be announced dynamically.