Software Defined Network
========================

The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
virtual networks (VNets) at datacenter level.

WARNING: SDN is currently an **experimental feature** in {pve}. Its
documentation is also still under development; ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.

[[pvesdn_installation]]
Installation
------------

To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:

----
apt install libpve-network-perl
----

You also need the `ifupdown2` package installed on each node, to manage local
configuration reloading without a reboot:

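----
apt install ifupdown2
----
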
Basic Overview
--------------

The {pve} SDN allows separation and fine grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones; a zone is its own virtually separated
network area. A 'VNet' is a virtual network that belongs to a zone. Depending
on which type or plugin the zone uses, it can behave differently and offer
different features, advantages or disadvantages.
Normally a 'VNet' shows up as a common Linux bridge with either a VLAN or
'VXLAN' tag, but some can also use layer 3 routing for control.
The 'VNets' are deployed locally on each node, after the configuration has
been committed from the cluster-wide datacenter SDN administration interface.

Main configuration
~~~~~~~~~~~~~~~~~~

The configuration is done at datacenter (cluster-wide) level and is saved in
configuration files located in the shared configuration file system:
`/etc/pve/sdn`

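The different SDN object types are stored in their own files below that path;
with the current plugin set, these include:

----
/etc/pve/sdn/zones.cfg
/etc/pve/sdn/vnets.cfg
/etc/pve/sdn/controllers.cfg
----
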
On the web-interface, the SDN feature has four main sections for configuration:

* SDN: An overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: The per-node building block to provide a zone for VMs

* Controller: For complex setups to control layer 3 routing

[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of
zones on the different nodes.

There is an 'Apply' button to push and reload the local configuration on all
cluster nodes.

[[pvesdn_config_zone]]
Zones
~~~~~

A zone defines a virtually separated network.

It can use different technologies for the separation:

* VLAN: Virtual LANs are the classic method to sub-divide a LAN

* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: Layer 2 VXLAN tunnels

* bgp-evpn: VXLAN using layer 3 Border Gateway Protocol (BGP) routing

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict users to only
use a specific zone and only the VNets in that zone.

[[pvesdn_config_vnet]]
VNets
~~~~~

A `VNet` is in its basic form just a Linux bridge that will be deployed
locally on the node and used for virtual machine communication.

VNet properties are:

* ID: An 8 character ID to name and identify a VNet

* Alias: Optional longer name, if the ID isn't enough

* Zone: The associated zone for this VNet

* Tag: The unique VLAN or VXLAN id

* IPv4: An anycast IPv4 address; it will be configured on the underlying
bridge on each node that is part of the zone. It's only useful for
`bgp-evpn` routing.

* IPv6: An anycast IPv6 address; it will be configured on the underlying
bridge on each node that is part of the zone. It's only useful for
`bgp-evpn` routing.

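For example, a VNet using VLAN tag 10 could look like this, assuming a
previously created zone named `myzone` (names and values here are examples):

----
id: myvnet1
zone: myzone
tag: 10
----
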
[[pvesdn_config_controllers]]
Controllers
~~~~~~~~~~~

Some zone types need an external controller to manage the VNet control-plane.
Currently this is only required for the `bgp-evpn` zone plugin.

[[pvesdn_zone_plugins]]
Zones Plugins
-------------

Common options
~~~~~~~~~~~~~~

nodes:: Deploy and allow to use the VNets configured for this zone only on
these nodes.

[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This is the simplest plugin; it reuses an existing local Linux or OVS
bridge and manages VLANs on it.
The benefit of using the SDN module is that you can create different zones
with specific VNet VLAN tags, and restrict virtual machines to separated
zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local VLAN-aware bridge, or OVS interface, already
configured on *each* local node.

[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the
so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local VLAN-aware bridge already configured on each local node

service vlan:: The main VLAN tag of this zone

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, you would reduce the MTU to `1496` if your physical
interface MTU is `1500`.

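Putting these options together, a QinQ zone with service VLAN 20 on a bridge
whose physical interface has MTU `1500` could look like this (values are
examples):

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
mtu: 1496
----
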
[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin establishes a tunnel (called the overlay) on top of an
existing network (called the underlay). It encapsulates layer 2 Ethernet
frames within layer 4 UDP datagrams, using `4789` as the default destination
port. You can, for example, create a private IPv4 VXLAN network on top of
public internet network infrastructure.

This is a layer 2 tunnel only; no routing between different VNets is possible.

Each VNet will use a specific VXLAN id from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of IPs of all nodes through which you want to
communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.

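For example, a VXLAN zone spanning three nodes whose physical interfaces have
MTU `1500` could look like this (the peer IPs are examples):

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----
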
[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all the supported plugins.

BGP-EVPN allows one to create a routable layer 3 network. The VNet of an EVPN
zone can have an anycast IP address and/or MAC address. The bridge IP is the
same on each node, which means a virtual guest can use that address as its
gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN Tag:: This is a VXLAN id used for the routing interconnect between
VNets; it must be different from the VXLAN ids of the VNets themselves.

controller:: An EVPN controller needs to be defined first (see the controller
plugins section).

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.

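For example, an EVPN zone referencing a controller named `myevpnctl` (see the
next section) could look like this (values are examples):

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
----
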
[[pvesdn_controller_plugins]]
Controller Plugins
------------------

For complex zones requiring a control plane.

[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones:

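----
apt install frr
----
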
Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could end
up breaking, or being broken by, global routing by mistake.

peers:: An IP list of all nodes through which you want to communicate (can
also be external nodes or route reflector servers)

Additionally, if you want to route traffic from an SDN BGP-EVPN network to
the external world:

gateway-nodes:: The {pve} nodes from which the BGP-EVPN traffic will exit to
the external network, through the node's default gateway

gateway-external-peers:: If the gateway nodes shouldn't use their default
gateway, but instead, for example, send traffic to external BGP routers that
handle the (reverse) routing dynamically, list those peers here, for example
`192.168.0.253,192.168.0.254'

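A minimal controller definition for a three node cluster could then look like
this (the ASN and peer IPs are example values):

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
----
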
[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
---------------------------

After applying the configuration through the main SDN web-interface panel,
the local network configuration is generated on each node in
`/etc/network/interfaces.d/sdn` and reloaded with ifupdown2.

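That reload corresponds to ifupdown2's reload command, which you can also run
manually on a node to re-apply the generated configuration:

----
ifreload -a
----
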
You can monitor the status of local zones and VNets through the main tree.

[[pvesdn_setup_example_vlan]]
VLAN Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a VLAN zone named `myvlanzone':

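----
id: myvlanzone
bridge: vmbr0
----
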
Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.

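----
id: myvnet1
zone: myvlanzone
tag: 10
----
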
Apply the configuration through the main SDN panel, to create VNets locally
on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Then, you should be able to ping between both VMs over that network.

[[pvesdn_setup_example_qinq]]
QinQ Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a QinQ zone named `qinqzone1' with service VLAN 20:

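----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----
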
Create another QinQ zone named `qinqzone2' with service VLAN 30:

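----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----
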
Create a VNet named `myvnet1' with customer VLAN id 100 on the previously
created `qinqzone1' zone.

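----
id: myvnet1
zone: qinqzone1
tag: 100
----
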
Create a second VNet named `myvnet2' with customer VLAN id 100 on the
previously created `qinqzone2' zone.

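----
id: myvnet2
zone: qinqzone2
tag: 100
----
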
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----

Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', and likewise
between 'vm3' and 'vm4'. However, neither of the VMs 'vm1' or 'vm2' can ping
'vm3' or 'vm4', as they are in a different zone with a different service-vlan.

[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
~~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create a VXLAN zone named `myvxlanzone'. Use a lower MTU, to ensure the extra
50 bytes of the VXLAN header can fit. Add all previously configured IPs from
the nodes to the peer address list.

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

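----
id: myvnet1
zone: myvxlanzone
tag: 100000
----
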
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM; note the lower MTU here:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.

[[pvesdn_setup_example_evpn]]
EVPN Setup Example
~~~~~~~~~~~~~~~~~~

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create an EVPN controller, using a private ASN number and the above node
addresses as peers. Define 'node1' and 'node2' as gateway nodes.

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
gateway nodes: node1,node2
----

Create an EVPN zone named `myevpnzone' using the previously created
EVPN controller.

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
----

Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone', an
IPv4 CIDR network and a random MAC address.

----
id: myvnet1
zone: myevpnzone
tag: 100
ipv4: 10.0.1.1/24
mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
----

Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone',
a different IPv4 CIDR network and a different random MAC address than
`myvnet1'.

----
id: myvnet2
zone: myevpnzone
tag: 200
ipv4: 10.0.2.1/24
mac address: 8C:73:B2:7B:F9:61 #random MAC, must be different on each VNet
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR configuration.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 #this is the IP of myvnet1
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 #this is the IP of myvnet2
----

Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from 'vm2' on the non-gateway node 'node3', the
packet will go to the configured 'myvnet2' gateway, then be routed to the
gateway nodes ('node1' or 'node2'), and from there it will leave those nodes
over the default gateway configured on node1 or node2.

NOTE: Of course you need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks, pointing to node1 and node2, on your external gateway,
so that the public network can reply back.

If you have configured an external BGP router, the BGP-EVPN routes
(10.0.1.0/24 and 10.0.2.0/24 in this example) will be announced dynamically.