Software Defined Network
========================
The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
virtual networks (vnets) at datacenter level.
WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.
To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:

----
apt install libpve-network-perl
----
You need to have the `ifupdown2` package installed on each node, to manage the
local configuration reloading without a reboot:
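----
apt install ifupdown2
----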
The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones; a zone is its own virtually separated
network area. A zone can be used by one or more 'VNets'. A 'VNet' is a virtual
network in a zone. Normally it shows up as a common Linux bridge with either a
VLAN or 'VXLAN' tag, or using layer 3 routing for control. The 'VNets' are
deployed locally on each node, after the configuration was committed from the
cluster-wide datacenter level.
The configuration is done at the datacenter (cluster-wide) level and is saved
in configuration files located in the shared configuration file system at
`/etc/pve/sdn`.
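After creating zones, VNets and controllers there, you can expect to find files
like the following (a sketch; the exact set depends on what was configured):

----
/etc/pve/sdn/zones.cfg
/etc/pve/sdn/vnets.cfg
/etc/pve/sdn/controllers.cfg
----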
On the web interface, the SDN feature has 4 main sections for the
configuration:

* SDN: an overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: The per-node building block to provide a zone's network to VMs

* Controllers: Manage the external controllers required by some zone types
  (currently only `bgp-evpn`)
This is the main status panel. Here you can see the deployment status of zones
on the different nodes.

There is an 'Apply' button, to push and reload the local configuration on all
cluster nodes.
A zone defines a virtually separated network.

It can use different technologies for separation:

* VLAN: Virtual LANs are the classic method to sub-divide a LAN

* QinQ: stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: layer 2 VXLAN tunnels

* bgp-evpn: VXLAN with layer 3 border gateway protocol (BGP) routing
You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict a user to using
only a specific zone and only the VNets in that zone.
A `VNet` is in its basic form just a Linux bridge that will be deployed locally
on the node and used for Virtual Machine communication.
* ID: an 8 character ID to name and identify a VNet

* Alias: Optional longer name, if the ID isn't enough

* Zone: The associated zone for this VNet

* Tag: The unique VLAN or VXLAN id

* IPv4: an anycast IPv4 address; it will be configured on the underlying bridge
  on each node that is part of the zone. It's only useful for `bgp-evpn`
  routing.

* IPv6: an anycast IPv6 address; it will be configured on the underlying bridge
  on each node that is part of the zone. It's only useful for `bgp-evpn`
  routing.
Some zone types (currently only the `bgp-evpn` plugin) need an external
controller to manage the VNet control-plane.
Common options for all zone types:

nodes:: Deploy and allow to use the VNets configured for this zone only on
these nodes.
The VLAN plugin is the simplest one; it will reuse an existing local Linux or
OVS bridge, and manage the VLANs on it. The benefit of using the SDN module is
that you can create different zones with specific VNet VLAN tags, and restrict
virtual machines to separated zones.
Specific `VLAN` configuration options:

bridge:: Reuse this local VLAN-aware bridge, or OVS interface, already
configured on *each* local node.
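As an illustration, a VLAN zone entry in `zones.cfg` could look roughly like
this (a sketch only; the zone name, bridge and node names are placeholders,
and the exact key names may differ from what the GUI writes):

----
vlan: myvlanzone
        bridge vmbr0
        nodes node1,node2
----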
QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the
so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.
NOTE: Your physical network switches must support stacked VLANs!
Specific QinQ configuration options:

bridge:: A local VLAN-aware bridge already configured on each local node
service vlan:: The main VLAN tag of this zone
mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs.
For example, you must reduce the MTU to `1496` if your physical interface MTU
is `1500`.
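A QinQ zone entry in `zones.cfg` could then look roughly like this (a sketch;
the zone name and bridge are placeholders, and the key names are assumptions
based on the options above):

----
qinq: qinqzone1
        bridge vmbr0
        service-vlan 20
        mtu 1496
----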
The VXLAN plugin will establish a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of a public internet
network.

This is a layer 2 tunnel only; no routing between different VNets is possible.

Each VNet will use a specific VXLAN id from the range (1 - 16777215).
Specific VXLAN configuration options:

peers address list:: A list of IP addresses of all nodes between which you want
to communicate (can also be external nodes)
mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
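For instance, with a physical interface MTU of `1500`, the VNet MTU would be
`1500 - 50 = 1450`. A VXLAN zone entry in `zones.cfg` could look roughly like
this (a sketch; the key names follow the options above and may differ from
what the GUI writes):

----
vxlan: myvxlanzone
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        mtu 1450
----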
The BGP-EVPN plugin is the most complex of all the supported plugins.

BGP-EVPN allows one to create a routable layer 3 network. An EVPN VNet can have
an anycast IP address and/or MAC address. The bridge IP is the same on each
node, which means a virtual guest can use that address as its gateway.
Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.
Specific EVPN configuration options:

VRF VXLAN Tag:: This is a VXLAN id used for the routing interconnect between
VNets; it must be different from the VXLAN ids of the VNets themselves.

controller:: An EVPN controller needs to be defined first (see the controller
plugins section).
mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
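An EVPN zone entry in `zones.cfg` could look roughly like this (a sketch; the
tag value is a placeholder, the `myevpnctl` controller is defined in the next
section, and the key names are assumptions based on the options above):

----
evpn: myevpnzone
        vrf-vxlan 10000
        controller myevpnctl
        mtu 1450
----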
For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones:
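----
apt install frr
----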
Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end up
breaking, or being broken by, global routing by mistake.

peers:: A list of IP addresses of all nodes with which you want to communicate
(could also be external nodes or route reflector servers)
Additionally, if you want to route traffic from an SDN BGP-EVPN network to the
external world, you can define:

gateway-nodes:: The {pve} nodes from which the BGP-EVPN traffic will exit to
the external network, through the node's default gateway
If you don't want the gateway nodes to use their default gateway, but, for
example, send traffic to external BGP routers instead, set:

gateway-external-peers:: 192.168.0.253,192.168.0.254
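Putting this together, an EVPN controller entry in `controllers.cfg` could look
roughly like this (a sketch; the ASN and addresses are the example values from
above, and the key names may differ from what the GUI writes):

----
evpn: myevpnctl
        asn 65000
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        gateway-nodes node1,node2
        gateway-external-peers 192.168.0.253,192.168.0.254
----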
Local Deployment Monitoring
---------------------------
After applying the configuration through the main SDN web-interface panel, the
local network configuration is generated locally on each node in
`/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.
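You can inspect what was generated for the local node by looking at that file
directly:

----
cat /etc/network/interfaces.d/sdn
----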
You can monitor the status of local zones and vnets through the main tree.
VLAN Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.
Node1: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----
Node2: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----
Create a VLAN zone named `myvlanzone':
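The dialog values could look like this (using the `vmbr0` bridge from the node
configuration above):

----
id: myvlanzone
bridge: vmbr0
----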
Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone:
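----
id: myvnet1
zone: myvlanzone
tag: 10
----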
Apply the configuration through the main SDN panel, to create VNets locally on
each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:
----
iface eth0 inet static
        address 10.0.3.100/24
----
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1'.

Use the following network configuration for this VM:
----
iface eth0 inet static
        address 10.0.3.101/24
----
Then, you should be able to ping between both VMs over that network.
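For example, from 'vm1' (using the addresses configured above):

----
ping 10.0.3.101
----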
QinQ Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.
Node1: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----
Node2: /etc/network/interfaces

----
iface vmbr0 inet manual
        bridge-vlan-aware yes

#management ip on vlan100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----
Create a QinQ zone named `qinqzone1' with service VLAN 20:
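The dialog values could look like this (using the `vmbr0` bridge from the node
configuration above):

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----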
Create another QinQ zone named `qinqzone2' with service VLAN 30:
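----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----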
Create a VNet named `myvnet1' with customer VLAN-id 100 on the previously
created `qinqzone1' zone:
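----
id: myvnet1
zone: qinqzone1
tag: 100
----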
Create a second VNet named `myvnet2' with customer VLAN-id 100 on the
previously created `qinqzone2' zone:
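----
id: myvnet2
zone: qinqzone2
tag: 100
----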
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:
----
iface eth0 inet static
        address 10.0.3.100/24
----
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1'.

Use the following network configuration for this VM:
----
iface eth0 inet static
        address 10.0.3.101/24
----
Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:
----
iface eth0 inet static
        address 10.0.3.102/24
----
Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2'.

Use the following network configuration for this VM:
----
iface eth0 inet static
        address 10.0.3.103/24
----
Then, you should be able to ping between the VMs 'vm1' and 'vm2', and also
between 'vm3' and 'vm4'. However, neither of the VMs 'vm1' or 'vm2' can ping
the VMs 'vm3' or 'vm4', as they are in a different zone with a different
service-vlan.
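For example, from 'vm1' (using the addresses configured above; the second ping
is expected to fail):

----
ping 10.0.3.101   # vm2, same zone: should work
ping 10.0.3.102   # vm3, different zone: should fail
----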
VXLAN Setup Example
~~~~~~~~~~~~~~~~~~~

node1: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node2: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node3: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
Create a VXLAN zone named `myvxlanzone' and use a lower MTU, to ensure the
extra 50 bytes of the VXLAN header can fit. Add all previously configured IPs
from the nodes to the peer address list:

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----
Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously:
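For example (the `tag` here is an arbitrary id from the allowed VXLAN range):

----
id: myvnet1
zone: myvxlanzone
tag: 100000
----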
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM; note the lower MTU here:
----
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----
Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1'.

Use the following network configuration for this VM:
----
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----
Then, you should be able to ping between 'vm1' and 'vm2'.
EVPN Setup Example
~~~~~~~~~~~~~~~~~~

node1: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node2: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
node3: /etc/network/interfaces

----
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254

source /etc/network/interfaces.d/*
----
Create an EVPN controller, using a private ASN number and the node addresses
from above as peers. Define 'node1' and 'node2' as gateway nodes:

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
gateway nodes: node1,node2
----
Create an EVPN zone named `myevpnzone' using the previously created EVPN
controller:

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
----
Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone', an
IPv4 CIDR network and a random MAC address:

----
id: myvnet1
zone: myevpnzone
tag: 100
ipv4: 10.0.1.1/24
mac address: 8C:73:B2:7B:F9:60 # randomly generated MAC address
----
Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone', a
different IPv4 CIDR network and a different random MAC address than `myvnet1':

----
id: myvnet2
zone: myevpnzone
tag: 200
ipv4: 10.0.2.1/24
mac address: 8C:73:B2:7B:F9:61 # random MAC, needs to be different on each VNet
----
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR configuration.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:
----
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 # this is the anycast IP of myvnet1
----
Create a second Virtual Machine (vm2) on node3, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:
----
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 # this is the anycast IP of myvnet2
----
Then, you should be able to ping vm2 from vm1, and vm1 from vm2.
If you ping an external IP from 'vm2' on the non-gateway node3, the packet
will go to the configured 'myvnet2' gateway, then will be routed to the
gateway nodes ('node1' or 'node2'), and from there it will leave those nodes
over the default gateway configured on node1 or node2.
NOTE: Of course you need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks to node1 and node2 on your external gateway, so that the
public network can reply back.
If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24
and 10.0.2.0/24 in this example) will be announced dynamically.
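To verify the EVPN control plane on a node, you can query the FRR daemon for
the learned EVPN routes (a standard FRR command; the output depends on your
setup):

----
vtysh -c "show bgp l2vpn evpn"
----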