Software Defined Network
========================
The SDN feature allows you to create virtual networks (vnets).
To enable the SDN feature, you need to install the "libpve-network-perl" package:

----
apt install libpve-network-perl
----
A vnet is a bridge with a vlan or vxlan tag.

The vnets are deployed locally on each node, after a configuration
commit at the datacenter level.

You need to have the "ifupdown2" package installed on each node to manage local
configuration reloading.
The configuration is done at the datacenter level.

The SDN feature has 4 main sections for the configuration:

* SDN
* Zones
* VNets
* Controllers
SDN
---

[thumbnail="screenshot/gui-sdn-status.png"]

This is the main panel, where you can see the deployment status of the zones on the different nodes.

There is an "Apply" button to push and reload the local configuration on the different nodes.
Zones
-----

[thumbnail="screenshot/gui-sdn-zone.png"]

A zone defines the kind of virtual network you want to create.

It can be:

* vlan
* qinq (stacked vlan)
* vxlan (layer 2 vxlan)
* bgp-evpn (vxlan with layer 3 routing)

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict users
to use only a specific zone and the vnets in this zone.
VNets
-----

[thumbnail="screenshot/gui-sdn-vnet-evpn.png"]

A vnet is a bridge that will be deployed locally on the node,
for vm communication (like a classic vmbrX).

The vnet properties are:

* ID: an 8-character ID
* Alias: optional longer name
* Zone: the associated zone of the vnet
* Tag: unique vlan or vxlan id
* ipv4: an anycast ipv4 address (same bridge ip deployed on each node), for bgp-evpn routing only
* ipv6: an anycast ipv6 address (same bridge ip deployed on each node), for bgp-evpn routing only
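As a sketch of how such a vnet definition could look in the cluster-wide SDN configuration (the path /etc/pve/sdn/vnets.cfg, the exact key names, and the zone name `myzone` are assumptions here, not confirmed by this document):

----
# /etc/pve/sdn/vnets.cfg (assumed path and key names)
vnet: vnet1
        zone myzone
        tag 10
        alias customer-net
----

The GUI fields map onto such keys; the anycast ipv4/ipv6 addresses would only be set for vnets in a bgp-evpn zone.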
Controllers
-----------

[thumbnail="screenshot/gui-sdn-controller.png"]

Some zone plugins (currently only bgp-evpn)
need an external controller to manage the vnets control plane.

Zone plugins
------------

Common zone options:

* nodes: restrict the deployment of the vnets of this zone to these nodes only
Vlan zone
---------

[thumbnail="screenshot/gui-sdn-zone-vlan.png"]

This is the simplest plugin: it reuses an existing local bridge or OVS switch,
and manages vlans on it.
The benefit of using the SDN module is that you can create different zones with specific
vnet vlan tags, and restrict your customers to their own zones.

Specific vlan configuration options:

* bridge: a local vlan-aware bridge or ovs switch, already configured on each local node
QinQ zone
---------

[thumbnail="screenshot/gui-sdn-zone-qinq.png"]

QinQ is stacked vlan.
The first vlan tag is defined on the zone (service-vlan), and
the second vlan tag is defined on the vnets.

Your physical network switches need to support stacked vlans!

Specific qinq configuration options:

* bridge: a local vlan-aware bridge, already configured on each local node
* service vlan: the main vlan tag of this zone
* mtu: you need 4 more bytes for the double vlan tag.
You can reduce the mtu to 1496 if your physical interface mtu is 1500.
Vxlan zone
----------

[thumbnail="screenshot/gui-sdn-zone-vxlan.png"]

The vxlan plugin establishes vxlan tunnels (overlay) on top of an existing network (underlay).
You can, for example, create a private ipv4 vxlan network on top of public internet network nodes.
This is a layer 2 tunnel only; no routing between different vnets is possible.

Each vnet will have a specific vxlan id (1 - 16777215).

Specific vxlan configuration options:

* peers address list: an ip list of all nodes you want to communicate with (can also include external nodes)
* mtu: because vxlan encapsulation uses 50 bytes, the mtu needs to be 50 bytes lower
than that of the outgoing physical interface.
Evpn zone
---------

[thumbnail="screenshot/gui-sdn-zone-evpn.png"]

This is the most complex plugin.

BGP-evpn allows you to create a routable layer 3 network.
The vnet of an evpn zone can have an anycast ip address and mac address.
The bridge ip is the same on each node, so a vm can use
it as its gateway.
Routing works only across the vnets of a specific zone, through a vrf.

Specific evpn configuration options:

* vrf vxlan tag: a vxlan-id used for the routing interconnect between vnets;
it must be different from the vxlan-ids of the vnets
* controller: an evpn controller needs to be defined first (see the evpn controller section)
* mtu: because vxlan encapsulation uses 50 bytes, the mtu needs to be 50 bytes lower
than that of the outgoing physical interface.
Evpn controller
---------------

[thumbnail="screenshot/gui-sdn-controller-evpn.png"]

For bgp-evpn, we need a controller to manage the control plane.
The supported software controller is the "frr" router.
You need to install it on each node where you want to deploy the evpn zone:

----
apt install frr
----

Configuration options:

* asn: a unique bgp asn number.
It's recommended to use a private asn number (64512 - 65534, 4200000000 - 4294967294)
* peers: an ip list of all nodes you want to communicate with (can also include external nodes or route reflector servers)

If you want to route traffic from the sdn bgp-evpn network to the external world:

* gateway-nodes: the proxmox nodes from which the bgp-evpn traffic will exit to the external world, through the node's default gateway

If you don't want the gateway nodes to use their default gateway, but, for example, to send traffic to external bgp routers instead:

* gateway-external-peers: 192.168.0.253,192.168.0.254
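Putting the options above together, a controller definition could be sketched like this (the path /etc/pve/sdn/controllers.cfg, the exact key names, and the asn value 65000 are assumptions for illustration):

----
# /etc/pve/sdn/controllers.cfg (assumed path and key names)
evpn: myevpnctl
        asn 65000
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        gateway-nodes node1,node2
----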
Local Deployment Monitoring
---------------------------

[thumbnail="screenshot/gui-sdn-local-status.png"]

After applying the configuration in the main sdn section,
the local configuration is generated locally on each node
in /etc/network/interfaces.d/sdn, and reloaded.

You can monitor the status of the local zones and vnets through the main tree.
Vlan setup example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        #the uplink port name is an example, adjust to your hardware
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----
node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        #the uplink port name is an example, adjust to your hardware
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----
Create a vlan zone, using vmbr0 as bridge, and create a vnet1 with vlan-id 10 in this zone.
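The zone and vnet above could be sketched as config entries (the /etc/pve/sdn/ paths and key names are assumptions, and `vlanzone1` is a hypothetical zone name):

----
# /etc/pve/sdn/zones.cfg (assumed path and key names)
vlan: vlanzone1
        bridge vmbr0

# /etc/pve/sdn/vnets.cfg (assumed path and key names)
vnet: vnet1
        zone vlanzone1
        tag 10
----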
Apply the configuration in the main sdn section, to create the vnets locally on each node.
Create a vm1, with 1 nic on vnet1, on node1.

----
iface eth0 inet static
        address 10.0.3.100/24
----
Create a vm2, with 1 nic on vnet1, on node2.

----
iface eth0 inet static
        address 10.0.3.101/24
----
Then, you should be able to ping between vm1 and vm2.
QinQ setup example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        #the uplink port name is an example, adjust to your hardware
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----
node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        #the uplink port name is an example, adjust to your hardware
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----
Create a qinq zone 'qinqzone1', with service vlan 20.

Create a qinq zone 'qinqzone2', with service vlan 30.

Create a vnet1 with customer vlan-id 100 on qinqzone1.

Create a vnet2 with customer vlan-id 100 on qinqzone2.
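As a sketch of the resulting config entries (the /etc/pve/sdn/ paths, the key names, and the use of `tag` for the service vlan are assumptions, not confirmed by this document):

----
# /etc/pve/sdn/zones.cfg (assumed path and key names)
qinq: qinqzone1
        bridge vmbr0
        tag 20

qinq: qinqzone2
        bridge vmbr0
        tag 30

# /etc/pve/sdn/vnets.cfg (assumed path and key names)
vnet: vnet1
        zone qinqzone1
        tag 100

vnet: vnet2
        zone qinqzone2
        tag 100
----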
Apply the configuration in the main sdn section, to create the vnets locally on each node.
Create a vm1, with 1 nic on vnet1, on node1.

----
iface eth0 inet static
        address 10.0.3.100/24
----
Create a vm2, with 1 nic on vnet1, on node2.

----
iface eth0 inet static
        address 10.0.3.101/24
----
Create a vm3, with 1 nic on vnet2, on node1.

----
iface eth0 inet static
        address 10.0.3.102/24
----
Create a vm4, with 1 nic on vnet2, on node2.

----
iface eth0 inet static
        address 10.0.3.103/24
----
Then, you should be able to ping between vm1 and vm2, and between vm3 and vm4.

But vm1 and vm2 can't ping vm3 and vm4,
as they are in a different zone, with a different service vlan.
Vxlan setup example
-------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        #the uplink port name is an example, adjust to your hardware
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
----
node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        #the uplink port name is an example, adjust to your hardware
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
----
node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        #the uplink port name is an example, adjust to your hardware
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
----
Create a vxlan zone with the peers address list 192.168.0.1,192.168.0.2,192.168.0.3, and create a vnet1 in this zone.
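As a sketch of the resulting config entries (the /etc/pve/sdn/ paths and key names are assumptions; `vxlanzone1` and the vnet tag 100 are hypothetical values; the mtu of 1450 follows from the 50-byte vxlan overhead on a 1500-byte uplink):

----
# /etc/pve/sdn/zones.cfg (assumed path and key names)
vxlan: vxlanzone1
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        mtu 1450

# /etc/pve/sdn/vnets.cfg (assumed path and key names)
vnet: vnet1
        zone vxlanzone1
        tag 100
----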
Apply the configuration in the main sdn section, to create the vnets locally on each node.
Create a vm1, with 1 nic on vnet1, on node2.

----
iface eth0 inet static
        address 10.0.3.100/24
----
Create a vm2, with 1 nic on vnet1, on node3.

----
iface eth0 inet static
        address 10.0.3.101/24
----
Then, you should be able to ping between vm1 and vm2.
Evpn setup example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        #the uplink port name is an example, adjust to your hardware
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
----
node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        #the uplink port name is an example, adjust to your hardware
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
----
node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        #the uplink port name is an example, adjust to your hardware
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
----
Create an evpn controller 'myevpnctl', using a private asn number, with:

peers: 192.168.0.1,192.168.0.2,192.168.0.3
gateway nodes: node1,node2

Create an evpn zone, using this controller:

controller: myevpnctl
Create a vnet1, with the anycast ipv4 address 10.0.1.1/24 and a randomly generated
mac address: 8C:73:B2:7B:F9:60
Create a vnet2, with the anycast ipv4 address 10.0.2.1/24 and another randomly generated
mac address: 8C:73:B2:7B:F9:61 (the mac address needs to be different on each vnet)
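The whole evpn setup could be sketched as config entries (the /etc/pve/sdn/ paths and key names are assumptions; `myevpnzone`, asn 65000, the vrf vxlan tag 10000, and the vnet tags 11000/12000 are hypothetical values):

----
# /etc/pve/sdn/controllers.cfg (assumed path and key names)
evpn: myevpnctl
        asn 65000
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        gateway-nodes node1,node2

# /etc/pve/sdn/zones.cfg (assumed path and key names)
evpn: myevpnzone
        controller myevpnctl
        vrf-vxlan 10000
        mtu 1450

# /etc/pve/sdn/vnets.cfg (assumed path and key names)
vnet: vnet1
        zone myevpnzone
        tag 11000
        ipv4 10.0.1.1/24
        mac 8C:73:B2:7B:F9:60

vnet: vnet2
        zone myevpnzone
        tag 12000
        ipv4 10.0.2.1/24
        mac 8C:73:B2:7B:F9:61
----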
Apply the configuration in the main sdn section, to create the vnets locally on each node
and generate the frr config.
Create a vm1, with 1 nic on vnet1, on node2.

----
iface eth0 inet static
        address 10.0.1.100/24
        #the gateway is the anycast ip of vnet1
        gateway 10.0.1.1
----
Create a vm2, with 1 nic on vnet2, on node3.

----
iface eth0 inet static
        address 10.0.2.100/24
        #the gateway is the anycast ip of vnet2
        gateway 10.0.2.1
----
Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

If, from vm2 on node3, you ping an external ip, the packet will go
to the vnet2 gateway, then be routed to the gateway nodes (node1 or node2),
and from there be routed through the node1 or node2 default gateway.

Of course, you need to add reverse routes for 10.0.1.0/24 and 10.0.2.0/24 to node1 and node2 on your external gateway.
If you have configured an external bgp router, the bgp-evpn routes (10.0.1.0/24 and 10.0.2.0/24)
will be announced dynamically.