[[chapter_pvesdn]]
Software Defined Network
========================
ifndef::manvolnum[]
:pve-toplevel:
endif::manvolnum[]

The **S**oftware **D**efined **N**etwork (SDN) feature allows you to create
virtual networks (VNets) at the datacenter level.

WARNING: SDN is currently an **experimental feature** in {pve}. This
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.


[[pvesdn_installation]]
Installation
------------

To enable the experimental Software Defined Network (SDN) integration, you need
to install the `libpve-network-perl` and `ifupdown2` packages on every node:

----
apt update
apt install libpve-network-perl ifupdown2
----

NOTE: {pve} version 7 and above ship with `ifupdown2` pre-installed.

After this, you need to add the following line to the end of the
`/etc/network/interfaces` configuration file, so that the SDN configuration gets
included and activated.

----
source /etc/network/interfaces.d/*
----


Basic Overview
--------------

The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones, where a zone is its own virtually
separated network area. A 'VNet' is a virtual network that belongs to a zone.
Depending on which type or plugin the zone uses, it can behave differently and
offer different features, advantages, and disadvantages. Normally, a 'VNet'
appears as a common Linux bridge with either a VLAN or 'VXLAN' tag; however,
some plugins can also use layer 3 routing for control. 'VNets' are deployed
locally on each node, after being configured from the cluster-wide datacenter
SDN administration interface.


Main Configuration
~~~~~~~~~~~~~~~~~~

Configuration is done at the datacenter (cluster-wide) level and is saved in
files located in the shared configuration file system:
`/etc/pve/sdn`
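
For example, after zones, VNets, subnets, and a controller have been
configured, the directory typically contains one section-config file per
object type; a sketch of what `ls /etc/pve/sdn` might then show:

----
controllers.cfg  subnets.cfg  vnets.cfg  zones.cfg
----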

On the web-interface, SDN features 3 main sections:

* SDN: An overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: Create virtual network bridges and manage subnets

In addition to this, the following options are offered:

* Controller: For controlling layer 3 routing in complex setups

* Subnets: Used to define IP networks on VNets

* IPAM: Enables the use of external tools for IP address management (guest
IPs)

* DNS: Define a DNS server API for registering virtual guests' hostname and IP
addresses

[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of zones
on different nodes.

The 'Apply' button is used to push and reload the local configuration on all
cluster nodes.
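
The same reload can also be triggered from the command line; a minimal sketch,
assuming the `pvesh` CLI and the `PUT /cluster/sdn` API endpoint that the
button uses:

----
pvesh set /cluster/sdn
----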


[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~

After applying the configuration through the main SDN panel,
the local network configuration is generated locally on each node in
the file `/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.

You can monitor the status of local zones and VNets through the main tree.
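
To inspect the generated configuration on a node, or to reload it manually,
you can use the following commands (`ifreload` is part of ifupdown2):

----
cat /etc/network/interfaces.d/sdn
ifreload -a
----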


[[pvesdn_config_zone]]
Zones
-----

A zone defines a virtually separated network. Zones can be restricted to
specific nodes and assigned permissions, in order to restrict users to a certain
zone and its contained VNets.

Different technologies can be used for separation:

* VLAN: Virtual LANs are the classic method of subdividing a LAN

* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: Layer 2 VXLAN

* Simple: Isolated Bridge. A simple layer 3 routing bridge (NAT)

* EVPN (BGP EVPN): VXLAN using layer 3 border gateway protocol (BGP) routing

Common options
~~~~~~~~~~~~~~

The following options are available for all zone types:

nodes:: The nodes on which the zone and associated VNets should be deployed

ipam:: Optional. Use an IP Address Management (IPAM) tool to manage IPs in the
zone.

dns:: Optional. DNS API server.

reversedns:: Optional. Reverse DNS API server.

dnszone:: Optional. DNS domain name. Used to register hostnames, such as
`<hostname>.<domain>`. The DNS zone must already exist on the DNS server.


[[pvesdn_zone_plugin_simple]]
Simple Zones
~~~~~~~~~~~~

This is the simplest plugin. It will create an isolated VNet bridge.
This bridge is not linked to a physical interface, and VM traffic is only
local to the node(s).
It can also be used in NAT or routed setups.
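
As an illustration, a simple zone entry in `/etc/pve/sdn/zones.cfg` could look
like the following sketch (the zone ID `simplezone1` and the node names are
placeholders):

----
simple: simplezone1
        nodes node1,node2
----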

[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This plugin reuses an existing local Linux or OVS bridge, and manages the VLANs
on it. The benefit of using the SDN module is that you can create different
zones with specific VNet VLAN tags, and restrict virtual machines to separated
zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already configured on *each*
local node.

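An example `zones.cfg` entry for a VLAN zone could accordingly look like this
sketch (zone ID and bridge name are placeholders):

----
vlan: myvlanzone
        bridge vmbr0
----
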
[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is also known as VLAN stacking: the first VLAN tag is defined for the
zone (the 'service-vlan'), and the second VLAN tag is defined for the
VNets.

NOTE: Your physical network switches must support stacked VLANs for this
configuration!

Below are the configuration options specific to QinQ:

bridge:: A local, VLAN-aware bridge that is already configured on each local
node

service vlan:: The main VLAN tag of this zone

service vlan protocol:: Allows you to choose between an 802.1q (default) or
802.1ad service VLAN type.

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs.
For example, you must reduce the MTU to `1496` if your physical interface MTU is
`1500`.

[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin establishes a tunnel (overlay) on top of an existing
network (underlay). This encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of public internet network
nodes.

This is a layer 2 tunnel only, so no routing between different VNets is
possible.

Each VNet will have a specific VXLAN ID in the range 1 - 16777215.

Specific VXLAN configuration options:

peers address list:: A list of IP addresses of each node through which you
want to communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
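
After applying the configuration, you can verify on a node that the VXLAN
interfaces were created with the expected IDs, using `iproute2`:

----
# show VXLAN interfaces, including VXLAN ID and UDP port details
ip -d link show type vxlan
----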

[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all the supported plugins.

BGP-EVPN allows you to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on each
node, meaning a virtual guest can use this address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

The configuration options specific to EVPN are as follows:

VRF VXLAN tag:: This is a VXLAN-ID used for routing interconnect between VNets.
It must be different than the VXLAN-ID of the VNets.

controller:: An EVPN-controller must be defined first (see controller plugins
section).

VNet MAC address:: A unique, anycast MAC address for all VNets in this zone.
Will be auto-generated if not defined.

Exit Nodes:: Optional. This is used if you want to define some {pve} nodes as
exit gateways from the EVPN network, through the real network. The configured
nodes will announce a default route in the EVPN network.

Primary Exit Node:: Optional. If you use multiple exit nodes, this forces
traffic to a primary exit node, instead of load-balancing on all nodes. This
is required if you want to use SNAT or if your upstream router doesn't support
ECMP.

Exit Nodes local routing:: Optional. This is a special option if you need to
reach a VM/CT service from an exit node. (By default, the exit nodes only
allow forwarding traffic between the real network and the EVPN network).

Advertise Subnets:: Optional. Use this if you have silent VMs/CTs (for example,
if you have multiple IPs and the anycast gateway doesn't see traffic from these
IPs, those IP addresses won't be reachable inside the EVPN network). This
option will announce the full subnet in the EVPN network in that case.

Disable Arp-Nd Suppression:: Optional. Don't suppress ARP or ND packets.
This is required if you use floating IPs in your guest VMs
(IP and MAC addresses are moved between systems).

Route-target import:: Optional. Allows you to import a list of external EVPN
route targets. Used for cross-DC or different EVPN network interconnects.

MTU:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
less than the maximal MTU of the outgoing physical interface.


[[pvesdn_config_vnet]]
VNets
-----

A `VNet` is, in its basic form, a Linux bridge that will be deployed locally on
the node and used for virtual machine communication.

The VNet configuration properties are:

ID:: An 8 character ID to name and identify a VNet

Alias:: Optional longer name, if the ID isn't enough

Zone:: The associated zone for this VNet

Tag:: The unique VLAN or VXLAN ID

VLAN Aware:: Enable adding an extra VLAN tag in the virtual machine or
container's vNIC configuration, to allow the guest OS to manage the VLAN tag.

[[pvesdn_config_subnet]]
Subnets
~~~~~~~

A subnetwork (subnet) allows you to define a specific IP network
(IPv4 or IPv6). For each VNet, you can define one or more subnets.

A subnet can be used to:

* Restrict the IP addresses you can define on a specific VNet
* Assign routes/gateways on a VNet in layer 3 zones
* Enable SNAT on a VNet in layer 3 zones
* Auto assign IPs on virtual guests (VM or CT) through IPAM plugins
* DNS registration through DNS plugins

If an IPAM server is associated with the subnet's zone, the subnet prefix will
be automatically registered in the IPAM.

Subnet properties are:

ID:: A CIDR network address, for example 10.0.0.0/8

Gateway:: The IP address of the network's default gateway. On layer 3 zones
(Simple/EVPN plugins), it will be deployed on the VNet.

SNAT:: Optional. Enable SNAT for layer 3 zones (Simple/EVPN plugins), for this
subnet. The subnet's source IP will be NATted to the server's outgoing
interface/IP. On EVPN zones, this is only done on EVPN gateway-nodes.

Dnszoneprefix:: Optional. Add a prefix to the domain registration, like
<hostname>.prefix.<domain>
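
For illustration, a subnet with a gateway and SNAT enabled could look like the
following sketch in `/etc/pve/sdn/subnets.cfg` (the ID format and key names
shown are assumptions based on the properties above):

----
subnet: myzone-10.0.1.0-24
        vnet myvnet1
        gateway 10.0.1.1
        snat 1
----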

[[pvesdn_config_controllers]]
Controllers
-----------

Some zone types need an external controller to manage the VNet control-plane.
Currently this is only required for the `bgp-evpn` zone plugin.

[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones.

----
apt install frr frr-pythontools
----

Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end up
breaking global routing by mistake.

peers:: An IP list of all nodes through which you want to communicate for the
EVPN (could also be external nodes or route reflector servers)


[[pvesdn_controller_plugin_BGP]]
BGP Controller
~~~~~~~~~~~~~~

The BGP controller is not used directly by a zone.
You can use it to configure FRR to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node, thus doing
EBGP.

Configuration options:

node:: The node of this BGP controller

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number in the range (64512 - 65534) or (4200000000 - 4294967294), as otherwise
you could break global routing by mistake.

peers:: A list of peer IP addresses you want to communicate with using the
underlying BGP network.

ebgp:: If your peer's remote-AS is different, this enables EBGP.

loopback:: Use a loopback or dummy interface as the source of the EVPN network
(for multipath).

ebgp-multihop:: Increase the number of hops to reach peers, in case they are
not directly connected or they use loopback.

bgp-multipath-as-path-relax:: Allow ECMP if your peers have different ASN.
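
As a sketch, a per-node BGP controller definition using the options above
might look like this (all values are placeholders):

----
id: bgpnode1
node: node1
asn: 65001
peers: 192.168.0.253,192.168.0.254
ebgp: on
----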

[[pvesdn_config_ipam]]
IPAMs
-----

IPAM (IP Address Management) tools are used to manage/assign the IP addresses
of guests on the network. They can be used, for example, to find free IP
addresses when you create a VM/CT (not yet implemented).

An IPAM can be associated with one or more zones, to provide IP addresses
for all subnets defined in those zones.

[[pvesdn_ipam_plugin_pveipam]]
{pve} IPAM Plugin
~~~~~~~~~~~~~~~~~

This is the default internal IPAM for your {pve} cluster, if you don't have
external IPAM software.

[[pvesdn_ipam_plugin_phpipam]]
phpIPAM Plugin
~~~~~~~~~~~~~~
https://phpipam.net/

You need to create an application in phpIPAM and add an API token with admin
privileges.

The phpIPAM configuration properties are:

url:: The REST-API endpoint: `http://phpipam.domain.com/api/<appname>/`

token:: An API access token

section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
installations use `sectionid=1` for customers.
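
Putting these together, a phpIPAM entry in `/etc/pve/sdn/ipams.cfg` could look
like the following sketch (key spelling is an assumption; URL and token are
placeholders):

----
phpipam: myphpipam
        url http://phpipam.domain.com/api/myapp/
        token <your-api-token>
        section 1
----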

[[pvesdn_ipam_plugin_netbox]]
NetBox IPAM Plugin
~~~~~~~~~~~~~~~~~~

NetBox is an IP address management (IPAM) and datacenter infrastructure
management (DCIM) tool. See the source code repository for details:
https://github.com/netbox-community/netbox

You need to create an API token in NetBox to use it:
https://netbox.readthedocs.io/en/stable/api/authentication

The NetBox configuration properties are:

url:: The REST API endpoint: `http://yournetbox.domain.com/api`

token:: An API access token
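
Analogous to the phpIPAM example, a NetBox entry could look like the following
sketch (again, the key names are assumptions and the values are placeholders):

----
netbox: mynetbox
        url http://yournetbox.domain.com/api
        token <your-api-token>
----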

[[pvesdn_config_dns]]
DNS
---

The DNS plugin in {pve} SDN is used to define a DNS API server for registration
of your hostname and IP address. A DNS configuration is associated with one or
more zones, to provide DNS registration for all the subnet IPs configured for
a zone.

[[pvesdn_dns_plugin_powerdns]]
PowerDNS Plugin
~~~~~~~~~~~~~~~
https://doc.powerdns.com/authoritative/http-api/index.html

You need to enable the web server and the API in your PowerDNS config:

----
api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081
----

The PowerDNS configuration options are:

url:: The REST API endpoint: `http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost`

key:: An API access key

ttl:: The default TTL for records
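
A matching PowerDNS entry in `/etc/pve/sdn/dns.cfg` might then look like this
sketch (the key names are assumptions; key and TTL are placeholders):

----
powerdns: mypowerdns
        url http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
        key <your-api-key>
        ttl 3600
----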


Examples
--------

[[pvesdn_setup_example_vlan]]
VLAN Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plaintext configuration content here, almost everything
should be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a VLAN zone named `myvlanzone':

----
id: myvlanzone
bridge: vmbr0
----

Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.

----
id: myvnet1
zone: myvlanzone
tag: 10
----

Apply the configuration through the main SDN panel, to create VNets locally on
each node.

Create a Debian-based virtual machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second virtual machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Following this, you should be able to ping between both VMs over that network.
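
For example, from vm1:

----
ping 10.0.3.101
----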


[[pvesdn_setup_example_qinq]]
QinQ Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plaintext configuration content here, almost everything
should be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a QinQ zone named `qinqzone1' with service VLAN 20.

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----

Create another QinQ zone named `qinqzone2' with service VLAN 30.

----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----

Create a VNet named `myvnet1' with customer VLAN-ID 100 on the previously
created `qinqzone1' zone.

----
id: myvnet1
zone: qinqzone1
tag: 100
----

Create a second VNet named `myvnet2' with customer VLAN-ID 100 on the
previously created `qinqzone2' zone.

----
id: myvnet2
zone: qinqzone2
tag: 100
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based virtual machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second virtual machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Create a third virtual machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----

Create another virtual machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, neither 'vm1' nor 'vm2' can ping 'vm3' or
'vm4', as they are in a different zone with a different service VLAN.


[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
~~~~~~~~~~~~~~~~~~~

TIP: While we show plaintext configuration content here, almost everything
is configurable through the web-interface.

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create a VXLAN zone named `myvxlanzone', using a lower MTU to ensure the extra
50 bytes of the VXLAN header can fit. Add all previously configured IPs from
the nodes to the peer address list.

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

----
id: myvnet1
zone: myvxlanzone
tag: 100000
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based virtual machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM (note the lower MTU).

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second virtual machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.


[[pvesdn_setup_example_evpn]]
EVPN Setup Example
~~~~~~~~~~~~~~~~~~

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create an EVPN controller, using a private ASN number and the above node
addresses as peers.

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
----

Create an EVPN zone named `myevpnzone', using the previously created
EVPN-controller. Define 'node1' and 'node2' as exit nodes.

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
vnet mac address: 32:F4:05:FE:6C:0A
exitnodes: node1,node2
----

Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone'.

----
id: myvnet1
zone: myevpnzone
tag: 11000
----

Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway on `myvnet1'.

----
subnet: 10.0.1.0/24
gateway: 10.0.1.1
----

Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone',
but with a different IPv4 CIDR network.

----
id: myvnet2
zone: myevpnzone
tag: 12000
----

Create a different subnet 10.0.2.0/24 with 10.0.2.1 as gateway on `myvnet2'.

----
subnet: 10.0.2.0/24
gateway: 10.0.2.1
----


Apply the configuration from the main SDN web-interface panel to create VNets
locally on each node and generate the FRR config.
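
You can review the FRR configuration that was generated on a node with the
`vtysh` shell that ships with FRR:

----
vtysh -c "show running-config"
----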

Create a Debian-based virtual machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
# 10.0.1.1 is the anycast gateway address deployed on myvnet1
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1
        mtu 1450
----

Create a second virtual machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
# 10.0.2.1 is the anycast gateway address deployed on myvnet2
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1
        mtu 1450
----


Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from 'vm2' on the non-gateway node 'node3', the
packet will go to the configured 'myvnet2' gateway, then will be routed to the
exit nodes ('node1' or 'node2'), and from there it will leave those nodes over
the default gateway configured on node1 or node2.

NOTE: You need to add reverse routes for the '10.0.1.0/24' and '10.0.2.0/24'
networks to node1 and node2 on your external gateway, so that the public
network can reply back.

If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24
and 10.0.2.0/24 in this example) will be announced dynamically.
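
To verify that the EVPN routes are present on a node, you can query FRR's BGP
daemon:

----
vtysh -c "show bgp l2vpn evpn"
----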


Notes
-----

VXLAN IPSEC Encryption
~~~~~~~~~~~~~~~~~~~~~~

If you need to add encryption on top of a VXLAN, it's possible to do so with
IPSEC, through `strongswan`. You'll need to reduce the 'MTU' by 60 bytes (IPv4)
or 80 bytes (IPv6) to handle encryption.

So, with a default real MTU of 1500, you need to use an MTU of 1370
(1370 + 80 (IPSEC) + 50 (VXLAN) == 1500).

.Install strongswan
----
apt install strongswan
----

Add configuration to `/etc/ipsec.conf'. We only need to encrypt traffic from
the VXLAN UDP port '4789'.

----
conn %default
        ike=aes256-sha1-modp1024!  # the fastest, but reasonably secure cipher on modern HW
        esp=aes256-sha1!
        leftfirewall=yes           # this is necessary when using Proxmox VE firewall rules

conn output
        rightsubnet=%dynamic[udp/4789]
        right=%any
        type=transport
        authby=psk
        auto=route

conn input
        leftsubnet=%dynamic[udp/4789]
        type=transport
        authby=psk
        auto=route
----

Then generate a pre-shared key with:

----
openssl rand -base64 128
----

and add the key to `/etc/ipsec.secrets', so that the file contents look like:

----
: PSK <generatedbase64key>
----

You need to copy the PSK and the configuration onto the other nodes.
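
For example, using `scp` (the target node name is a placeholder):

----
scp /etc/ipsec.conf /etc/ipsec.secrets root@<other-node>:/etc/
----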