[[chapter_pvesdn]]
Software Defined Network
========================
ifndef::manvolnum[]
:pve-toplevel:
endif::manvolnum[]

The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
virtual networks (VNets) at the datacenter level.

WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.


[[pvesdn_installation]]
Installation
------------

To enable the experimental SDN integration, you need to install the
`libpve-network-perl` and `ifupdown2` packages on every node:

----
apt update
apt install libpve-network-perl ifupdown2
----

After that, you need to add the following line at the end of the
`/etc/network/interfaces` configuration file, so that the SDN configuration
gets included and activated:

----
source /etc/network/interfaces.d/*
----
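
If you prefer doing this from the shell, the following is a minimal sketch
that appends the line only if it is not already present (adjust to your
setup):

----
grep -qxF 'source /etc/network/interfaces.d/*' /etc/network/interfaces || \
  echo 'source /etc/network/interfaces.d/*' >> /etc/network/interfaces
----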


Basic Overview
--------------

The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones; a zone is its own virtually separated
network area. A 'VNet' is a virtual network that belongs to a zone. Depending
on which type or plugin the zone uses, it can behave differently and offer
different features, advantages or disadvantages.
Normally a 'VNet' appears as a common Linux bridge with either a VLAN or
'VXLAN' tag, but some zones can also use layer 3 routing for control.
The 'VNets' are deployed locally on each node, after the configuration has
been committed from the cluster-wide datacenter SDN administration interface.


Main configuration
~~~~~~~~~~~~~~~~~~

The configuration is done at the datacenter (cluster-wide) level. It will be
saved in configuration files located in the shared configuration file system:
`/etc/pve/sdn`
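
Depending on which SDN objects you create, the configuration is split into
per-type files inside that directory. As a rough sketch, a cluster using most
features could contain something like the following (file names assumed from a
typical setup; your directory may contain fewer files):

----
# ls /etc/pve/sdn
controllers.cfg  dns.cfg  ipams.cfg  subnets.cfg  vnets.cfg  zones.cfg
----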

On the web interface, the SDN feature has 3 main sections for the configuration:

* SDN: An overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: Create virtual network bridges and manage subnets

In addition, the following options are available:

* Controller: For complex setups, to control layer 3 routing

* Sub-nets: Used to define IP networks on VNets

* IPAM: Use external tools for IP address management (guest IPs)

* DNS: Define a DNS server API for registering virtual guests' hostnames and
IP addresses

[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of zones
on the different nodes.

The 'Apply' button is used to push and reload the local configuration on all
cluster nodes.


[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~

After applying the configuration through the main SDN web-interface panel, the
local network configuration is generated on each node in
`/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.

You can monitor the status of local zones and VNets through the main tree.
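
To inspect what was generated on a node, or to re-apply it manually, you can
look at the generated file and trigger a reload with ifupdown2:

----
# show the locally generated SDN interface configuration
cat /etc/network/interfaces.d/sdn

# reload the network configuration
ifreload -a
----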


[[pvesdn_config_zone]]
Zones
-----

A zone defines a virtually separated network.

It can use different technologies for separation:

* VLAN: Virtual LANs are the classic method to sub-divide a LAN

* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: Layer 2 VXLAN tunnels

* Simple: Isolated bridge, simple layer 3 routing bridge (NAT)

* bgp-evpn: VXLAN using layer 3 Border Gateway Protocol routing

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict a user to only
use a specific zone and only the VNets in that zone.

Common options
~~~~~~~~~~~~~~

The following options are available for all zone types.

nodes:: The nodes on which the VNets configured for this zone should be
deployed and can be used.

ipam:: Optional. Use an IPAM tool to manage the IPs in this zone.

dns:: Optional. DNS API server.

reversedns:: Optional. Reverse DNS API server.

dnszone:: Optional. DNS domain name. Used to register hostnames, like
`<hostname>.<domain>`. The DNS zone must already exist on the DNS server.


[[pvesdn_zone_plugin_simple]]
Simple Zones
~~~~~~~~~~~~

This is the simplest plugin. It will create an isolated VNet bridge.
This bridge is not linked to a physical interface, and VM traffic is only
local to the node(s).
It can also be used in NAT or routed setups.

[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This plugin reuses an existing local Linux or OVS bridge and manages the VLANs
on it.
The benefit of using the SDN module is that you can create different zones
with specific VNet VLAN tags, and restrict virtual machines to separated
zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already configured on *each*
local node.
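
Behind the scenes, such a zone becomes an entry in `/etc/pve/sdn/zones.cfg`.
A rough sketch of what a VLAN zone restricted to two nodes could look like
(key names assumed; normally the web interface writes this file for you):

----
vlan: myvlanzone
        bridge vmbr0
        nodes node1,node2
----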

[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the
so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local, VLAN-aware bridge that is already configured on each local
node

service vlan:: The main VLAN tag of this zone

service vlan protocol:: Allows you to use an 802.1q (default) or 802.1ad
service VLAN type.

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, you must reduce the MTU to `1496` if your physical
interface MTU is `1500`.

[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin establishes a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of public internet network
nodes.
This is a layer 2 tunnel only; no routing between different VNets is possible.

Each VNet will use a specific VXLAN id from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of IP addresses of all nodes through which you
want to communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
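
After applying the configuration, each VXLAN VNet shows up locally as a bridge
plus a VXLAN tunnel interface. A hedged way to inspect the tunnel devices and
their VNI, local address and destination port with standard iproute2 tools:

----
ip -d link show type vxlan
----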

[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all the supported plugins.

BGP-EVPN allows you to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, meaning a virtual guest can use this address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN tag:: This is a VXLAN id used for routing interconnect between
VNets. It must be different from the VXLAN ids of the VNets.

controller:: An EVPN controller needs to be defined first (see the controller
plugins section).

VNet MAC address:: A unique, anycast MAC address for all VNets in this zone.
Will be auto-generated if not defined.

Exit Nodes:: This is used if you want to define some {pve} nodes as exit
gateways from the EVPN network to the real network. The configured nodes will
announce a default route in the EVPN network.

Advertise Subnets:: Optional. If you have silent VMs/CTs (for example, if you
have multiple IPs per interface and the anycast gateway doesn't see traffic
from these IPs, the IP addresses won't be reachable inside the EVPN network),
this option will announce the full subnet in the EVPN network.

Exit Nodes local routing:: Optional. This is a special option if you need to
reach a VM/CT service from an exit node. (By default, the exit nodes only
allow forwarding traffic between the real network and the EVPN network.)

MTU:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than the maximal MTU of the outgoing physical interface.


[[pvesdn_config_vnet]]
VNets
-----

A `VNet`, in its basic form, is just a Linux bridge that will be deployed
locally on the node and used for virtual machine communication.

VNet properties are:

ID:: An 8-character ID to name and identify a VNet

Alias:: Optional longer name, if the ID isn't enough

Zone:: The associated zone for this VNet

Tag:: The unique VLAN or VXLAN id

VLAN Aware:: Allows adding an extra VLAN tag in the virtual machine or
container vNIC configuration, or allows the guest OS to manage the VLAN tag.

[[pvesdn_config_subnet]]
Sub-Nets
~~~~~~~~

A sub-network (subnet or sub-net) allows you to define a specific IP network
(IPv4 or IPv6). For each VNet, you can define one or more subnets.

A subnet can be used to:

* restrict the IP addresses you can define on a specific VNet
* assign routes/gateways on a VNet in layer 3 zones
* enable SNAT on a VNet in layer 3 zones
* auto-assign IPs on virtual guests (VM or CT) through an IPAM plugin
* DNS registration through DNS plugins

If an IPAM server is associated with the subnet's zone, the subnet prefix will
be automatically registered in the IPAM.


Subnet properties are:

ID:: A CIDR network address, for example 10.0.0.0/8

Gateway:: The IP address of the network's default gateway. On layer 3 zones
(simple/EVPN plugins), it will be deployed on the VNet.

SNAT:: Optional. Enable SNAT for layer 3 zones (simple/EVPN plugins), for this
subnet. The subnet's source IP will be NATted to the server's outgoing
interface/IP. On EVPN zones, this is only done on the EVPN gateway nodes.

Dnszoneprefix:: Optional. Add a prefix to the domain registration, like
<hostname>.prefix.<domain>
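
Like zones, VNets and subnets end up in the shared SDN configuration. A rough,
hedged sketch of matching `/etc/pve/sdn/vnets.cfg` and
`/etc/pve/sdn/subnets.cfg` entries (key names, the subnet ID format and the
example zone `mysimplezone' are assumed; normally you create these through the
web interface):

----
# /etc/pve/sdn/vnets.cfg
vnet: myvnet1
        zone mysimplezone

# /etc/pve/sdn/subnets.cfg
subnet: mysimplezone-10.0.3.0-24
        vnet myvnet1
        gateway 10.0.3.1
        snat 1
----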


[[pvesdn_config_controllers]]
Controllers
-----------

Some zone types need an external controller to manage the VNet control-plane.
Currently this is only required for the `bgp-evpn` zone plugin.

[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones:

----
apt install frr frr-pythontools
----

Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end up
breaking global routing, or being broken by it, by mistake.

peers:: An IP list of all nodes through which you want to communicate for the
EVPN (these could also be external nodes or route reflector servers)
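
Once the controller is set up and the SDN configuration has been applied, FRR
runs on the nodes and you can check the BGP/EVPN state with FRR's own shell,
for example:

----
# check the BGP sessions to the configured peers
vtysh -c "show bgp summary"

# show the learned EVPN routes
vtysh -c "show bgp l2vpn evpn"
----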


[[pvesdn_controller_plugin_BGP]]
BGP Controller
~~~~~~~~~~~~~~

The BGP controller is not used directly by a zone.
You can use it to configure FRR to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node, thus doing
EBGP.

Configuration options:

node:: The node of this BGP controller

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number from the range (64512 - 65534) or (4200000000 - 4294967294), as
otherwise you could end up breaking global routing, or being broken by it, by
mistake.

peers:: An IP list of peers you want to communicate with for the underlying
BGP network.

ebgp:: If your peer's remote AS is different, this enables EBGP.

loopback:: Use a loopback or dummy interface as the source of the EVPN network
(for multipath).

ebgp-multihop:: If the peers are not directly connected or they use loopback
interfaces, you can increase the number of hops to reach them.

[[pvesdn_config_ipam]]
IPAMs
-----

IPAM (IP Address Management) tools are used to manage/assign the IP addresses
of guests on the network. They can be used, for example, to find free IP
addresses when you create a VM/CT (not yet implemented).

An IPAM is associated with one or more zones, to provide IP addresses for all
subnets defined in those zones.


[[pvesdn_ipam_plugin_pveipam]]
{pve} IPAM plugin
~~~~~~~~~~~~~~~~~

This is the default internal IPAM for your {pve} cluster, if you don't have
external IPAM software.

[[pvesdn_ipam_plugin_phpipam]]
phpIPAM plugin
~~~~~~~~~~~~~~
https://phpipam.net/

You need to create an application in phpIPAM and add an API token with admin
permission.

phpIPAM properties are:

url:: The REST-API endpoint: `http://phpipam.domain.com/api/<appname>/`
token:: An API access token
section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
installations use `sectionid=1` for customers.
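
Before configuring the plugin, you can verify that the application token works
by querying the phpIPAM REST API directly; a hedged example (URL, application
name and token are placeholders, and the exact authentication scheme depends
on how the application was created):

----
curl -H "token: <yourapitoken>" http://phpipam.domain.com/api/<appname>/sections/
----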

[[pvesdn_ipam_plugin_netbox]]
NetBox IPAM plugin
~~~~~~~~~~~~~~~~~~

NetBox is an IP address management (IPAM) and data center infrastructure
management (DCIM) tool, see the source code repository for details:
https://github.com/netbox-community/netbox

You need to create an API token in NetBox, see
https://netbox.readthedocs.io/en/stable/api/authentication

NetBox properties are:

url:: The REST API endpoint: `http://yournetbox.domain.com/api`
token:: An API access token
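
To check that the token has access, you can query the NetBox IPAM API
directly, for example (URL and token are placeholders):

----
curl -H "Authorization: Token <yourapitoken>" http://yournetbox.domain.com/api/ipam/prefixes/
----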

[[pvesdn_config_dns]]
DNS
---

The DNS plugin in {pve} SDN is used to define a DNS API server for the
registration of your hostnames and IP addresses. A DNS configuration is
associated with one or more zones, to provide DNS registration for all the
sub-net IPs configured for a zone.

[[pvesdn_dns_plugin_powerdns]]
PowerDNS plugin
~~~~~~~~~~~~~~~
https://doc.powerdns.com/authoritative/http-api/index.html

You need to enable the web server and the API in your PowerDNS configuration:

----
api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081
----

PowerDNS properties are:

url:: The REST API endpoint: `http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost`
key:: An API access key
ttl:: The default TTL for records
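
You can test the key and endpoint with a plain HTTP request before entering
them, for example:

----
curl -H "X-API-Key: arandomgeneratedstring" http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
----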


Examples
--------

[[pvesdn_setup_example_vlan]]
VLAN Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a VLAN zone named `myvlanzone':

----
id: myvlanzone
bridge: vmbr0
----

Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.

----
id: myvnet1
zone: myvlanzone
tag: 10
----

Apply the configuration through the main SDN panel, to create VNets locally on
each node.
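
To verify the deployment, the VNet should now show up as a bridge on both
nodes; its name matches the VNet ID (a hedged check, the exact set of
interfaces depends on the zone type):

----
ip -br link show myvnet1
bridge vlan show
----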

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Then, you should be able to ping between both VMs over that network.


[[pvesdn_setup_example_qinq]]
QinQ Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a QinQ zone named `qinqzone1' with service VLAN 20

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----

Create another QinQ zone named `qinqzone2' with service VLAN 30

----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----

Create a VNet named `myvnet1' with customer VLAN-id 100 on the previously
created `qinqzone1' zone.

----
id: myvnet1
zone: qinqzone1
tag: 100
----

Create a second VNet named `myvnet2' with customer VLAN-id 100 on the
previously created `qinqzone2' zone.

----
id: myvnet2
zone: qinqzone2
tag: 100
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----

Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, neither of the VMs 'vm1' or 'vm2' can ping
'vm3' or 'vm4', as they are in a different zone with a different service VLAN.


[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
~~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create a VXLAN zone named `myvxlanzone'. Use a lower MTU to ensure the extra
50 bytes of the VXLAN header can fit. Add all previously configured IPs from
the nodes to the peer address list.

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

----
id: myvnet1
zone: myvxlanzone
tag: 100000
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM; note the lower MTU here:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.
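
To make sure the lower MTU is honored end to end, you can additionally ping
with the 'do not fragment' flag and a payload that fills the 1450 byte MTU
(1450 bytes minus 20 bytes IPv4 header and 8 bytes ICMP header = 1422 bytes of
payload):

----
# from vm1: should succeed at the VNet MTU ...
ping -M do -s 1422 10.0.3.101

# ... and fail above it, as the packet would exceed the 1450 byte MTU
ping -M do -s 1423 10.0.3.101
----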


[[pvesdn_setup_example_evpn]]
EVPN Setup Example
~~~~~~~~~~~~~~~~~~

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create an EVPN controller, using a private ASN number and the above node
addresses as peers.

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
----

Create an EVPN zone named `myevpnzone' using the previously created EVPN
controller. Define 'node1' and 'node2' as exit nodes.

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
vnet mac address: 32:F4:05:FE:6C:0A
exitnodes: node1,node2
----

Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone'.

----
id: myvnet1
zone: myevpnzone
tag: 11000
----

Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway on `myvnet1'.

----
subnet: 10.0.1.0/24
gateway: 10.0.1.1
----

Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone'
and a different IPv4 CIDR network.

----
id: myvnet2
zone: myevpnzone
tag: 12000
----

Create a different subnet 10.0.2.0/24 with 10.0.2.1 as gateway on `myvnet2'.

----
subnet: 10.0.2.0/24
gateway: 10.0.2.1
----


Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR configuration.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1   # this is the IP of VNet myvnet1
        mtu 1450
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1   # this is the IP of VNet myvnet2
        mtu 1450
----


Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from 'vm2' on the non-gateway node 'node3', the
packet will go to the configured 'myvnet2' gateway, then will be routed to the
exit nodes ('node1' or 'node2') and from there it will leave those nodes over
the default gateway configured on node1 or node2.

NOTE: You need to add reverse routes for the '10.0.1.0/24' and '10.0.2.0/24'
networks to node1 and node2 on your external gateway, so that the public
network can reply back.

If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24
and 10.0.2.0/24 in this example) will be announced dynamically.
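
On the nodes, you can check what the EVPN control plane has learned with FRR's
shell, for example (the output depends on which guests are running):

----
# EVPN routes (MAC/IP advertisements) known to this node
vtysh -c "show bgp l2vpn evpn"

# routes installed in the per-zone VRFs
vtysh -c "show ip route vrf all"
----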


Notes
-----

VXLAN IPSEC Encryption
~~~~~~~~~~~~~~~~~~~~~~

If you need to add encryption on top of a VXLAN, it is possible to do so with
IPSEC, through `strongswan`. You'll need to reduce the 'MTU' by 60 bytes
(IPv4) or 80 bytes (IPv6) to handle encryption.

So with a default real MTU of 1500, you need to use an MTU of 1370 (1370 + 80
(IPSEC) + 50 (VXLAN) == 1500).

.Install strongswan
----
apt install strongswan
----

Add the configuration to `/etc/ipsec.conf'. We only need to encrypt traffic
from the VXLAN UDP port '4789'.

----
conn %default
        ike=aes256-sha1-modp1024!  # the fastest, but reasonably secure cipher on modern HW
        esp=aes256-sha1!
        leftfirewall=yes           # this is necessary when using Proxmox VE firewall rules

conn output
        rightsubnet=%dynamic[udp/4789]
        right=%any
        type=transport
        authby=psk
        auto=route

conn input
        leftsubnet=%dynamic[udp/4789]
        type=transport
        authby=psk
        auto=route
----

Then generate a pre-shared key with

----
openssl rand -base64 128
----

and copy the key to `/etc/ipsec.secrets', so that the file contents look like:

----
: PSK <generatedbase64key>
----

You need to copy the PSK and the configuration to the other nodes.
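
After distributing the configuration and the PSK, reload strongswan and check
that the connections are installed; a hedged example using the classic `ipsec`
helper shipped with the Debian package:

----
ipsec restart
ipsec statusall
----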