[[chapter_pvesdn]]
Software Defined Network
========================
ifndef::manvolnum[]
:pve-toplevel:
endif::manvolnum[]

The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
virtual networks (VNets) at the datacenter level.

WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.

[[pvesdn_installation]]
Installation
------------

To enable the experimental SDN integration, you need to install the
`libpve-network-perl` and `ifupdown2` packages on every node:

----
apt update
apt install libpve-network-perl ifupdown2
----

After that, you need to add the following line to the end of the
`/etc/network/interfaces` configuration file, so that the SDN configuration
gets included and activated:

----
source /etc/network/interfaces.d/*
----
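
For example, one way to append that line (a sketch; first check that your
`/etc/network/interfaces` does not already contain it):

----
echo 'source /etc/network/interfaces.d/*' >> /etc/network/interfaces
----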

Basic Overview
--------------

The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones; a zone is its own virtually separated
network area. A 'VNet' is a virtual network that belongs to a zone. Depending
on the type or plugin the zone uses, it can behave differently and offer
different features, advantages, or disadvantages. Normally a 'VNet' appears
as a common Linux bridge with either a VLAN or 'VXLAN' tag, but some can also
use layer 3 routing for control. The 'VNets' are deployed locally on each
node, after the configuration was committed from the cluster-wide datacenter
SDN administration interface.

Main configuration
~~~~~~~~~~~~~~~~~~

The configuration is done at the datacenter (cluster-wide) level and saved in
configuration files located in the shared configuration file system at
`/etc/pve/sdn`.

On the web interface, the SDN feature has 3 main sections for configuration:

* SDN: An overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: Create virtual network bridges and manage subnets

In addition to this, the following options are offered:

* Controller: For complex setups, to control layer 3 routing

* Subnets: Used to define IP networks on VNets

* IPAM: Enables the use of external tools for IP address management (guest IPs)

* DNS: Define a DNS server API for registering virtual guests' hostnames and
IP addresses

[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of zones
on different nodes.

The 'Apply' button is used to push and reload the local configuration on all
cluster nodes.


[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~

After applying the configuration through the main SDN web-interface panel,
the local network configuration is generated on each node in
`/etc/network/interfaces.d/sdn` and reloaded with ifupdown2.
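
A minimal way to inspect the generated configuration and trigger the reload
by hand (assuming `ifupdown2` is installed, as described above):

----
cat /etc/network/interfaces.d/sdn
ifreload -a
----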

You can monitor the status of local zones and VNets through the main tree.

[[pvesdn_config_zone]]
Zones
-----

A zone defines a virtually separated network.

It can use different technologies for separation:

* VLAN: Virtual LANs are the classic method to sub-divide a LAN

* QinQ: stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: layer 2 VXLAN

* Simple: Isolated bridge, simple layer 3 routing bridge (NAT)

* bgp-evpn: VXLAN using layer 3 Border Gateway Protocol routing

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, in order to restrict users
to use only a specific zone and only the VNets in that zone.

Common options
~~~~~~~~~~~~~~

The following options are available for all zone types:

nodes:: Deploy and allow to use the VNets configured for this zone only on
these nodes.

ipam:: Optional. Use an IPAM tool to manage the IPs in this zone.

dns:: Optional. DNS API server.

reversedns:: Optional. Reverse DNS API server.

dnszone:: Optional. DNS domain name. Used to register hostnames, such as
`<hostname>.<domain>`. The DNS zone must already exist on the DNS server.
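
A zone definition combining several of these options could look like this (a
sketch in the same notation as the examples below; all values are
placeholders):

----
id: myzone
nodes: node1,node2
ipam: pve
dnszone: mydomain.com
----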


[[pvesdn_zone_plugin_simple]]
Simple Zones
~~~~~~~~~~~~

This is the simplest plugin. It will create an isolated VNet bridge.
This bridge is not linked to a physical interface, and VM traffic is only
local to the node(s).
It can also be used in NAT or routed setups.
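
For example, an isolated test network needs nothing more than this (a sketch;
the ID is a placeholder):

----
id: simple1
----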

[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This plugin reuses an existing local Linux or OVS bridge, and manages the
VLANs on it.
The benefit of using the SDN module is that you can create different zones
with specific VNet VLAN tags, and restrict virtual machines to separated
zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already configured on *each*
local node.

[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the
so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local, VLAN-aware bridge that is already configured on each local
node

service vlan:: The main VLAN tag of this zone

service vlan protocol:: Allows you to define an 802.1q (default) or 802.1ad
service VLAN type.

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, you must reduce the MTU to `1496` if your physical
interface MTU is `1500`.

[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin establishes a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within
layer 4 UDP datagrams, using `4789` as the default destination port. You can,
for example, create a private IPv4 VXLAN network on top of public internet
network nodes.
This is a layer 2 tunnel only; no routing between different VNets is possible.

Each VNet will use a specific VXLAN ID from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of IP addresses of all nodes through which you
want to communicate. These can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.

[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all the supported plugins.

BGP-EVPN allows one to create a routable layer 3 network. The VNet of an EVPN
zone can have an anycast IP address and/or MAC address. The bridge IP is the
same on each node, meaning a virtual guest can use this address as its
gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN tag:: This is a VXLAN-ID used for the routing interconnect between
VNets; it must be different than the VXLAN-IDs of the VNets.

controller:: An EVPN controller needs to be defined first (see controller
plugins section).

VNet MAC address:: A unique, anycast MAC address for all VNets in this zone.
Will be auto-generated if not defined.

Exit Nodes:: This is used if you want to define some {pve} nodes as exit
gateways from the EVPN network, through the real network. The configured
nodes will announce a default route in the EVPN network.

MTU:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than the maximal MTU of the outgoing physical interface.


[[pvesdn_config_vnet]]
VNets
-----

A `VNet` is, in its basic form, a Linux bridge that will be deployed locally
on the node and used for virtual machine communication.

VNet properties are:

ID:: An 8 character ID to name and identify a VNet

Alias:: Optional longer name, if the ID isn't enough

Zone:: The associated zone for this VNet

Tag:: The unique VLAN or VXLAN ID

VLAN Aware:: Enables adding an extra VLAN tag in the virtual machine or
container's vNIC configuration, or allows the guest OS to manage the VLAN tag.

[[pvesdn_config_subnet]]
Subnets
~~~~~~~

A subnetwork (subnet) allows you to define a specific IP network (IPv4 or
IPv6). For each VNet, you can define one or more subnets.

A subnet can be used to:

* restrict the IP addresses you can define on a specific VNet
* assign routes/gateways on a VNet in layer 3 zones
* enable SNAT on a VNet in layer 3 zones
* auto assign IPs on virtual guests (VM or CT) through an IPAM plugin
* DNS registration through DNS plugins

If an IPAM server is associated with the subnet's zone, the subnet prefix
will automatically be registered in the IPAM.

Subnet properties are:

ID:: A CIDR network address, for example 10.0.0.0/8

Gateway:: The IP address of the network's default gateway. On layer 3 zones
(simple/EVPN plugins), it will be deployed on the VNet.

SNAT:: Optional. Enable SNAT for layer 3 zones (simple/EVPN plugins), for
this subnet. The subnet's source IP will be NATted to the server's outgoing
interface/IP. On EVPN zones, this is only done on EVPN gateway-nodes.

Dnszoneprefix:: Optional. Add a prefix to the domain registration, like
<hostname>.prefix.<domain>

[[pvesdn_config_controllers]]
Controllers
-----------

Some zone types need an external controller to manage the VNet control-plane.
Currently this is only required for the `bgp-evpn` zone plugin.

[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones:

----
apt install frr frr-pythontools
----

Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end
up accidentally breaking global routing, or getting broken by it.

peers:: An IP list of all nodes through which you want to communicate for the
EVPN (these could also be external nodes or route reflector servers)

[[pvesdn_controller_plugin_BGP]]
BGP Controller
~~~~~~~~~~~~~~

The BGP controller is not used directly by a zone.
You can use it to configure FRR to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node, so doing
EBGP.

Configuration options:

node:: The node of this BGP controller

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number from the range (64512 - 65534) or (4200000000 - 4294967294), as
otherwise you could end up accidentally breaking global routing, or getting
broken by it.

peers:: A list of peer IP addresses you want to communicate with, for the
underlying BGP network.

ebgp:: If your peer's remote-AS is different, this enables EBGP.

loopback:: If you want to use a loopback or dummy interface as the source for
the EVPN network (for multipath).

ebgp-multihop:: If the peers are not directly connected or they use loopback,
you can increase the number of hops to reach them.
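
The examples at the end of this chapter do not cover the BGP controller, so
here is a hypothetical sketch in the same notation (node, ASN, and peer
addresses are placeholders):

----
id: bgpnode1
node: node1
asn: 65001
peers: 192.168.0.253,192.168.0.252
ebgp: yes
----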

[[pvesdn_config_ipam]]
IPAMs
-----

IPAM (IP Address Management) tools are used to manage/assign the IP addresses
of guests on the network. It can, for example, be used to find free IP
addresses when you create a VM or CT (not yet implemented).

An IPAM can be associated with one or more zones, to provide IP addresses for
all subnets defined in those zones.

[[pvesdn_ipam_plugin_pveipam]]
{pve} IPAM plugin
~~~~~~~~~~~~~~~~~

This is the default internal IPAM for your {pve} cluster, if you don't have
external IPAM software.

[[pvesdn_ipam_plugin_phpipam]]
phpIPAM plugin
~~~~~~~~~~~~~~

https://phpipam.net/

You need to create an application in phpIPAM, and add an API token with admin
permission.

phpIPAM properties are:

url:: The REST-API endpoint: `http://phpipam.domain.com/api/<appname>/`

token:: An API access token

section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
installations use `sectionid=1` for customers.
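
Put together, an entry could look like this (a sketch; URL, token, and
section are placeholders for your own installation):

----
url: http://phpipam.domain.com/api/pve/
token: 0123456789abcdef
section: 1
----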

[[pvesdn_ipam_plugin_netbox]]
NetBox IPAM plugin
~~~~~~~~~~~~~~~~~~

NetBox is an IP address management (IPAM) and datacenter infrastructure
management (DCIM) tool. See the source code repository for details:
https://github.com/netbox-community/netbox

You need to create an API token in NetBox to use it:
https://netbox.readthedocs.io/en/stable/api/authentication

NetBox properties are:

url:: The REST API endpoint: `http://yournetbox.domain.com/api`

token:: An API access token
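
Again as a sketch, with placeholder values:

----
url: http://yournetbox.domain.com/api
token: 0123456789abcdef
----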

[[pvesdn_config_dns]]
DNS
---

The DNS plugin in {pve} SDN is used to define a DNS API server for the
registration of your hostnames and IP addresses. A DNS configuration is
associated with one or more zones, to provide DNS registration for all the
subnet IPs configured for a zone.

[[pvesdn_dns_plugin_powerdns]]
PowerDNS plugin
~~~~~~~~~~~~~~~

https://doc.powerdns.com/authoritative/http-api/index.html

You need to enable the web server and the API in your PowerDNS config:

----
api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081
----

PowerDNS properties are:

url:: The REST API endpoint: `http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost`

key:: An API access key

ttl:: The default TTL for records
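
Matching the server configuration above, a plugin entry could look like this
(a sketch; the hostname and TTL are placeholders):

----
url: http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
key: arandomgeneratedstring
ttl: 3600
----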


Examples
--------

[[pvesdn_setup_example_vlan]]
VLAN Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a VLAN zone named `myvlanzone':

----
id: myvlanzone
bridge: vmbr0
----

Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.

----
id: myvnet1
zone: myvlanzone
tag: 10
----

Apply the configuration through the main SDN panel, to create VNets locally
on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Then, you should be able to ping between both VMs over that network.
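
For example, from 'vm1', using the address configured on 'vm2' above:

----
ping 10.0.3.101
----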


[[pvesdn_setup_example_qinq]]
QinQ Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a QinQ zone named `qinqzone1' with service VLAN 20:

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----

Create another QinQ zone named `qinqzone2' with service VLAN 30:

----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----

Create a VNet named `myvnet1' with customer VLAN-ID 100 on the previously
created `qinqzone1' zone.

----
id: myvnet1
zone: qinqzone1
tag: 100
----

Create a second VNet named `myvnet2' with customer VLAN-ID 100 on the
previously created `qinqzone2' zone.

----
id: myvnet2
zone: qinqzone2
tag: 100
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----

Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, neither of the VMs 'vm1' or 'vm2' can ping
'vm3' or 'vm4', as they are in a different zone with a different service-VLAN.


[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
~~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create a VXLAN zone named `myvxlanzone', using a lower MTU to ensure the
extra 50 bytes of the VXLAN header can fit. Add all previously configured IPs
from the nodes to the peer address list.

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

----
id: myvnet1
zone: myvxlanzone
tag: 100000
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM (note the lower MTU):

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.
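
To also check that the lower MTU holds end-to-end, you can send a
maximum-sized, unfragmentable packet from 'vm1' (the 1450 byte MTU minus 28
bytes of IPv4 and ICMP headers leaves a 1422 byte payload):

----
ping -M do -s 1422 10.0.3.101
----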


[[pvesdn_setup_example_evpn]]
EVPN Setup Example
~~~~~~~~~~~~~~~~~~

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create an EVPN controller, using a private ASN number and the above node
addresses as peers.

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
----

Create an EVPN zone named `myevpnzone', using the previously created EVPN
controller. Define 'node1' and 'node2' as exit nodes.

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
vnet mac address: 32:F4:05:FE:6C:0A
exitnodes: node1,node2
----

Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone'.

----
id: myvnet1
zone: myevpnzone
tag: 11000
----

Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway on `myvnet1'.

----
subnet: 10.0.1.0/24
gateway: 10.0.1.1
----

Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone',
with a different IPv4 CIDR network.

----
id: myvnet2
zone: myevpnzone
tag: 12000
----

Create a different subnet 10.0.2.0/24 with 10.0.2.1 as gateway on `myvnet2'.

----
subnet: 10.0.2.0/24
gateway: 10.0.2.1
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node, and generate the FRR configuration.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 # the gateway IP defined on myvnet1
        mtu 1450
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 # the gateway IP defined on myvnet2
        mtu 1450
----

Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from 'vm2' on the non-gateway 'node3', the packet
will go to the configured 'myvnet2' gateway, then will be routed to the exit
nodes ('node1' or 'node2'), and from there it will leave those nodes over the
default gateway configured on node1 or node2.

NOTE: You, of course, need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks to node1 and node2 on your external gateway, so that
the public network can reply back.

If you have configured an external BGP router, the BGP-EVPN routes
(10.0.1.0/24 and 10.0.2.0/24 in this example) will be announced dynamically.
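
To inspect the EVPN routes that FRR has learned on a node, you can query the
FRR shell (shipped with the `frr` package installed earlier), for example:

----
vtysh -c "show bgp summary"
vtysh -c "show bgp l2vpn evpn"
----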